Fermionic T-duality in fermionic double space ††thanks: Work supported in part by the Serbian Ministry of Education, Science and Technological Development, under contract No. 171031.

B. Nikolić and B. Sazdović
Institute of Physics, University of Belgrade, P.O. Box 57, 11001 Belgrade, Serbia
e-mail: [email protected], [email protected]

Abstract

In this article we offer an interpretation of the fermionic T-duality of type II superstring theory in double space. We generalize the idea of double space by doubling the fermionic sector of the superspace. In such a doubled space fermionic T-duality is represented as a permutation of the fermionic coordinates $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ with the corresponding fermionic T-dual ones, $\vartheta_{\alpha}$ and $\bar{\vartheta}_{\alpha}$, respectively. Demanding that the T-dual transformation law has the same form as the initial one, we obtain the known form of the fermionic T-dual NS-R and R-R background fields. The fermionic T-dual NS-NS background fields are obtained under some assumptions. We conclude that only the symmetric part of the R-R field strength and the symmetric part of its fermionic T-dual contribute to the fermionic T-duality transformation of the dilaton field, and we analyze the dilaton field in fermionic double space. As a model we use the ghost-free action of type II superstring in pure spinor formulation, in the approximation of constant background fields, up to quadratic terms.

1 Introduction

Two theories T-dual to one another can be viewed as physically identical [1, 2]. T-duality is an important tool which shows the equivalence of different geometries and topologies. A practical T-dualization procedure was first introduced by Buscher [3] and is considered the standard one. There are also other frameworks in which T-dualization can be represented, and they should agree with the Buscher procedure.
One of them is the double space formalism, which was the subject of articles about twenty years ago [4, 5, 6, 7, 8]. Double space is spanned by coordinates $Z^{M}=(x^{\mu}\;\;\;y_{\mu})^{T}$ $(\mu=0,1,2,\dots,D-1)$, where $x^{\mu}$ and $y_{\mu}$ are the coordinates of the $D$-dimensional initial and T-dual space-time, respectively. Interest in this subject re-emerged recently with the papers [9, 10, 11, 12, 13], where T-duality along some subset of $d$ coordinates is considered as an $O(d,d)$ symmetry transformation, and [14, 15], where it is considered as a permutation of the $d$ initial with the corresponding $d$ T-dual coordinates. Until recently only T-duality along bosonic coordinates had been considered. Analyzing the gluon scattering amplitudes in $N=4$ super Yang-Mills theory, a new kind of T-dual symmetry, fermionic T-duality, was discovered [16, 17]. It is a part of the dual superconformal symmetry, which should be connected to integrability, and it is valid just at string tree level. Mathematically, fermionic T-duality is realized within the same procedure as the bosonic one, except that dualization is performed along fermionic variables. So, it can be considered as a generalization of Buscher T-duality. Fermionic T-duality consists in certain non-local redefinitions of the fermionic variables of the superstring, mapping a supersymmetric background to another supersymmetric background. In Refs.[16, 17] it was shown that fermionic T-duality maps gluon scattering amplitudes in the original theory to an object very close to Wilson loops in the dual one: the calculation of gluon scattering amplitudes in the initial theory is equivalent to the calculation of Wilson loops in the fermionic T-dual theory. Generalizing the idea of double space to the fermionic case, we obtain a fermionic double space in which fermionic T-duality is a symmetry [18] which exchanges scattering amplitudes and Wilson loops.
Fermionic double space can also be successfully applied to the random lattice [19], where doubling of the supercoordinates was done. The relation between fermionic T-duality and open string noncommutativity was considered in Ref.[20]. Let us explain our motivation for fermionic T-duality. It is well known that T-duality is an important feature in understanding M-theory. In fact, the five consistent superstring theories are connected by a web of T- and S-dualities. We are going to pay attention to T-duality, hoping that S-duality (which can also be understood as a transformation of the dilaton background field) can later be successfully incorporated into our procedure. If we start with any of the five consistent superstring theories and find all corresponding T-dual theories, we can reach any of the other four consistent superstring theories. But this is not enough to obtain a formulation of M-theory. We must construct one theory which contains the initial theory and all corresponding T-dual ones. In the bosonic case (which is substantially simpler than the supersymmetric one) we have succeeded in realizing such a program. In Refs.[14, 15] we doubled all bosonic coordinates and showed that such a theory contains the initial and all corresponding T-dual theories. We can connect any two of these theories just by replacing some initial coordinates $x^{a}$ with the corresponding T-dual ones $y_{a}$. This is equivalent to T-dualization along the coordinates $x^{a}$. So, after introducing double space, T-duality ceases to be a transformation which connects two physically equivalent theories and becomes a symmetry transformation of the extended space with respect to the permutation group. We proved this in the bosonic string case both for constant background and for weakly curved background with linear dependence on the coordinates. Unfortunately, this is not enough for the construction of M-theory, because T-duality for superstrings is much more complicated than in the bosonic case [21].
In Ref.[22] we tried to extend such an approach to the type II theories. In fact, by doubling all bosonic coordinates we unified type IIA, type IIB, as well as type $II^{\star}$ [23] (obtained by T-dualization along the time-like direction) theories. There is an incompleteness in that approach. Doubling all bosonic coordinates, by simple permutations of initial with corresponding T-dual coordinates, we obtained all T-dual background fields except the T-dual R-R field strength $F^{\alpha\beta}$. To obtain ${}_{a}F^{\alpha\beta}$ (the field strength after T-dualization along coordinates $x^{a}$) we needed to introduce some additional assumptions. The explanation is that the R-R field strength $F^{\alpha\beta}$ appears coupled with the fermionic momenta $\pi_{\alpha}$ and $\bar{\pi}_{\alpha}$, along which we did not perform T-dualization, and consequently we did not double these variables. It is an analogue of the $ij$-term in the approach of Refs.[9, 10], where the $x^{i}$ coordinates are not doubled. Therefore, in the first step of our approach to the formulation of M-theory (unification of the type II theories) we must include T-dualization along the fermionic variables ($\pi_{\alpha}$ and $\bar{\pi}_{\alpha}$ in this particular case). This means that we should also double these fermionic variables. The present article represents a necessary step for understanding T-dualization along all fermionic coordinates in fermionic double space. We expect that the final step in the construction of M-theory will be the unification of all theories obtained after T-dualization along all bosonic and all fermionic variables [18, 19]. In that case we should double all coordinates in superspace, anticipating that some super permutation will connect any two of the five consistent supersymmetric string theories.
In this article we are going to double the fermionic sector of the type II theories, adding to the coordinates $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ their fermionic T-duals, $\vartheta_{\alpha}$ and $\bar{\vartheta}_{\alpha}$, where the index $\alpha$ counts the independent real components of the spinors, $\alpha=1,2,\dots,16$. Rewriting the T-dual transformation laws in terms of the double coordinates, $\Theta^{A}=(\theta^{\alpha},\vartheta_{\alpha})$ and $\bar{\Theta}^{A}=(\bar{\theta}^{\alpha},\bar{\vartheta}_{\alpha})$, we define the "fermionic generalized metric" ${\cal F}_{AB}$ and the generalized currents $\bar{\cal J}_{+A}$ and ${\cal J}_{-A}$. The permutation matrix ${\cal T}^{A}{}_{B}$ exchanges $\bar{\theta}^{\alpha}$ and $\theta^{\alpha}$ with their T-dual partners, $\bar{\vartheta}_{\alpha}$ and $\vartheta_{\alpha}$, respectively. From the requirement that the fermionic T-dual coordinates, ${}^{\star}\Theta^{A}={\cal T}^{A}{}_{B}\Theta^{B}$ and ${}^{\star}\bar{\Theta}^{A}={\cal T}^{A}{}_{B}\bar{\Theta}^{B}$, have the same transformation law as the initial ones, $\Theta^{A}$ and $\bar{\Theta}^{A}$, we obtain the expressions for the fermionic T-dual generalized metric, ${}^{\star}{\cal F}_{AB}=({\cal T}{\cal F}{\cal T})_{AB}$, and the T-dual currents, ${}^{\star}\bar{\cal J}_{+A}={\cal T}_{A}{}^{B}\bar{\cal J}_{+B}$ and ${}^{\star}{\cal J}_{-A}={\cal T}_{A}{}^{B}{\cal J}_{-B}$, in terms of the initial ones. These expressions yield the fermionic T-dual NS-R fields and R-R field strength. Expressions for the fermionic T-dual metric and Kalb-Ramond field are obtained separately under some assumptions. We conclude that only the symmetric part of the R-R field strength, $F_{s}^{\alpha\beta}=\frac{1}{2}(F^{\alpha\beta}+F^{\beta\alpha})$, and the symmetric part of its fermionic T-dual, ${}^{\star}F_{\alpha\beta}^{s}=\frac{1}{2}({}^{\star}F_{\alpha\beta}+{}^{\star}F_{\beta\alpha})$, contribute to the dilaton field transformation under fermionic T-duality.
We also investigate the dilaton field in double space.

2 Type II superstring and fermionic T-duality

In this section we will introduce the action of type II superstring theory in pure spinor formulation and perform fermionic T-duality [16, 17, 20] using the fermionic analogue of the Buscher rules [3].

2.1 Action and supergravity constraints

In this manuscript we use the action of type II superstring theory in pure spinor formulation [24] up to quadratic terms, with constant background fields. Here we will derive the final form of the action which will be exploited in the further analysis. It corresponds to the actions used in Refs.[25, 26, 27, 28]. The sigma model action for type II superstring of Ref.[29] is of the form $$S=S_{0}+V_{SG}\,,$$ (2.1) where $S_{0}$ is the action in the flat background $$S_{0}=\int_{\Sigma}d^{2}\xi\left(\frac{\kappa}{2}\eta^{mn}\eta_{\mu\nu}\partial_{m}x^{\mu}\partial_{n}x^{\nu}-\pi_{\alpha}\partial_{-}\theta^{\alpha}+\partial_{+}\bar{\theta}^{\alpha}\bar{\pi}_{\alpha}\right)+S_{\lambda}+S_{\bar{\lambda}}\,,$$ (2.2) and it is deformed by the integrated form of the massless type II supergravity vertex operator $$V_{SG}=\int_{\Sigma}d^{2}\xi(X^{T})^{M}A_{MN}\bar{X}^{N}\,.$$ (2.3) The vectors $X^{M}$ and $\bar{X}^{N}$ are defined as $$X^{M}=\left(\begin{array}[]{c}\partial_{+}\theta^{\alpha}\\ \Pi_{+}^{\mu}\\ d_{\alpha}\\ \frac{1}{2}N_{+}^{\mu\nu}\end{array}\right)\,,\quad\bar{X}^{M}=\left(\begin{array}[]{c}\partial_{-}\bar{\theta}^{\alpha}\\ \Pi_{-}^{\mu}\\ \bar{d}_{\alpha}\\ \frac{1}{2}\bar{N}_{-}^{\mu\nu}\end{array}\right),$$ (2.4) and the supermatrix $A_{MN}$ is of the form $$A_{MN}=\left(\begin{array}[]{cccc}A_{\alpha\beta}&A_{\alpha\nu}&E_{\alpha}{}^{\beta}&\Omega_{\alpha,\mu\nu}\\ A_{\mu\beta}&A_{\mu\nu}&\bar{E}_{\mu}^{\beta}&\Omega_{\mu,\nu\rho}\\ E^{\alpha}{}_{\beta}&E^{\alpha}_{\nu}&{\rm P}^{\alpha\beta}&C^{\alpha}{}_{\mu\nu}\\ \Omega_{\mu\nu,\beta}&\Omega_{\mu\nu,\rho}&\bar{C}_{\mu\nu}{}^{\beta}&S_{\mu\nu,\rho\sigma}\end{array}\right)\,,$$ (2.5)
where the notation and definitions are taken from Ref.[29]. The actions for the pure spinors, $S_{\lambda}$ and $S_{\bar{\lambda}}$, are free field actions, fully decoupled from the rest of the action $S_{0}$. The world sheet $\Sigma$ is parameterized by $\xi^{m}=(\xi^{0}=\tau\,,\xi^{1}=\sigma)$ and $\partial_{\pm}=\partial_{\tau}\pm\partial_{\sigma}$. The bosonic part of superspace is spanned by the coordinates $x^{\mu}$ ($\mu=0,1,2,\dots,9$), while the fermionic one is spanned by $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ $(\alpha=1,2,\dots,16)$. The variables $\pi_{\alpha}$ and $\bar{\pi}_{\alpha}$ are the canonically conjugated momenta to $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$, respectively. All spinors are Majorana-Weyl ones, which means that each of them has 16 independent real components. The matrix of superfields generally depends on $x^{\mu}$, $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$. The superfields $A_{\mu\nu}$, $\bar{E}_{\mu}{}^{\alpha}$, $E^{\alpha}{}_{\mu}$ and ${\rm P}^{\alpha\beta}$ are known as physical superfields, while the fields in the first column and first row are auxiliary superfields, because they can be expressed in terms of the physical ones [29]. The remaining ones, $\Omega_{\mu,\nu\rho}(\Omega_{\mu\nu,\rho})$, $C^{\alpha}{}_{\mu\nu}(\bar{C}_{\mu\nu}{}^{\alpha})$ and $S_{\mu\nu,\rho\sigma}$, are curvatures (field strengths) for the physical superfields.
The expanded form of the vertex operator (2.3) is [29] $$\displaystyle V_{SG}$$ $$\displaystyle=$$ $$\displaystyle\int d^{2}\xi\left[\partial_{+}\theta^{\alpha}A_{\alpha\beta}\partial_{-}\bar{\theta}^{\beta}+\partial_{+}\theta^{\alpha}A_{\alpha\mu}\Pi_{-}^{\mu}+\Pi_{+}^{\mu}A_{\mu\alpha}\partial_{-}\bar{\theta}^{\alpha}+\Pi_{+}^{\mu}A_{\mu\nu}\Pi_{-}^{\nu}\right.$$ (2.6) $$\displaystyle+$$ $$\displaystyle d_{\alpha}E^{\alpha}{}_{\beta}\partial_{-}\bar{\theta}^{\beta}+d_{\alpha}E^{\alpha}{}_{\mu}\Pi_{-}^{\mu}+\partial_{+}\theta^{\alpha}E_{\alpha}{}^{\beta}\bar{d}_{\beta}+\Pi_{+}^{\mu}E_{\mu}{}^{\beta}\bar{d}_{\beta}+d_{\alpha}{\rm P}^{\alpha\beta}\bar{d}_{\beta}$$ $$\displaystyle+$$ $$\displaystyle\frac{1}{2}N_{+}^{\mu\nu}\Omega_{\mu\nu,\beta}\partial_{-}\bar{\theta}^{\beta}+\frac{1}{2}N_{+}^{\mu\nu}\Omega_{\mu\nu,\rho}\Pi_{-}^{\rho}+\frac{1}{2}\partial_{+}\theta^{\alpha}\Omega_{\alpha,\mu\nu}\bar{N}_{-}^{\mu\nu}+\frac{1}{2}\Pi_{+}^{\mu}\Omega_{\mu,\nu\rho}\bar{N}_{-}^{\nu\rho}$$ $$\displaystyle+$$ $$\displaystyle\left.\frac{1}{2}N_{+}^{\mu\nu}\bar{C}_{\mu\nu}{}^{\beta}\bar{d}_{\beta}+\frac{1}{2}d_{\alpha}C^{\alpha}{}_{\mu\nu}\bar{N}_{-}^{\mu\nu}+\frac{1}{4}N_{+}^{\mu\nu}S_{\mu\nu,\rho\sigma}\bar{N}_{-}^{\rho\sigma}\right]\,.$$ The supergravity constraints are the conditions obtained as a consequence of nilpotency and (anti)holomorphicity of the BRST operators $Q=\int\lambda^{\alpha}d_{\alpha}$ and $\bar{Q}=\int\bar{\lambda}^{\alpha}\bar{d}_{\alpha}$, where $\lambda^{\alpha}$ and $\bar{\lambda}^{\alpha}$ are pure spinors and $d_{\alpha}$ and $\bar{d}_{\alpha}$ are independent variables. Let us discuss the choice of background fields satisfying the superspace equations of motion in the context of the supergravity constraints, which are explained in detail for the pure spinor formalism in Refs.[32, 29]. In order to implement T-duality, many restrictions should be imposed.
For example, in the bosonic case one should assume the existence of Killing vectors, which in fact means that the background fields do not depend on suitably selected coordinates. The idea is to avoid dependence on the coordinate $x^{\mu}$ and allow only dependence on the $\sigma$ and $\tau$ derivatives of the coordinates, ${\dot{x}}^{\mu}$ and $x^{\prime\mu}$. The case with explicit dependence on the coordinate requires particular attention and has been considered in Ref.[30]. Similar simplifications must be imposed in considerations of the non-commutativity of the coordinates [31, 30]. A similar situation occurs in the supersymmetric case. In order to perform fermionic T-duality we must avoid explicit dependence of the background fields on the fermionic coordinates $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ (the fermionic coordinates are Killing spinors) and allow only dependence on the $\sigma$ and $\tau$ derivatives of these coordinates. The assumption that Killing spinors exist implies that the auxiliary superfields should be taken to be zero, as can be seen from Eq.(5.5) of Ref.[29]. The right-hand side of the equations of motion for the background fields (see for example [33]) is the energy-momentum tensor, which is generically quadratic in the field strengths. In our case the physical superfields $G_{\mu\nu}$, $B_{\mu\nu}$, $\Phi$, $\Psi_{\mu}^{\alpha}$ and $\bar{\Psi}^{\alpha}_{\mu}$ are constant (they do not depend on $x^{\mu}$, $\theta^{\alpha}$, $\bar{\theta}^{\alpha}$) and the corresponding field strengths, $\Omega_{\mu,\nu\rho}(\Omega_{\mu\nu,\rho})$, $C^{\alpha}{}_{\mu\nu}(\bar{C}_{\mu\nu}{}^{\alpha})$ and $S_{\mu\nu,\rho\sigma}$, are zero. The only nontrivial contribution of the quadratic terms in the equations of motion comes from the constant field strength ${\rm P}^{\alpha\beta}$. It can induce back-reaction on the background fields.
In order to analyze this issue we will use the relations from Eq.(3.6) of Ref.[29] labeled by $(\frac{1}{2},\frac{3}{2},\frac{3}{2})$, $$D_{\alpha}{\rm P}^{\beta\gamma}-\frac{1}{4}(\Gamma^{\mu\nu})_{\alpha}{}^{\beta}\bar{C}_{\mu\nu}{}^{\gamma}=0\,,\quad\bar{D}_{\alpha}{\rm P}^{\beta\gamma}-\frac{1}{4}(\Gamma^{\mu\nu})_{\alpha}{}^{\gamma}C^{\beta}{}_{\mu\nu}=0\,,$$ (2.7) in which the derivative of ${\rm P}^{\alpha\beta}$ appears. Here $$D_{\alpha}=\frac{\partial}{\partial\theta^{\alpha}}+\frac{1}{2}(\Gamma^{\mu}\theta)_{\alpha}\frac{\partial}{\partial x^{\mu}}\,,\quad\bar{D}_{\alpha}=\frac{\partial}{\partial\bar{\theta}^{\alpha}}+\frac{1}{2}(\Gamma^{\mu}\bar{\theta})_{\alpha}\frac{\partial}{\partial x^{\mu}}\,,$$ (2.8) are the superspace covariant derivatives, and $C^{\alpha}{}_{\mu\nu}$ and $\bar{C}_{\mu\nu}{}^{\alpha}$ are the field strengths for the gravitino fields $\Psi^{\alpha}_{\mu}$ and $\bar{\Psi}_{\mu}^{\alpha}$, respectively. In order to perform fermionic T-dualization along all fermionic directions, $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$, we assume that they are Killing spinors, which means $$\frac{\partial{\rm P}^{\beta\gamma}}{\partial\theta^{\alpha}}=\frac{\partial{\rm P}^{\beta\gamma}}{\partial\bar{\theta}^{\alpha}}=0\,.$$ (2.9) Taking into account that the gravitino fields, $\Psi^{\alpha}_{\mu}$ and $\bar{\Psi}^{\alpha}_{\mu}$, are constant (the corresponding field strengths are zero), from the equations (2.7) it follows that $$(\Gamma^{\mu})_{\alpha\delta}\partial_{\mu}{\rm P}^{\beta\gamma}=0\,.$$ (2.10) Note that this is a more general condition than the equation of motion for the R-R field strength, $(\Gamma^{\mu})_{\alpha\beta}\partial_{\mu}{\rm P}^{\beta\gamma}=0$, given in Eq.(3.11) of Ref.[29], where there is a summation over spinor indices. Our choice of constant ${\rm P}^{\alpha\beta}$ is consistent with this condition. It is a known fact that even a constant R-R field strength produces back-reaction on the background fields.
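To spell out the step from (2.7) to (2.10): with constant gravitini the field strengths $\bar{C}_{\mu\nu}{}^{\gamma}$ and $C^{\beta}{}_{\mu\nu}$ vanish, so the first relation of (2.7), together with the Killing spinor condition (2.9) and the explicit form (2.8) of $D_{\alpha}$, reduces to

$$0=D_{\alpha}{\rm P}^{\beta\gamma}=\frac{\partial{\rm P}^{\beta\gamma}}{\partial\theta^{\alpha}}+\frac{1}{2}(\Gamma^{\mu})_{\alpha\delta}\theta^{\delta}\,\partial_{\mu}{\rm P}^{\beta\gamma}=\frac{1}{2}(\Gamma^{\mu})_{\alpha\delta}\theta^{\delta}\,\partial_{\mu}{\rm P}^{\beta\gamma}\,.$$

Since $\theta^{\delta}$ is arbitrary, each coefficient must vanish separately, which is precisely (2.10); the analogous argument with $\bar{D}_{\alpha}$ gives the same condition.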
In order to cancel non-quadratic terms originating from the back-reaction, the constant R-R field strength must satisfy additional conditions: $AdS_{5}\times S_{5}$ coset geometry or the self-duality condition. Under these assumptions there exists the solution $$\Pi_{\pm}^{\mu}\to\partial_{\pm}x^{\mu}\,,\quad d_{\alpha}\to\pi_{\alpha}\,,\quad\bar{d}_{\alpha}\to\bar{\pi}_{\alpha}\,,$$ (2.11) and the only nontrivial superfields take the form $$A_{\mu\nu}=\kappa(\frac{1}{2}g_{\mu\nu}+B_{\mu\nu})\,,\quad E^{\alpha}_{\nu}=-\Psi^{\alpha}_{\nu}\,,\quad\bar{E}_{\mu}^{\alpha}=\bar{\Psi}_{\mu}^{\alpha}\,,\quad{\rm P}^{\alpha\beta}=\frac{2}{\kappa}P^{\alpha\beta}=\frac{2}{\kappa}e^{\frac{\Phi}{2}}F^{\alpha\beta}\,,$$ (2.12) where $g_{\mu\nu}$ is a symmetric and $B_{\mu\nu}$ an antisymmetric tensor. The final form of the vertex operator under these assumptions is $$\displaystyle V_{SG}=\int_{\Sigma}d^{2}\xi\left[\kappa(\frac{1}{2}g_{\mu\nu}+B_{\mu\nu})\partial_{+}x^{\mu}\partial_{-}x^{\nu}-\pi_{\alpha}\Psi^{\alpha}_{\mu}\partial_{-}x^{\mu}+\partial_{+}x^{\mu}\bar{\Psi}^{\alpha}_{\mu}\bar{\pi}_{\alpha}+\frac{2}{\kappa}\pi_{\alpha}P^{\alpha\beta}\bar{\pi}_{\beta}\right]\,.$$ (2.13) Consequently, the action $S$ is of the form $$\displaystyle S=\kappa\int_{\Sigma}d^{2}\xi\left[\partial_{+}x^{\mu}\Pi_{+\mu\nu}\partial_{-}x^{\nu}+\frac{1}{4\pi\kappa}\Phi R^{(2)}\right]$$ $$\displaystyle+$$ $$\displaystyle\int_{\Sigma}d^{2}\xi\left[-\pi_{\alpha}\partial_{-}(\theta^{\alpha}+\Psi^{\alpha}_{\mu}x^{\mu})+\partial_{+}(\bar{\theta}^{\alpha}+\bar{\Psi}^{\alpha}_{\mu}x^{\mu})\bar{\pi}_{\alpha}+\frac{2}{\kappa}\pi_{\alpha}P^{\alpha\beta}\bar{\pi}_{\beta}\right]\,,$$ (2.14) where $G_{\mu\nu}=\eta_{\mu\nu}+g_{\mu\nu}$ and $$\Pi_{\pm\mu\nu}=B_{\mu\nu}\pm\frac{1}{2}G_{\mu\nu}\,.$$ (2.15) All terms containing pure spinors vanish because the curvatures are zero under our assumption that the physical superfields are constant.
The actions $S_{\lambda}$ and $S_{\bar{\lambda}}$ are fully decoupled from the rest of the action and can be neglected in the further analysis. The action, in its final form, is ghost independent. Here we work both with type IIA and type IIB superstring theory. The difference is in the chirality of the NS-R background fields and the content of the R-R sector. In the NS-R sector there are two gravitino fields $\Psi^{\alpha}_{\mu}$ and $\bar{\Psi}^{\alpha}_{\mu}$, which are Majorana-Weyl spinors of opposite chirality in type IIA and of the same chirality in type IIB theory. The same holds for the pairs of spinors $(\theta^{\alpha},\bar{\theta}^{\alpha})$ and $(\pi_{\alpha},\bar{\pi}_{\alpha})$. The R-R field strength $F^{\alpha\beta}$ is expressed in terms of the antisymmetric tensors $F_{(k)}=F_{\mu_{1}\mu_{2}\dots\mu_{k}}$ [1] $$F^{\alpha\beta}=\sum_{k=0}^{D}\frac{1}{k!}F_{(k)}(C\Gamma_{(k)})^{\alpha\beta}\,,\quad\left[\Gamma_{(k)}^{\alpha\beta}=(\Gamma^{[\mu_{1}\dots\mu_{k}]})^{\alpha\beta}\right]$$ (2.16) where $$\Gamma^{[\mu_{1}\mu_{2}\dots\mu_{k}]}\equiv\Gamma^{[\mu_{1}}\Gamma^{\mu_{2}}\dots\Gamma^{\mu_{k}]}\,,$$ (2.17) is the basis of completely antisymmetrized products of gamma matrices and $C$ is the charge conjugation operator. For more technical details regarding gamma matrices see the first reference in [1]. The R-R field strength satisfies the chirality condition $\Gamma^{11}F=\pm F\Gamma^{11}$, where $\Gamma^{11}$ is the product of all gamma matrices in $D=10$ dimensional space-time. The sign $+$ corresponds to type IIA, while the sign $-$ corresponds to type IIB superstring theory. Consequently, type IIA theory contains only even rank tensors $F_{(k)}$, while type IIB contains only odd rank ones. For type IIA the independent tensors are $F_{(0)}$, $F_{(2)}$ and $F_{(4)}$, while the independent tensors for type IIB are $F_{(1)}$, $F_{(3)}$ and the self-dual part of $F_{(5)}$.

2.2 Fixing the chiral gauge invariance

The fermionic part of the action (2.1) has the form of a first order theory.
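The selection of even versus odd rank tensors can be made explicit. Since $\Gamma^{11}$ anticommutes with every $\Gamma^{\mu}$, it commutes with even-rank and anticommutes with odd-rank antisymmetrized products (schematically, suppressing the charge conjugation matrix, whose precise behavior depends on the chosen representation):

$$\Gamma^{11}\Gamma^{[\mu_{1}\dots\mu_{k}]}=(-1)^{k}\,\Gamma^{[\mu_{1}\dots\mu_{k}]}\,\Gamma^{11}\,,$$

so the condition $\Gamma^{11}F=+F\Gamma^{11}$ projects the sum (2.16) onto even $k$ (type IIA), while $\Gamma^{11}F=-F\Gamma^{11}$ projects onto odd $k$ (type IIB).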
We want to eliminate the fermionic momenta and work with the action expressed in terms of coordinates and their derivatives. So, on the equations of motion for the fermionic momenta $\pi_{\alpha}$ and $\bar{\pi}_{\alpha}$, $$\pi_{\alpha}=-\frac{\kappa}{2}\partial_{+}\left(\bar{\theta}^{\beta}+\bar{\Psi}^{\beta}_{\mu}x^{\mu}\right)(P^{-1})_{\beta\alpha}\,,\quad\bar{\pi}_{\alpha}=\frac{\kappa}{2}(P^{-1})_{\alpha\beta}\partial_{-}\left(\theta^{\beta}+\Psi^{\beta}_{\mu}x^{\mu}\right)\,,$$ (2.18) the action gets the form $$\displaystyle S(\partial_{\pm}x,\partial_{-}\theta,\partial_{+}\bar{\theta})=\kappa\int_{\Sigma}d^{2}\xi\partial_{+}x^{\mu}\Pi_{+\mu\nu}\partial_{-}x^{\nu}+\frac{1}{4\pi}\int_{\Sigma}d^{2}\xi\Phi R^{(2)}$$ $$\displaystyle+$$ $$\displaystyle\frac{\kappa}{2}\int_{\Sigma}d^{2}\xi\partial_{+}\left(\bar{\theta}^{\alpha}+\bar{\Psi}^{\alpha}_{\mu}x^{\mu}\right)(P^{-1})_{\alpha\beta}\partial_{-}\left(\theta^{\beta}+\Psi^{\beta}_{\nu}x^{\nu}\right)$$ $$\displaystyle=$$ $$\displaystyle\kappa\int_{\Sigma}d^{2}\xi\partial_{+}x^{\mu}\left[\Pi_{+\mu\nu}+\frac{1}{2}\bar{\Psi}^{\alpha}_{\mu}(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\nu}\right]\partial_{-}x^{\nu}+\frac{1}{4\pi}\int_{\Sigma}d^{2}\xi\Phi R^{(2)}$$ $$\displaystyle+$$ $$\displaystyle\frac{\kappa}{2}\int_{\Sigma}d^{2}\xi\left[\partial_{+}\bar{\theta}^{\alpha}(P^{-1})_{\alpha\beta}\partial_{-}\theta^{\beta}+\partial_{+}\bar{\theta}^{\alpha}(P^{-1}\Psi)_{\alpha\nu}\partial_{-}x^{\nu}+\partial_{+}x^{\mu}(\bar{\Psi}P^{-1})_{\mu\alpha}\partial_{-}\theta^{\alpha}\right]\,.$$ (2.19) In the above action $\theta^{\alpha}$ appears only in the form $\partial_{-}\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ only in the form $\partial_{+}\bar{\theta}^{\alpha}$. This means that the theory has a local symmetry $$\delta\theta^{\alpha}=\varepsilon^{\alpha}(\sigma^{+})\,,\quad\delta\bar{\theta}^{\alpha}=\bar{\varepsilon}^{\alpha}(\sigma^{-})\,,\quad(\sigma^{\pm}=\tau\pm\sigma)\,.$$ (2.20) We will treat this symmetry within the BRST formalism.
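The elimination of the fermionic momenta is a completion of the square, which can be checked in a one-component commuting toy model (a sketch only: the actual $\pi_{\alpha}$, $\bar{\pi}_{\alpha}$ are Grassmann-odd spinor fields, so this tracks just the coefficient structure; the symbols `a`, `b`, `P` below are hypothetical scalar stand-ins for $\partial_{+}(\bar{\theta}+\bar{\Psi}x)$, $\partial_{-}(\theta+\Psi x)$ and the matrix $P^{\alpha\beta}$):

```python
import sympy as sp

# Hypothetical commuting one-component stand-ins for the Grassmann-odd variables.
kappa, P = sp.symbols('kappa P', positive=True)
pi, pib = sp.symbols('pi pib')   # fermionic momenta pi_alpha, bar{pi}_alpha
a, b = sp.symbols('a b')         # d_+(btheta + bPsi x), d_-(theta + Psi x)

# Fermionic part of the Lagrangian in (2.14): -pi b + a pib + (2/kappa) pi P pib
L = -pi*b + a*pib + (2/kappa)*pi*P*pib

# Equations of motion for the momenta, cf. (2.18)
sol = sp.solve([sp.diff(L, pi), sp.diff(L, pib)], [pi, pib], dict=True)[0]

# Substituting back completes the square: L -> (kappa/2) a P^{-1} b
L_sub = sp.simplify(L.subs(sol))
```

The solutions reproduce the structure of (2.18), and the substituted Lagrangian reproduces the $\frac{\kappa}{2}\,\partial_{+}\bar{\theta}\,(P^{-1})\,\partial_{-}\theta$ term of the action above.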
The corresponding BRST transformations are $$s\theta^{\alpha}=c^{\alpha}(\sigma^{+})\,,\quad s\bar{\theta}^{\alpha}=\bar{c}^{\alpha}(\sigma^{-})\,,$$ (2.21) where for each gauge parameter $\varepsilon^{\alpha}(\sigma^{+})$ and $\bar{\varepsilon}^{\alpha}(\sigma^{-})$ we introduced the ghost fields $c^{\alpha}(\sigma^{+})$ and $\bar{c}^{\alpha}(\sigma^{-})$, respectively. Here $s$ is the nilpotent BRST operator. To fix the gauge freedom we introduce the gauge fermion with ghost number $-1$ $$\Psi=\frac{\kappa}{2}\int d^{2}\xi\left[\bar{C}_{\alpha}\left(\partial_{+}\theta^{\alpha}+\frac{\alpha^{\alpha\beta}}{2}b_{+\beta}\right)+\left(\partial_{-}\bar{\theta}^{\alpha}+\frac{1}{2}\bar{b}_{-\beta}\alpha^{\beta\alpha}\right)C_{\alpha}\right]\,,$$ (2.22) where $\alpha^{\alpha\beta}$ is an arbitrary nonsingular matrix, $\bar{C}_{\alpha}$ and $C_{\alpha}$ are antighost fields, while $b_{+\alpha}$ and $\bar{b}_{-\alpha}$ are Nakanishi-Lautrup auxiliary fields which satisfy $$sC_{\alpha}=b_{+\alpha}\,,\quad s\bar{C}_{\alpha}=\bar{b}_{-\alpha}\,,\quad sb_{+\alpha}=0\,,\quad s\bar{b}_{-\alpha}=0\,.$$ (2.23) The BRST transformation of the gauge fermion $\Psi$ produces the gauge fixed and Faddeev-Popov actions $$\displaystyle s\Psi=S_{gf}+S_{FP}\,,$$ $$\displaystyle S_{gf}=\frac{\kappa}{2}\int d^{2}\xi\left[\bar{b}_{-\alpha}\partial_{+}\theta^{\alpha}+\partial_{-}\bar{\theta}^{\alpha}b_{+\alpha}+\bar{b}_{-\alpha}\alpha^{\alpha\beta}b_{+\beta}\right]\,,$$ $$\displaystyle S_{FP}=\frac{\kappa}{2}\int d^{2}\xi\left[\bar{C}_{\alpha}\partial_{+}c^{\alpha}+(\partial_{-}\bar{c}^{\alpha})C_{\alpha}\right]\,.$$ (2.24) The Faddeev-Popov action is decoupled from the rest and, consequently, it can be omitted in the further analysis.
On the equations of motion for the $b$-fields $$b_{+\alpha}=-(\alpha^{-1})_{\alpha\beta}\partial_{+}\theta^{\beta}\,,\quad\bar{b}_{-\alpha}=-\partial_{-}\bar{\theta}^{\beta}(\alpha^{-1})_{\beta\alpha}\,,$$ (2.25) we obtain the final form of the BRST gauge fixed action $$S_{gf}=-\frac{\kappa}{2}\int d^{2}\xi\partial_{-}\bar{\theta}^{\alpha}(\alpha^{-1})_{\alpha\beta}\partial_{+}\theta^{\beta}\,.$$ (2.26)

2.3 Fermionic T-duality

We will perform fermionic T-duality using the fermionic version of the Buscher procedure, similarly to Ref.[20], where we worked without chiral gauge fixing. After introducing $S_{gf}$ the action still has a global shift symmetry in the $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ directions. We introduce gauge fields $v^{\alpha}_{\pm}$ and $\bar{v}^{\alpha}_{\pm}$ and replace ordinary world-sheet derivatives with covariant ones $$\partial_{\pm}\theta^{\alpha}\to D_{\pm}\theta^{\alpha}\equiv\partial_{\pm}\theta^{\alpha}+v_{\pm}^{\alpha}\,,\quad\partial_{\pm}\bar{\theta}^{\alpha}\to D_{\pm}\bar{\theta}^{\alpha}\equiv\partial_{\pm}\bar{\theta}^{\alpha}+\bar{v}_{\pm}^{\alpha}\,.$$ (2.27) In order to make the fields $v_{\pm}^{\alpha}$ and $\bar{v}_{\pm}^{\alpha}$ unphysical, we add the following terms to the action $$S_{gauge}(\vartheta,v_{\pm},\bar{\vartheta},\bar{v}_{\pm})=\frac{1}{2}\kappa\int_{\Sigma}d^{2}\xi\bar{\vartheta}_{\alpha}(\partial_{+}v_{-}^{\alpha}-\partial_{-}v^{\alpha}_{+})+\frac{1}{2}\kappa\int_{\Sigma}d^{2}\xi(\partial_{+}\bar{v}_{-}^{\alpha}-\partial_{-}\bar{v}^{\alpha}_{+})\vartheta_{\alpha}\,,$$ (2.28) where $\vartheta_{\alpha}$ and $\bar{\vartheta}_{\alpha}$ are Lagrange multipliers.
The full gauge invariant action is of the form $$S_{inv}(x,\theta,\bar{\theta},\vartheta,\bar{\vartheta},v_{\pm},\bar{v}_{\pm})=S(\partial_{\pm}x,D_{-}\theta,D_{+}\bar{\theta})+S_{gf}(D_{-}\theta,D_{+}\bar{\theta})+S_{gauge}(\vartheta,\bar{\vartheta},v_{\pm},\bar{v}_{\pm})\,.$$ (2.29) Fixing $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ to zero, we obtain the gauge fixed action $$\displaystyle S_{fix}=\kappa\int_{\Sigma}d^{2}\xi\partial_{+}x^{\mu}\left[\Pi_{+\mu\nu}+\frac{1}{2}\bar{\Psi}^{\alpha}_{\mu}(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\nu}\right]\partial_{-}x^{\nu}+\frac{1}{4\pi}\int_{\Sigma}d^{2}\xi\Phi R^{(2)}$$ (2.30) $$\displaystyle+\frac{\kappa}{2}\int_{\Sigma}d^{2}\xi\left[\bar{v}_{+}^{\alpha}(P^{-1})_{\alpha\beta}v_{-}^{\beta}+\bar{v}_{+}^{\alpha}(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\nu}\partial_{-}x^{\nu}+\partial_{+}x^{\mu}\bar{\Psi}^{\alpha}_{\mu}(P^{-1})_{\alpha\beta}v_{-}^{\beta}-\bar{v}_{-}^{\alpha}(\alpha^{-1})_{\alpha\beta}v_{+}^{\beta}\right]$$ $$\displaystyle+\frac{\kappa}{2}\int_{\Sigma}d^{2}\xi\bar{\vartheta}_{\alpha}(\partial_{+}v_{-}^{\alpha}-\partial_{-}v^{\alpha}_{+})+\frac{\kappa}{2}\int_{\Sigma}d^{2}\xi(\partial_{+}\bar{v}_{-}^{\alpha}-\partial_{-}\bar{v}^{\alpha}_{+})\vartheta_{\alpha}\,.$$ Varying the above action with respect to the Lagrange multipliers $\vartheta_{\alpha}$ and $\bar{\vartheta}_{\alpha}$ we recover the initial theory, because $$\partial_{+}v^{\alpha}_{-}-\partial_{-}v^{\alpha}_{+}=0\Longrightarrow v_{\pm}^{\alpha}=\partial_{\pm}\theta^{\alpha}\,,\quad\partial_{+}\bar{v}^{\alpha}_{-}-\partial_{-}\bar{v}^{\alpha}_{+}=0\Longrightarrow\bar{v}_{\pm}^{\alpha}=\partial_{\pm}\bar{\theta}^{\alpha}\,.$$ (2.31) The equations of motion for $v_{\pm}^{\alpha}$ and $\bar{v}_{\pm}^{\alpha}$ give $$\bar{v}_{-}^{\alpha}=\partial_{-}\bar{\vartheta}_{\beta}\alpha^{\beta\alpha}\,,\quad\bar{v}_{+}^{\alpha}=\partial_{+}\bar{\vartheta}_{\beta}P^{\beta\alpha}-\partial_{+}x^{\mu}\bar{\Psi}^{\alpha}_{\mu}\,,$$ (2.32)
$$v_{+}^{\alpha}=-\alpha^{\alpha\beta}\partial_{+}\vartheta_{\beta}\,,\quad v_{-}^{\alpha}=-P^{\alpha\beta}\partial_{-}\vartheta_{\beta}-\Psi^{\alpha}_{\mu}\partial_{-}x^{\mu}\,.$$ (2.33) Substituting these expressions in the action $S_{fix}$ we obtain the fermionic T-dual action $$\displaystyle{}^{\star}S(\partial_{\pm}x,\partial_{-}\vartheta,\partial_{+}\bar{\vartheta})=\kappa\int_{\Sigma}d^{2}\xi\partial_{+}x^{\mu}\Pi_{+\mu\nu}\partial_{-}x^{\nu}+\frac{1}{4\pi}\int_{\Sigma}d^{2}\xi\;\;{}^{\star}\Phi R^{(2)}$$ (2.34) $$\displaystyle+\frac{\kappa}{2}\int_{\Sigma}d^{2}\xi\left[\partial_{+}\bar{\vartheta}_{\alpha}P^{\alpha\beta}\partial_{-}\vartheta_{\beta}-\partial_{+}x^{\mu}\bar{\Psi}^{\alpha}_{\mu}\partial_{-}\vartheta_{\alpha}+\partial_{+}\bar{\vartheta}_{\alpha}\Psi^{\alpha}_{\mu}\partial_{-}x^{\mu}-\partial_{-}\bar{\vartheta}_{\alpha}\alpha^{\alpha\beta}\partial_{+}\vartheta_{\beta}\right]\,.$$ It should have the same form as the initial gauge-fixed action, $$\displaystyle{}^{\star}S=\kappa\int_{\Sigma}d^{2}\xi\partial_{+}x^{\mu}\left[{}^{\star}\Pi_{+\mu\nu}+\frac{1}{2}{}^{\star}\bar{\Psi}_{\alpha\mu}({}^{\star}P^{-1})^{\alpha\beta}{}^{\star}\Psi_{\beta\nu}\right]\partial_{-}x^{\nu}+\frac{1}{4\pi}\int_{\Sigma}d^{2}\xi{}^{\star}\Phi R^{(2)}$$ $$\displaystyle+$$ $$\displaystyle\frac{\kappa}{2}\int_{\Sigma}d^{2}\xi\left[\partial_{+}\bar{\vartheta}_{\alpha}({}^{\star}P^{-1})^{\alpha\beta}\partial_{-}\vartheta_{\beta}+\partial_{+}x^{\mu}({}^{\star}\bar{\Psi}{}^{\star}P^{-1})_{\mu}{}^{\alpha}\partial_{-}\vartheta_{\alpha}+\partial_{+}\bar{\vartheta}_{\alpha}({}^{\star}P^{-1}{}^{\star}\Psi)^{\alpha}{}_{\nu}\partial_{-}x^{\nu}\right]$$ $$\displaystyle-$$ $$\displaystyle\frac{\kappa}{2}\int d^{2}\xi\partial_{-}\bar{\vartheta}_{\alpha}({}^{\star}\alpha^{-1})^{\alpha\beta}\partial_{+}\vartheta_{\beta}\,,$$ (2.36) and so we get $${}^{\star}\Psi_{\alpha\mu}=(P^{-1}\Psi)_{\alpha\mu}\,,\;{}^{\star}\bar{\Psi}_{\mu\alpha}=-(\bar{\Psi}P^{-1})_{\mu\alpha}\,,$$ (2.37)
$${}^{\star}P_{\alpha\beta}=(P^{-1})_{\alpha\beta}\,,\quad{}^{\star}\alpha_{\alpha\beta}=(\alpha^{-1})_{\alpha\beta}\,.$$ (2.38) From the condition $${}^{\star}\Pi_{+\mu\nu}+\frac{1}{2}{}^{\star}\bar{\Psi}_{\alpha\mu}\,({}^{\star}P^{-1})^{\alpha\beta}\,{}^{\star}\Psi_{\beta\nu}=\Pi_{+\mu\nu}\,,$$ (2.39) we read off the fermionic T-dual metric and Kalb-Ramond field $$\displaystyle{}^{\star}G_{\mu\nu}=G_{\mu\nu}+\frac{1}{2}\left[(\bar{\Psi}P^{-1}\Psi)_{\mu\nu}+(\bar{\Psi}P^{-1}\Psi)_{\nu\mu}\right]\,,$$ $$\displaystyle{}^{\star}B_{\mu\nu}=B_{\mu\nu}+\frac{1}{4}\left[(\bar{\Psi}P^{-1}\Psi)_{\mu\nu}-(\bar{\Psi}P^{-1}\Psi)_{\nu\mu}\right]\,.$$ (2.40) The dilaton transformation under fermionic T-duality will be presented in Section 4. Let us note that two successive dualizations give back the initial background fields. The T-dual transformation laws connect the initial and T-dual coordinates. We can obtain them by combining the different solutions of the equations of motion for $v_{\pm}^{\alpha}$ and $\bar{v}_{\pm}^{\alpha}$, (2.31) and (2.32)-(2.33), $$\partial_{-}\theta^{\alpha}\cong-P^{\alpha\beta}\partial_{-}\vartheta_{\beta}-\Psi^{\alpha}_{\mu}\partial_{-}x^{\mu}\,,\quad\partial_{+}\bar{\theta}^{\alpha}\cong\partial_{+}\bar{\vartheta}_{\beta}P^{\beta\alpha}-\partial_{+}x^{\mu}\bar{\Psi}^{\alpha}_{\mu}\,,$$ (2.41) $$\partial_{+}\theta^{\alpha}\cong-\alpha^{\alpha\beta}\partial_{+}\vartheta_{\beta}\,,\quad\partial_{-}\bar{\theta}^{\alpha}\cong\partial_{-}\bar{\vartheta}_{\beta}\alpha^{\beta\alpha}\,.$$ (2.42) Here the symbol $\cong$ denotes the T-duality relation.
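The statement that two successive dualizations restore the initial background fields can be verified numerically with random matrices (a sketch under toy assumptions: hypothetical sizes d = D = 4 stand in for the 16 spinor components and 10 spacetime dimensions, and index positions are flattened to plain matrix products):

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 4, 4  # toy sizes standing in for 16 spinor and 10 spacetime dimensions

P    = rng.normal(size=(d, d)) + d*np.eye(d)   # invertible stand-in for P^{alpha beta}
Psi  = rng.normal(size=(d, D))                 # Psi^alpha_mu
bPsi = rng.normal(size=(D, d))                 # bar{Psi}_mu^alpha
g    = rng.normal(size=(D, D)); G = g + g.T    # symmetric metric
b    = rng.normal(size=(D, D)); B = b - b.T    # antisymmetric Kalb-Ramond field

def dualize(G, B, Psi, bPsi, P):
    """Fermionic T-duality map of the background fields, Eqs. (2.37), (2.38), (2.40)."""
    Pinv = np.linalg.inv(P)
    M = bPsi @ Pinv @ Psi
    return (G + 0.5*(M + M.T),      # *G
            B + 0.25*(M - M.T),     # *B
            Pinv @ Psi,             # *Psi
            -bPsi @ Pinv,           # *bar{Psi}
            Pinv)                   # *P

fields  = dualize(G, B, Psi, bPsi, P)
fields2 = dualize(*fields)  # two successive dualizations return the initial fields
```

The second dualization flips the sign of the correction matrix $\bar{\Psi}P^{-1}\Psi$, so the shifts in ${}^{\star}G$ and ${}^{\star}B$ cancel exactly.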
From these relations we can obtain the inverse transformation rules $$\partial_{-}\vartheta_{\alpha}\cong-(P^{-1})_{\alpha\beta}\partial_{-}\theta^{\beta}-(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\mu}\partial_{-}x^{\mu}\,,\quad\partial_{+}\bar{\vartheta}_{\alpha}\cong\partial_{+}\bar{\theta}^{\beta}(P^{-1})_{\beta\alpha}+\partial_{+}x^{\mu}\bar{\Psi}^{\beta}_{\mu}(P^{-1})_{\beta\alpha}\,,$$ (2.43) $$\partial_{+}\vartheta_{\alpha}\cong-(\alpha^{-1})_{\alpha\beta}\partial_{+}\theta^{\beta}\,,\quad\partial_{-}\bar{\vartheta}_{\alpha}\cong\partial_{-}\bar{\theta}^{\beta}(\alpha^{-1})_{\beta\alpha}\,.$$ (2.44) Note that without the gauge fixing of subsection 2.2, instead of the expressions for $\bar{v}_{-}^{\alpha}$ and $v^{\alpha}_{+}$ (first relations of (2.32) and (2.33)), we would have only constraints on the T-dual variables, $\partial_{-}\bar{\vartheta}_{\alpha}=0$ and $\partial_{+}\vartheta_{\alpha}=0$. Consequently, the integration over $v_{\pm}^{\alpha}$ and $\bar{v}^{\alpha}_{\pm}$ would be singular and we would lose part of the T-dual transformations (2.42) and (2.44). 3 Fermionic T-dualization in fermionic double space In this section we will extend the meaning of the double space. We will introduce the double fermionic space by adding to the fermionic coordinates, $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$, the fermionic T-dual ones, $\vartheta_{\alpha}$ and $\bar{\vartheta}_{\alpha}$. Then we will show that fermionic T-dualization can be represented as a permutation of the appropriate fermionic coordinates and their T-dual partners. 3.1 Transformation laws in fermionic double space In the same way as the double bosonic coordinates were introduced [4, 14, 15], we double both fermionic coordinates as $$\Theta^{A}=\left(\begin{array}[]{c}\theta^{\alpha}\\ \vartheta_{\alpha}\end{array}\right)\,,\quad\bar{\Theta}^{A}=\left(\begin{array}[]{c}\bar{\theta}^{\alpha}\\ \bar{\vartheta}_{\alpha}\end{array}\right)\,.$$ (3.1) Each double coordinate has 32 real components.
In terms of the double fermionic coordinates the transformation laws, (2.41)-(2.44), can be rewritten in the form $$\partial_{-}\Theta^{A}\cong-\Omega^{AB}\left[{\cal F}_{BC}\partial_{-}\Theta^{C}+{\cal J}_{-B}\right]\,,\quad\partial_{+}\bar{\Theta}^{A}\cong\left[\partial_{+}\bar{\Theta}^{C}{\cal F}_{CB}+\bar{\cal J}_{+B}\right]\Omega^{BA}\,,$$ (3.2) $$\partial_{+}\Theta^{A}\cong-\Omega^{AB}{\cal A}_{BC}\partial_{+}\Theta^{C}\,,\quad\partial_{-}\bar{\Theta}^{A}\cong\partial_{-}\bar{\Theta}^{C}{\cal A}_{CB}\Omega^{BA}\,,$$ (3.3) where the "fermionic generalized metric" ${\cal F}_{AB}$ has the form $${\cal F}_{AB}=\left(\begin{array}[]{cc}(P^{-1})_{\alpha\beta}&0\\ 0&P^{\gamma\delta}\end{array}\right)\,,$$ (3.4) and $${\cal A}_{AB}=\left(\begin{array}[]{cc}(\alpha^{-1})_{\alpha\beta}&0\\ 0&\alpha^{\gamma\delta}\end{array}\right)\,.$$ (3.5) ${\cal F}_{AB}$ is a bosonic variable, but we call it fermionic because it appears in the fermionic T-duality transformations. The double currents, $\bar{\cal J}_{+A}$ and ${\cal J}_{-A}$, are fermionic variables of the form $$\bar{\cal J}_{+A}=\left(\begin{array}[]{c}(\bar{\Psi}P^{-1})_{\mu\alpha}\partial_{+}x^{\mu}\\ -\bar{\Psi}^{\alpha}_{\mu}\partial_{+}x^{\mu}\end{array}\right)\,,\quad{\cal J}_{-A}=\left(\begin{array}[]{c}(P^{-1}\Psi)_{\alpha\mu}\partial_{-}x^{\mu}\\ \Psi^{\alpha}_{\mu}\partial_{-}x^{\mu}\end{array}\right)\,,$$ (3.6) while the matrix $\Omega^{AB}$ is constant $$\Omega^{AB}=\left(\begin{array}[]{cc}0&1\\ 1&0\end{array}\right)\,,$$ (3.7) where the identity matrix is $16\times 16$. By straightforward calculation we can prove the relations $$\Omega^{2}=1\,,\quad\det{{\cal F}_{AB}}=1\,.$$ (3.8) Consistency of the transformation laws (3.2) produces $$(\Omega{\cal F})^{2}=1\,,\quad{\cal J}_{-}={\cal F}\Omega{\cal J}_{-}\,,\quad\bar{\cal J}_{+}=-\bar{\cal J}_{+}\Omega{\cal F}\,.$$ (3.9) 3.2 Double action It is well known that the equations of motion of the initial theory are Bianchi identities in the T-dual picture, and vice versa.
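The algebraic relations (3.8) and the first condition of (3.9) can be confirmed with a small numerical sketch. A random invertible matrix stands in for $P^{\alpha\beta}$, and the $16\times 16$ blocks are reduced to $4\times 4$ for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # stand-in for the 16x16 spinor blocks

# Random invertible matrix playing the role of the R-R field P^{alpha beta}
P = rng.normal(size=(n, n)) + 3.0 * np.eye(n)
P_inv = np.linalg.inv(P)

I = np.eye(n)
Z = np.zeros((n, n))

# Omega^{AB}: off-diagonal identity blocks, eq. (3.7)
Omega = np.block([[Z, I], [I, Z]])
# "Fermionic generalized metric" F_{AB} = diag(P^{-1}, P), eq. (3.4)
F = np.block([[P_inv, Z], [Z, P]])

assert np.allclose(Omega @ Omega, np.eye(2 * n))          # Omega^2 = 1
assert np.isclose(np.linalg.det(F), 1.0)                  # det F = 1
assert np.allclose(Omega @ F @ Omega @ F, np.eye(2 * n))  # (Omega F)^2 = 1
```

The last check works because $\Omega{\cal F}$ has the off-diagonal blocks $P$ and $P^{-1}$, so squaring it yields the identity, independently of the particular $P$.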
As a consequence of the identities $$\partial_{+}\partial_{-}\Theta^{A}-\partial_{-}\partial_{+}\Theta^{A}=0\,,\quad\partial_{+}\partial_{-}\bar{\Theta}^{A}-\partial_{-}\partial_{+}\bar{\Theta}^{A}=0\,,$$ (3.10) known as Bianchi identities, and the relations (3.2) and (3.3), we obtain the consistency conditions $$\partial_{+}({\cal F}_{AB}\partial_{-}\Theta^{B}+{\cal J}_{-A})-\partial_{-}({\cal A}_{AB}\partial_{+}\Theta^{B})=0\,,$$ (3.11) $$\partial_{-}(\partial_{+}\bar{\Theta}^{B}{\cal F}_{BA}+\bar{\cal J}_{+A})-\partial_{+}(\partial_{-}\bar{\Theta}^{B}{\cal A}_{BA})=0\,.$$ (3.12) The equations (3.11) and (3.12) are the equations of motion of the following action $$\displaystyle S_{double}(\Theta,\bar{\Theta})=$$ $$\displaystyle=$$ $$\displaystyle\frac{\kappa}{2}\int d^{2}\xi\left[\partial_{+}\bar{\Theta}^{A}{\cal F}_{AB}\partial_{-}\Theta^{B}+\bar{\cal J}_{+A}\partial_{-}\Theta^{A}+\partial_{+}\bar{\Theta}^{A}{\cal J}_{-A}-\partial_{-}\bar{\Theta}^{A}{\cal A}_{AB}\partial_{+}\Theta^{B}+L(x)\right]\,,$$ (3.13) where $L(x)$ is an arbitrary functional of the bosonic coordinates.
3.3 Fermionic T-dualization of type II superstring theory as a permutation of fermionic coordinates in double space In order to exchange $\theta^{\alpha}$ with $\vartheta_{\alpha}$ and $\bar{\theta}^{\alpha}$ with $\bar{\vartheta}_{\alpha}$, let us introduce the permutation matrix $${\cal T}^{A}{}_{B}=\left(\begin{array}[]{cc}0&1\\ 1&0\end{array}\right)\,,$$ (3.14) so that the double T-dual coordinates are $${}^{\star}\Theta^{A}={\cal T}^{A}{}_{B}\Theta^{B}\,,\quad{}^{\star}\bar{\Theta}^{A}={\cal T}^{A}{}_{B}\bar{\Theta}^{B}\,.$$ (3.15) We demand that the T-dual transformation laws for the double T-dual coordinates ${}^{\star}\Theta^{A}$ and ${}^{\star}\bar{\Theta}^{A}$ have the same form as for the initial ones $\Theta^{A}$ and $\bar{\Theta}^{A}$ (3.2) and (3.3) $$\partial_{-}{}^{\star}\Theta^{A}\cong-\Omega^{AB}\left[{}^{\star}{\cal F}_{BC}\partial_{-}{}^{\star}\Theta^{C}+{}^{\star}{\cal J}_{-B}\right]\,,\quad\partial_{+}{}^{\star}\bar{\Theta}^{A}\cong\left[\partial_{+}{}^{\star}\bar{\Theta}^{C}{}^{\star}{\cal F}_{CB}+{}^{\star}\bar{\cal J}_{+B}\right]\Omega^{BA}\,,$$ (3.16) $$\partial_{+}{}^{\star}\Theta^{A}\cong-\Omega^{AB}{{}^{\star}\cal A}_{BC}\partial_{+}{}^{\star}\Theta^{C}\,,\quad\partial_{-}{}^{\star}\bar{\Theta}^{A}\cong\partial_{-}{}^{\star}\bar{\Theta}^{C}{{}^{\star}\cal A}_{CB}\Omega^{BA}\,.$$ (3.17) Then the fermionic T-dual "generalized metric" ${}^{\star}{\cal F}_{AB}$ and the T-dual currents, ${}^{\star}\bar{\cal J}_{+A}$ and ${}^{\star}{\cal J}_{-A}$, with the help of (3.15) and (3.2), can be expressed in terms of the initial ones $${}^{\star}{\cal F}_{AB}={\cal T}_{A}{}^{C}{\cal F}_{CD}{\cal T}^{D}{}_{B}\,,\quad{}^{\star}\bar{\cal J}_{+A}={\cal T}_{A}{}^{B}\bar{\cal J}_{+B}\,,\quad{}^{\star}{\cal J}_{-A}={\cal T}_{A}{}^{B}{\cal J}_{-B}\,.$$ (3.18) The matrix ${\cal A}_{AB}$ transforms as $${}^{\star}{\cal A}_{AB}={\cal T}_{A}{}^{C}{\cal A}_{CD}{\cal T}^{D}{}_{B}=({\cal A}^{-1})_{AB}\,.$$ (3.19) Note that, as in the bosonic case, the double space action (3.13) has global
symmetry under the transformations (3.15) if the conditions (3.18) are satisfied. From the first relation in (3.18) we obtain the form of the fermionic T-dual R-R background field $${}^{\star}P_{\alpha\beta}=(P^{-1})_{\alpha\beta}\,,$$ (3.20) while from the second and third equations we obtain the form of the fermionic T-dual NS-R background fields $${}^{\star}\Psi_{\alpha\mu}=(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\mu}\,,\quad{}^{\star}\bar{\Psi}_{\alpha\mu}=-\bar{\Psi}^{\beta}_{\mu}(P^{-1})_{\beta\alpha}\,.$$ (3.21) The nonsingular matrix $\alpha^{\alpha\beta}$ transforms as $$({}^{\star}\alpha)_{\alpha\beta}=(\alpha^{-1})_{\alpha\beta}\,.$$ (3.22) The expressions (3.20)-(3.22) are in full agreement with the relations (2.37) and (2.38) obtained by the standard fermionic Buscher procedure. Consequently, we have shown that the permutation of fermionic coordinates defined in (3.14) and (3.15) completely reproduces the fermionic T-dual R-R and NS-R background fields. 3.4 Fermionic T-dual metric ${}^{\star}G_{\mu\nu}$ and Kalb-Ramond field ${}^{\star}B_{\mu\nu}$ The expression $\Pi_{+\mu\nu}+\frac{1}{2}\bar{\Psi}^{\alpha}_{\mu}(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\nu}$ appears in the action (2.2) coupled with $\partial_{\pm}x^{\mu}$, along which we do not T-dualize. It is an analogue of the $ij$-term of Refs. [9, 10], where the $x^{i}$ coordinates are not T-dualized, and of the $\alpha\beta$-term in [22], where the fermionic directions are not dualized. Taking into account the form of the doubled action (3.13), we suppose that the term $L(x)$ has the form $$L(x)=2\partial_{+}x^{\mu}\left(\Pi_{+\mu\nu}+{}^{\star}\Pi_{+\mu\nu}\right)\partial_{-}x^{\nu}\equiv\mathcal{L}+{}^{\star}\mathcal{L}\,,$$ (3.23) where $\Pi_{+\mu\nu}$ is defined in (2.15) and ${}^{\star}\Pi_{+\mu\nu}$ is its fermionic T-dual, which we are going to find.
This term should be invariant under the T-dual transformation $${}^{\star}{\mathcal{L}}={\mathcal{L}}+\Delta{\mathcal{L}}\,.$$ (3.24) Using the fact that two successive T-dualizations are the identity transformation, we obtain $${\mathcal{L}}={}^{\star}{\mathcal{L}}+{}^{\star}\Delta{\mathcal{L}}\,.$$ (3.25) Combining the last two relations we get $${}^{\star}\Delta{\mathcal{L}}=-\Delta{\mathcal{L}}\,.$$ (3.26) If $\Delta{\mathcal{L}}=2\partial_{+}x^{\mu}\Delta_{\mu\nu}\partial_{-}x^{\nu}$, we obtain the condition for $\Delta_{\mu\nu}$ $${}^{\star}\Delta_{\mu\nu}=-\,\Delta_{\mu\nu}\,.$$ (3.27) Using the relations (2.37) and (2.38) we realize that, up to a multiplicative constant, the combination $$\Delta_{\mu\nu}=\bar{\Psi}^{\alpha}_{\mu}(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\nu}\,,$$ (3.28) satisfies the condition (3.27). So, we conclude that $${}^{\star}\Pi_{+\mu\nu}=\Pi_{+\mu\nu}+c\bar{\Psi}^{\alpha}_{\mu}(P^{-1})_{\alpha\beta}\Psi^{\beta}_{\nu}\,,$$ (3.29) where $c$ is an arbitrary constant. For $c=\frac{1}{2}$ we obtain the relations (2.40). So, in the double space formulation the fermionic T-dual NS-NS background fields can be obtained, up to an arbitrary constant, under the assumption that two successive T-dualizations produce the initial action. 4 Dilaton field in double fermionic space The dilaton field transformation under fermionic T-duality was considered in [16]. Here we will discuss some new features of the dilaton transformation under fermionic T-duality as well as the dilaton field in fermionic double space.
Because the dilaton transformation has quantum origin we start with the path integral for the gauge fixed action given in Eq. (2.30) $$Z=\int d\bar{v}_{+}^{\alpha}d\bar{v}_{-}^{\alpha}dv_{+}^{\alpha}dv_{-}^{\alpha}d\bar{\vartheta}_{\alpha}d\vartheta_{\alpha}e^{i\;S_{fix}(v_{\pm},\bar{v}_{\pm},\partial_{\pm}\vartheta,\partial_{\pm}\bar{\vartheta})}\,.$$ (4.1) For the constant background case, after integration over the fermionic gauge fields $\bar{v}_{\pm}^{\alpha}$ and $v_{\pm}^{\alpha}$, we obtain the generating functional $Z$ in the form $$Z=\int d\bar{\vartheta}_{\alpha}d\vartheta_{\alpha}\det{\left[(P^{-1}\alpha^{-1})_{\alpha\beta}\right]}e^{i\;{}^{\star}S(\vartheta,\bar{\vartheta})}\,,$$ (4.2) where ${}^{\star}S(\vartheta,\bar{\vartheta})$ is the T-dual action given in Eq. (2.34). We are able to perform this integration thanks to the gauge fixing of subsection 2.2. Note that here we multiply by the determinants of $P^{-1}$ and $\alpha^{-1}$ (rather than by their inverses) because we integrate over the Grassmann fields $v_{\pm}^{\alpha}$ and $\bar{v}_{\pm}^{\alpha}$.
We can choose that $\det\alpha=1$, and the generating functional gets the form $$Z=\int d\bar{\vartheta}_{\alpha}d\vartheta_{\alpha}\det{\left[(P^{-1})_{\alpha\beta}\right]}e^{i\;{}^{\star}S(\vartheta,\bar{\vartheta})}\,.$$ (4.3) This produces the fermionic T-dual transformation of the dilaton field $${}^{\star}\Phi=\Phi+\ln{\det{\left[(P^{-1})_{\alpha\beta}\right]}}=\Phi-\ln\det P^{\alpha\beta}\,.$$ (4.4) Let us calculate $\det P^{\alpha\beta}$ using the expression $$(P{P_{s}^{-1}}P^{T})^{\alpha\beta}=P_{s}^{\alpha\beta}-P_{a}^{\alpha\gamma}(P_{s}^{-1})_{\gamma\delta}P_{a}^{\delta\beta}\,,$$ (4.5) where we introduce the symmetric and antisymmetric parts of the initial background fields $$P_{s}^{\alpha\beta}=\frac{1}{2}\left(P^{\alpha\beta}+P^{\beta\alpha}\right)\,,\quad P_{a}^{\alpha\beta}=\frac{1}{2}\left(P^{\alpha\beta}-P^{\beta\alpha}\right)\,,$$ (4.6) and similar expressions for the T-dual background fields, ${}^{\star}P^{s}_{\alpha\beta}$ and ${}^{\star}P^{a}_{\alpha\beta}$. Taking into account that $$(P\cdot{}^{\star}P)^{\alpha}{}_{\beta}=\delta^{\alpha}{}_{\beta}\,,$$ (4.7) we obtain $$P_{s}^{\alpha\gamma}\;\;{}^{\star}P^{s}_{\gamma\beta}+P_{a}^{\alpha\gamma}\;\;{}^{\star}P^{a}_{\gamma\beta}=\delta^{\alpha}{}_{\beta}\,,\quad P_{s}^{\alpha\gamma}{}^{\star}P^{a}_{\gamma\beta}+P_{a}^{\alpha\gamma}{}^{\star}P^{s}_{\gamma\beta}=0\,.$$ (4.8) From these two equations we obtain $${}^{\star}P^{s}_{\alpha\beta}=\left[(P_{s}-P_{a}P_{s}^{-1}P_{a})^{-1}\right]_{\alpha\beta}\,,$$ (4.9) and, consequently, we have $$(PP_{s}^{-1}P^{T})^{\alpha\beta}=\left[({}^{\star}P_{s})^{-1}\right]^{\alpha\beta}\,.$$ (4.10) Taking the determinant of both sides of the above equation we get $$\det P^{\alpha\beta}=\sqrt{\frac{\det P_{s}}{\det{}^{\star}P_{s}}}\,,$$ (4.11) which produces $${}^{\star}\Phi=\Phi-\ln\sqrt{\frac{\det P_{s}}{\det{}^{\star}P_{s}}}\,.$$ (4.12) Using the fact that $P^{\alpha\beta}=e^{\frac{\Phi}{2}}F^{\alpha\beta}$ and
${}^{\star}P^{\alpha\beta}=e^{\frac{{}^{\star}\Phi}{2}}{}^{\star}F^{\alpha\beta}$, the fermionic T-dual transformation law for the dilaton takes the form $${}^{\star}\Phi=\Phi-\ln\sqrt{e^{8\left(\Phi-{}^{\star}\Phi\right)}\frac{\det F_{s}}{\det{}^{\star}F_{s}}}\,,$$ (4.13) and finally we have $${}^{\star}\Phi=\Phi+\frac{1}{6}\ln\frac{\det F_{s}}{\det{}^{\star}F_{s}}\,.$$ (4.14) It is obvious that two successive T-dualizations act as the identity transformation $${}^{\star}{}^{\star}\Phi=\Phi\,.$$ (4.15) We can conclude that only the symmetric parts of the R-R field strengths contribute to the transformation of the dilaton field under fermionic T-duality. In type IIA superstring theory the R-R field strength $F^{\alpha\beta}$ contains the tensors $F^{A}_{0}$, $F^{A}_{\mu\nu}$ and $F^{A}_{\mu\nu\rho\lambda}$, while in type IIB $F^{\alpha\beta}$ contains $F^{B}_{\mu}$, $F^{B}_{\mu\nu\rho}$ and the self-dual part of $F^{B}_{\mu\nu\rho\lambda\omega}$. Using the conventions for gamma matrices from the appendix of the first reference in [1] (see Appendix A), we conclude that the symmetric part of $F^{\alpha\beta}$ in type IIA contains the scalar $F^{A}_{0}$ and the 2-rank tensor $F^{A}_{\mu\nu}$, while in type IIB superstring theory it contains the 1-rank tensor $F^{B}_{\mu}$ and the self-dual part of the 5-rank tensor $F^{B}_{\mu\nu\rho\lambda\omega}$. Let us write the path integral for the double action (3.13) $$Z_{double}=\int d\Theta^{A}d\bar{\Theta}^{A}e^{iS_{double}(\Theta,\bar{\Theta})}\,.$$ (4.16) Because $\det{\cal F}=1$ and $\det{\cal A}=1$ we find that the dilaton field in double space is invariant under fermionic T-duality. Consequently, a new dilaton should be introduced (see [14, 15]), invariant under T-duality transformations.
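The determinant identities (4.5) and (4.9)-(4.11) underlying the dilaton transformation can also be verified numerically. A minimal sketch, with ${}^{\star}P=P^{-1}$ from (3.20) and $P$ chosen close to the identity so that all determinants are positive and the positive branch of the square root in (4.11) applies:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 16

# R-R field P^{alpha beta}, chosen close to the identity so det P > 0
P = np.eye(n) + 0.1 * rng.normal(size=(n, n))
Pd = np.linalg.inv(P)            # fermionic T-dual *P = P^{-1}, eq. (3.20)

sym = lambda X: 0.5 * (X + X.T)  # symmetric part, eq. (4.6)
Ps, Pa = sym(P), 0.5 * (P - P.T)
Psd = sym(Pd)                    # *P_s

# eqs. (4.5), (4.9)-(4.10): P Ps^{-1} P^T = Ps - Pa Ps^{-1} Pa = (*Ps)^{-1}
assert np.allclose(P @ np.linalg.inv(Ps) @ P.T,
                   Ps - Pa @ np.linalg.inv(Ps) @ Pa)
assert np.allclose(P @ np.linalg.inv(Ps) @ P.T, np.linalg.inv(Psd))

# eq. (4.11): det P = sqrt(det Ps / det *Ps)  (positive branch)
lhs = np.linalg.det(P)
rhs = np.sqrt(np.linalg.det(Ps) / np.linalg.det(Psd))
assert np.isclose(lhs, rhs)
```

For a generic $P$ the determinant identity only fixes $(\det P)^{2}=\det P_{s}/\det{}^{\star}P_{s}$, so the restriction to positive $\det P$ in this sketch simply selects the branch used in (4.11).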
Because of the relation (4.15) we define the T-duality invariant dilaton as $$\Phi_{inv}=\frac{1}{2}\left({}^{\star}\Phi+\Phi\right)=\Phi+\frac{1}{12}\ln\frac{\det F_{s}}{\det{}^{\star}F_{s}}\,,\quad{}^{\star}\Phi_{inv}=\Phi_{inv}\,.$$ (4.17) 5 Concluding remarks In this article we considered the fermionic T-duality of the type II superstring theory using the double space approach. We used the action of the type II superstring theory in pure spinor formulation, neglecting ghost terms and keeping all terms up to quadratic order, which means that all background fields are constant. Using the equations of motion with respect to the fermionic momenta $\pi_{\alpha}$ and $\bar{\pi}_{\alpha}$ we eliminated them from the action. We obtained the action expressed in terms of the derivatives $\partial_{\pm}x^{\mu}$, $\partial_{-}\theta^{\alpha}$ and $\partial_{+}\bar{\theta}^{\alpha}$, where $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ are fermionic coordinates. Because $\theta^{\alpha}$ appears in the action only in the form $\partial_{-}\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ only in the form $\partial_{+}\bar{\theta}^{\alpha}$, there is a local chiral gauge symmetry with parameters depending on $\sigma^{\pm}=\tau\pm\sigma$. We fixed this gauge invariance using the BRST approach. Using the Buscher approach we performed the fermionic T-duality procedure and obtained the form of the fermionic T-dual background fields. It is obvious that two successive fermionic T-dualizations produce the initial theory, i.e., they are equivalent to the identity transformation. At the central point of the article we generalized the idea of double space and showed that fermionic T-duality can be represented as a permutation in the fermionic double space. In the bosonic case the double space spanned by the coordinates $Z^{M}=(x^{\mu},y_{\mu})$ can be obtained by adding the T-dual coordinates $y_{\mu}$ to the initial ones $x^{\mu}$.
In analogy with the bosonic case we introduced the double fermionic space by doubling the initial coordinates $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$ with their fermionic T-duals, $\vartheta_{\alpha}$ and $\bar{\vartheta}_{\alpha}$. The double fermionic space is spanned by the coordinates $\Theta^{A}=(\theta^{\alpha},\vartheta_{\alpha})$ and $\bar{\Theta}^{A}=(\bar{\theta}^{\alpha},\bar{\vartheta}_{\alpha})$. The T-dual transformation laws and their inverses are rewritten in the fermionic double space as a single relation by introducing the fermionic generalized metric ${\cal F}_{AB}$ and the currents ${\cal J}_{-A}$ and $\bar{\cal J}_{+A}$. Demanding that the transformation laws for the fermionic T-dual double coordinates, ${}^{\star}\Theta^{A}={\cal T}^{A}{}_{B}\Theta^{B}$ and ${}^{\star}\bar{\Theta}^{A}={\cal T}^{A}{}_{B}\bar{\Theta}^{B}$, are of the same form as those for $\Theta^{A}$ and $\bar{\Theta}^{A}$, we obtained the fermionic T-dual generalized metric ${}^{\star}{\cal F}_{AB}$ and currents ${}^{\star}{\cal J}_{-A}$ and ${}^{\star}\bar{\cal J}_{+A}$. These transformations act as symmetry transformations of the double action (3.13). They produce the form of the fermionic T-dual NS-R and R-R background fields, which are in full accordance with the results obtained by the standard Buscher procedure. The expressions for the T-dual metric ${}^{\star}G_{\mu\nu}$ and Kalb-Ramond field ${}^{\star}B_{\mu\nu}$ cannot be found from the double space formalism because they do not appear in the T-dual transformation laws. These expressions, up to an arbitrary constant, are obtained by assuming that two successive T-dualizations act as the identity transformation. We considered the transformation of the dilaton field under fermionic T-duality. We derived the transformation law for the dilaton field and concluded that only the symmetric parts of the R-R field strengths, $F_{s}^{\alpha\beta}$ and ${}^{\star}F^{s}_{\alpha\beta}$, affect the dilaton transformation law.
This means that in the case of type IIA the scalar and the 2-rank tensor influence the dilaton transformation, while in the case of type IIB the 1-rank tensor and the self-dual part of the 5-rank tensor take that role. Therefore, we extended T-dualization in double space to the fermionic case. We proved that the permutation of fermionic coordinates with the corresponding T-dual ones in double space is equivalent to the fermionic T-duality along the initial coordinates $\theta^{\alpha}$ and $\bar{\theta}^{\alpha}$. Appendix A Gamma matrices In the appendix of the first reference of [1] one specific representation of gamma matrices is given. Here we will calculate the transpositions of the basis matrices $(C\Gamma_{(k)})^{\alpha\beta}$ for $k=1,2,3,4,5$, where $C$ is the charge conjugation operator. The charge conjugation operator is an antisymmetric matrix, $C^{T}=-C$, and it acts on gamma matrices as $$C\Gamma^{\mu}C^{-1}=-(\Gamma^{\mu})^{T}\,.$$ (A.1) Now we have $$(C\Gamma^{\mu})^{T}=(\Gamma^{\mu})^{T}C^{T}=-(\Gamma^{\mu})^{T}C=C\Gamma^{\mu}C^{-1}C=C\Gamma^{\mu}\,,$$ (A.2) $$(C\Gamma^{\mu}\Gamma^{\nu})^{T}=C\Gamma^{\mu}\Gamma^{\nu}\Longrightarrow(C\Gamma^{[\mu\nu]})^{T}=C\Gamma^{[\mu\nu]}\,,$$ (A.3) $$(C\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho})^{T}=-C\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho}\Longrightarrow(C\Gamma^{[\mu\nu\rho]})^{T}=-C\Gamma^{[\mu\nu\rho]}\,,$$ (A.4) $$(C\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho}\Gamma^{\lambda})^{T}=-C\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho}\Gamma^{\lambda}\Longrightarrow(C\Gamma^{[\mu\nu\rho\lambda]})^{T}=-C\Gamma^{[\mu\nu\rho\lambda]}\,,$$ (A.5) $$(C\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho}\Gamma^{\lambda}\Gamma^{\omega})^{T}=C\Gamma^{\mu}\Gamma^{\nu}\Gamma^{\rho}\Gamma^{\lambda}\Gamma^{\omega}\Longrightarrow(C\Gamma^{[\mu\nu\rho\lambda\omega]})^{T}=C\Gamma^{[\mu\nu\rho\lambda\omega]}\,.$$ (A.6) References [1] J. Polchinski, String theory - Volume II, Cambridge University Press, 1998; K. Becker, M. Becker and J. H.
Schwarz, String Theory and M-Theory - A Modern Introduction, Cambridge University Press, 2007. [2] E. Alvarez, L. Alvarez-Gaume, Y. Lozano, An Introduction to T-Duality in String Theory, arxiv: hep-th/9410237; A. Giveon, M. Porrati and E. Rabinovici, “Target space duality in string theory”, Phys. Rept. 244 (1994), 77-202, arXiv:hep-th/9401139; I. Bandos and B. Julia, JHEP 08 (2003) 032; D. Luest, JHEP 12 (2010) 084. [3] T. H. Buscher, Phys. Lett. B194 (1987) 59; T. H. Buscher, Phys. Lett. B201 (1988) 466. [4] M. Duff, Nucl. Phys. B 335 (1990) 610. [5] A. A. Tseytlin, Phys.Lett. B 242 (1990) 163. [6] A. A. Tseytlin, Nucl. Phys. B 350 (1991) 395. [7] W. Siegel, Phys.Rev. D 48 (1993) 2826. [8] W. Siegel, Phys.Rev. D 47 (1993) 5453. [9] C. M. Hull, JHEP 10 (2005) 065. [10] C. M. Hull, JHEP 10 (2007) 057; 07 (2007) 080. [11] D. S. Berman, M. Cederwall and M. J. Perry, JHEP 09 (2014) 066; D. S. Berman, C. D. A. Blair, E. Malek and M. J. Perry, Int.J.Mod.Phys. A29 (2014) 15, 1450080; C. D. A. Blair, E. Malek and A. J. Routh, Class.Quant.Grav. 31 (2014) 20, 205011. [12] C.M. Hull and R.A. Reid-Edwards, JHEP 09 (2009) 014. [13] O. Hohm and B. Zwiebach, JHEP 11 (2014) 075. [14] B. Sazdović, T-duality as coordinates permutation in double space, arxiv: 1501.01024. [15] B. Sazdović, JHEP 08 (2015) 055. [16] N. Berkovits and J. Maldacena, JHEP 09 (2008) 062; I. Bakhmatov and D. S. Berman, Nucl. Phys. B832 (2010) 89-108; K. Sfetsos, K. Siampos and D. C. Thompson, QMUL-PH-10-08, arXiv:1007.5142; I. Bakhmatov, Fermionic T-duality and U-duality in type II supergravity, arXiv:1112.1983. [17] N. Beisert, R. Ricci, A. A. Tseytlin and M. Wolf, Phys.Rev. D78 (2008) 126004; R. Ricci, A. A. Tseytlin and M. Wolf, JHEP 12 (2007) 082. [18] M. Hatsuda, K. Kamimura, W. Siegel, JHEP 06 (2014) 039. [19] W. Siegel, Phys.Rev. D50 (1994) 2799-2805. [20] B. Nikolić and B. Sazdović, Phys. Rev. D 84 (2011) 065012; JHEP 06 (2012) 101. [21] R. Benichou, G. Policastro, J. Troost, Phys. Lett. 
B661 (2008) 192-195. [22] B. Nikolić, B. Sazdović, arXiv:1505.06044. [23] C. M. Hull, JHEP 07 (1998) 021. [24] N.  Berkovits, hep-th/0209059; P.  A.  Grassi, G.  Policastro and P.  van  Nieuwenhuizen, JHEP 10 (2002) 054; P.  A.  Grassi, G.  Policastro and P.  van  Nieuwenhuizen, JHEP 11 (2002) 004; P.  A.  Grassi, G.  Policastro and P.  van  Nieuwenhuizen, Adv. Theor. Math. Phys. 7 (2003) 499; P.  A.  Grassi, G.  Policastro and P.  van  Nieuwenhuizen, Phys. Lett. B553 (2003) 96. [25] J.  de  Boer, P.  A.  Grassi and P.  van  Nieuwenhuizen, Phys. Lett. B574 (2003) 98. [26] B.  Nikolić and B.  Sazdović, Phys. Lett. B666 (2008) 400. [27] B. Nikolić and B. Sazdović, JHEP 08 (2010) 037. [28] B. Nikolić and B. Sazdović, Nucl. Phys. B 836 (2010) 100-126. [29] P. A. Grassi, L. Tamassia, JHEP 07 (2004) 071. [30] Lj. Davidović, B. Sazdović, Eur. Phys. J. C 74 (2014) 2683. [31] Lj. Davidović, B. Nikolić, B. Sazdović, Eur. Phys. J. C 74 (2014) 2734. [32] N. Berkovits, P. Howe, Nucl. Phys. B635 (2002) 75-105. [33] M. J. Duff, arxiv:hep-th/9912164v2.
Automatic Quality Assessment for Speech Translation Using Joint ASR and MT Features Ngoc-Tien Le · Benjamin Lecouteux · Laurent Besacier Laboratoire d’Informatique de Grenoble, University of Grenoble Alpes, France Building IMAG, 700 Centrale, 38401 Saint Martin d’Hères Tel.: +33 457421454 e-mail: [email protected] Abstract This paper addresses automatic quality assessment of spoken language translation (SLT). This relatively new task is defined and formalized as a sequence labeling problem where each word in the SLT hypothesis is tagged as $good$ or $bad$ according to a large feature set. We propose several word confidence estimators (WCE) based on our automatic evaluation of transcription (ASR) quality, translation (MT) quality, or both (combined ASR+MT). This research work is possible because we built a specific corpus which contains 6.7k utterances, for each of which a quintuplet is available: ASR output, verbatim transcript, text translation, speech translation and post-edition of the translation. The conclusion of our multiple experiments using joint ASR and MT features for WCE is that MT features remain the most influential, while ASR features can bring interesting complementary information. Our robust quality estimators for SLT can be used for re-scoring speech translation graphs or for providing feedback to the user in interactive speech translation or computer-assisted speech-to-text scenarios.
Keywords: Quality estimation · Word confidence estimation (WCE) · Spoken language translation (SLT) · Joint features · Feature selection 1 Introduction Automatic quality assessment of spoken language translation (SLT), also named confidence estimation (CE), is an important topic because it allows one to know whether a system produces (or not) user-acceptable outputs. In interactive speech-to-speech translation, CE helps to judge whether a translated turn is uncertain (and to ask the speaker to rephrase or repeat). For speech-to-text applications, CE may tell us whether output translations are worth being corrected or whether they require retranslation from scratch. Moreover, an accurate CE can also help to improve SLT itself through a second-pass N-best list re-ranking or search graph re-decoding, as has already been done for text translation in bach11 and luong:hal-00953719 , or for speech translation in besacier-asru-2015 . Consequently, building a method which is capable of pointing out the correct parts as well as detecting the errors in a speech translated output is crucial to tackle the above issues. Given a signal $x_{f}$ in the source language, spoken language translation (SLT) consists in finding the most probable target language sequence $\hat{e}=(e_{1},e_{2},...,e_{N})$ so that $$\hat{e}=\underset{e}{\operatorname{argmax}}\{p(e/x_{f},f)\}$$ (1) where $f=(f_{1},f_{2},...,f_{M})$ is the transcription of $x_{f}$. Now, if we perform confidence estimation at the word level, the problem is called Word-level Confidence Estimation (WCE) and we can represent this information as a sequence $q$ (of the same length $N$ as $\hat{e}$) where $q=(q_{1},q_{2},...,q_{N})$ and $q_{i}\in\{good,bad\}$ ($q_{i}$ could also take more than 2 labels, or even scores, but this paper only deals with error detection, i.e., a binary set of labels).
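To make the label sequence $q$ concrete, the toy function below tags each hypothesis word as $good$ or $bad$ by aligning the hypothesis with a reference. It is only a crude longest-common-subsequence stand-in for illustration; the actual training labels in this work are inferred with a TERp-like alignment between the SLT output and its post-edition, as described later:

```python
from difflib import SequenceMatcher

def toy_wce_labels(hyp, ref):
    """Tag each hypothesis word 'good' if it survives in the reference,
    'bad' otherwise. A crude stand-in for TERp-style label inference."""
    labels = ["bad"] * len(hyp)
    sm = SequenceMatcher(a=hyp, b=ref)
    for block in sm.get_matching_blocks():
        for i in range(block.a, block.a + block.size):
            labels[i] = "good"
    return labels

hyp = "the cat sit on mat".split()
ref = "the cat sat on the mat".split()
print(toy_wce_labels(hyp, ref))  # → ['good', 'good', 'bad', 'good', 'good']
```

Here "sit" is tagged $bad$ (a substitution error), while the inserted "the" in the reference simply leaves no hypothesis word to tag.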
Then, integrating automatic quality assessment in our SLT process can be done as follows: $$\displaystyle\hat{e}$$ $$\displaystyle=$$ $$\displaystyle\underset{e}{\operatorname{argmax}}\sum_{q}{p(e,q/x_{f},f)}$$ (2) $$\displaystyle\hat{e}$$ $$\displaystyle=$$ $$\displaystyle\underset{e}{\operatorname{argmax}}\sum_{q}{p(q/x_{f},f,e)*p(e/x_{f},f)}$$ (3) $$\displaystyle\hat{e}$$ $$\displaystyle\approx$$ $$\displaystyle\underset{e}{\operatorname{argmax}}\{\underset{q}{\operatorname{max}}\{p(q/x_{f},f,e)*p(e/x_{f},f)\}\}$$ (4) In the product of (4), the SLT component $p(e/x_{f},f)$ and the WCE component $p(q/x_{f},f,e)$ contribute together to find the best translation output $\hat{e}$. In the past, WCE has been treated separately in ASR or MT contexts, and we propose here a joint estimation of word confidence for a spoken language translation (SLT) task involving both ASR and MT. This journal paper is an extended version of a paper published at ASRU 2015 last year besacier-asru-2015 , but we focus more on the WCE component and on the best approaches to estimate $p(q/x_{f},f,e)$ accurately. Contributions The main contributions of this journal paper are the following: • A corpus (distributed to the research community: https://github.com/besacier/WCE-SLT-LIG) dedicated to WCE for SLT was initially published in besacier14 . We present, in this paper, its extension from 2643 to 6693 speech utterances. • While our previous work on quality assessment was based on two separate WCE classifiers (one for quality assessment in ASR and one for quality assessment in MT), we propose here a unique joint model based on different feature types (ASR and MT features). • This joint model allows us to operate feature selection and analyze which features (from ASR or MT) are the most efficient for quality assessment in speech translation.
• We also experiment with two ASR systems that have different performance in order to analyze the behavior of our SLT quality assessment algorithms at different levels of word error rate (WER). Outline The outline of this paper is as follows: section 2 reviews the state of the art on confidence estimation for ASR and MT. Our WCE system using multiple features is then described in section 3. The experimental setup (notably our specific WCE corpus) is presented in section 4, while section 5 evaluates our joint WCE system. Feature selection for quality assessment in speech translation is analyzed in section 6 and, finally, section 7 concludes this work and gives some perspectives. 2 Related work on confidence estimation for ASR and MT Several previous works have proposed effective confidence measures in order to detect errors in ASR outputs. Confidence measures were introduced for Out-Of-Vocabulary (OOV) detection by Asadi1990 . Young1994 extends the previous work and introduces the use of word posterior probability (WPP) as a confidence measure for speech recognition. The posterior probability of a word is most often computed using the hypothesis word graph Kemp1997 . Also, more recent approaches Lecouteux2009 for confidence measure estimation use side-information extracted from the recognizer: normalized likelihoods (WPP), the number of competitors at the end of a word (hypothesis density), decoding process behavior, linguistic features, acoustic features (acoustic stability, duration features) and semantic features. In parallel, the Workshop on Machine Translation (WMT) introduced in 2013 a WCE task for machine translation. han13 and luong13b employed the Conditional Random Fields (CRF) lafferty01 model as their machine learning method to address the problem as a sequence labelling task. Meanwhile, bicici13 extended their initial proposition by dynamic training with adaptive weight updates in their neural network classifier.
As far as prediction indicators are concerned, bicici13 proposed seven word feature types and found among them the “common cover links” (the links that point from the leaf node containing this word to other leaf nodes in the same subtree of the syntactic tree) to be the most outstanding. han13 focused only on various n-gram combinations of target words. Inheriting most of the previously recognized features, luong13b integrated a number of new indicators relying on graph topology, pseudo reference, syntactic behavior (constituent label, distance to the semantic tree root) and polysemy characteristics. The estimation of the confidence score mainly uses classifiers such as Conditional Random Fields han13 ; luong_wmt14 , Support Vector Machines langlois12 or Perceptron bicici13 . Some investigations were also conducted to determine which features seem to be the most relevant. langlois12 proposed to filter features using a forward-backward algorithm to discard linearly correlated features. Using Boosting as a learning algorithm, luong15 was able to take advantage of the most significant features. Finally, several toolkits for WCE were recently proposed: TranscRater for ASR (https://github.com/hlt-mt/TranscRater), Marmot for MT (https://github.com/qe-team/marmot) as well as WCE-LIG servan-toolkit-2015 (https://github.com/besacier/WCE-LIG), which will be used to extract MT features in the experiments of this journal paper. To our knowledge, the first attempt to design WCE for speech translation, using both ASR and MT features, is our own work besacier14 ; besacier-asru-2015 , which is further extended in this journal paper. 3 Building an efficient quality assessment (WCE) system The WCE component solves the equation: $$\hat{q}=\underset{q}{\operatorname{argmax}}\{p_{SLT}(q/x_{f},f,e)\}$$ (5) where $q=(q_{1},q_{2},...,q_{N})$ is the sequence of quality labels in the target language.
This is a sequence labelling task that can be solved with several machine learning techniques such as Conditional Random Fields (CRF) lafferty01 . However, for that, we need a large amount of training data for which a quadruplet $(x_{f},f,e,q)$ is available. In this work, we will use a corpus extended from besacier14 which contains 6.7k utterances. We will investigate if this amount of data is enough to evaluate and test a joint model $p_{SLT}(q/x_{f},f,e)$. As it is much easier to obtain data containing either the triplet $(x_{f},f,q)$ (automatically transcribed speech with manual references and quality labels inferred from word error rate estimation) or the triplet $(f,e,q)$ (automatically translated text with manual post-editions and quality labels inferred using tools such as TERpA snover08 ), we can also recast the WCE problem with the following equation: $$\hat{q}=\underset{q}{\operatorname{argmax}}\{p_{ASR}(q/x_{f},f)^{\alpha}*p_{MT}(q/e,f)^{1-\alpha}\}$$ (6) where $\alpha$ is a weight giving more or less importance to $WCE_{ASR}$ (quality assessment on transcription) compared to $WCE_{MT}$ (quality assessment on translation). It is important to note that $p_{ASR}(q/x_{f},f)$ corresponds to the quality estimation of the words in the target language based on features calculated on the source language (ASR). For that, we project source quality scores onto the target using word-alignment information between $e$ and $f$ sequences. This alternative approach (equation 6) will also be evaluated in this work. In both approaches – joint ($p_{SLT}(q/x_{f},f,e)$) and combined ($p_{ASR}(q/x_{f},f)$ + $p_{MT}(q/e,f)$) – some features need to be extracted from the ASR and MT modules. They are detailed in the next subsections. 3.1 WCE features for speech transcription (ASR) In this work, we extract several types of features, which come from the ASR graph, from language model scores and from a morphosyntactic analysis.
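The per-word decision behind the combined approach of equation 6 can be sketched as follows. This is a minimal illustration, not the paper's implementation: `combine_confidences` is a hypothetical helper, and it assumes that an ASR-based confidence (already projected to the target side through word alignment) and an MT-based confidence are available for every target word.

```python
# Sketch of the combined WCE decision of equation 6 (hypothetical helper):
# each target word receives an ASR-based confidence (projected via word
# alignment) and an MT-based confidence, merged with a weight alpha.

def combine_confidences(p_asr, p_mt, alpha=0.5, threshold=0.5):
    """Label each target word good (G) / bad (B) from weighted confidences."""
    labels = []
    for pa, pm in zip(p_asr, p_mt):
        score = (pa ** alpha) * (pm ** (1.0 - alpha))
        labels.append("G" if score >= threshold else "B")
    return labels

# Example: three target words with their projected ASR and MT confidences.
print(combine_confidences([0.9, 0.4, 0.8], [0.95, 0.3, 0.2]))  # → ['G', 'B', 'B']
```

With $\alpha=0.5$ (the untuned setting used later in the experiments), the score reduces to the geometric mean of both confidences.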
These features are listed below (more details can be found in besacier14 ): • Acoustic features: word duration (F-dur). • Graph features (extracted from the ASR word confusion networks): number of alternative paths between two nodes (F-alt); word posterior probability (F-post). • Linguistic features (based on probabilities given by the language model): word itself (F-word), 3-gram probability (F-3g), log probability (F-log), back-off level of the word (F-back), as proposed in JulienFayolle2010 . • Lexical features: Part-Of-Speech (POS) of the word (F-POS). • Context features: Part-Of-Speech tags in the neighborhood of a given word (F-context). For each word in the ASR hypothesis, we estimate the 9 features (F-word; F-3g; F-back; F-log; F-alt; F-post; F-dur; F-POS; F-context) previously described. In a preliminary experiment, we will evaluate these features for quality assessment in ASR only ($WCE_{ASR}$ task). Two different classifiers will be used: a variant of the boosting classification algorithm called bonzaiboost AntoineLaurent2014 (implementing the boosting algorithm Adaboost.MH over deeper trees) and Conditional Random Fields lafferty01 . 3.2 WCE features for machine translation (MT) A number of knowledge sources are employed for extracting features, for a total of 24 major feature types, see Table 1. It is important to note that we extract features regarding tokens in the machine translation (MT) hypothesis sentence. In other words, one feature is extracted for each token in the MT output. So, in Table 1, target refers to a feature coming from the MT hypothesis and source refers to a feature extracted from the source word aligned to the considered target word. More details on some of these features are given in the next subsections. 3.2.1 Internal Features These features are given by the machine translation system, which outputs additional data like the $N$-best list.
Word Posterior Probability (WPP) and Nodes features are extracted from a confusion network, which comes from the output of the machine translation $N$-best list. WPP Exact is the WPP value for the word concerned at the exact same position in the graph. WPP Any extracts the same information at any position in the graph. WPP Min gives the smallest WPP value concerned by the transition and WPP Max the largest. 3.2.2 External Features Below is the list of the external features used: • Proper Name: indicates if a word is a proper name (similar binary features are extracted to know if a token is Numerical, Punctuation or Stop Word). • Unknown Stem: informs whether the stem of the considered word is known or not. • Number of Word/Stem Occurrences: counts the occurrences of a word/stem in the sentence. • Alignment context features: these features (#11-13 in Table 1) are based on collocations and were proposed by bach11 . Collocations can be an indicator for judging if a target word is generated by a particular source word. We also apply the reverse, the collocations regarding the source side (#7 in Table 1 - simply called Alignment Features): $\bullet$ Source alignment context features: the combinations of the target word, the source word (with which it is aligned), and one source word before and one source word after (left and right contexts, respectively). $\bullet$ Target alignment context features: the combinations of the source word, the target word (with which it is aligned), and one target word before and one target word after. • Longest Target (or Source) $N$-gram Length: we compute the length ($n+1$) of the longest left sequence $w_{i-n},\ldots,w_{i}$ ending at the current word ($w_{i}$) that is known by the language model (LM) concerned (source and target sides). For example, if the longest left sequence $w_{i-2},w_{i-1},w_{i}$ appears in the target LM, the longest target n-gram value for $w_{i}$ will be 3. This value ranges from 0 to the maximum order of the LM concerned.
We also extract a redundant feature called Backoff Behavior Target. • The target word’s constituent label (Constituent Label) and its depth in the constituent tree (Distance to Root) are extracted using a syntactic parser. • Target Polysemy Count: we extract the polysemy count, which is the number of meanings of a word in a given language. • Occurrences in Google Translate and Occurrences in Bing Translator: in the translation hypothesis, we (optionally) test the presence of the target word in the on-line translations given respectively by Google Translate and Bing Translator (using this kind of feature is controversial; however, we observed that such features are available in general use case scenarios, so we decided to include them in our experiments; contrastive results without these 2 features will also be given later on). A very similar feature set was used for a simple $WCE_{MT}$ task (English - Spanish MT, WMT 2013, 2014 quality estimation shared task) and obtained very good performance luong13 . This preliminary experience in participating in the WCE shared task in 2013 and 2014 led us to the following observation: while feature processing is very important to achieve good performance, it requires calling a set of heterogeneous NLP tools (for lexical, syntactic and semantic analyses). Thus, we recently proposed to unify the feature processing, together with the call of machine learning algorithms, in order to facilitate the design of confidence estimation systems. The open-source toolkit proposed (written in Python and made available on GitHub: http://github.com/besacier/WCE-LIG) integrates some standard as well as in-house features that have proven useful for WCE (based on our experience in WMT 2013 and 2014). In this paper, we will use only Conditional Random Fields lafferty01 (CRFs) as our machine learning method, with the WAPITI toolkit lavergne10 , to train our WCE estimator based on MT features.
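The Longest $N$-gram Length feature described above can be sketched as follows. This is an illustrative sketch only, assuming the LM's n-gram inventory is available as a set of tuples; `longest_ngram_length` and the toy inventory `lm` are hypothetical names, not part of the toolkit.

```python
def longest_ngram_length(words, i, lm_ngrams, max_order=4):
    """Length of the longest left-context sequence ending at words[i]
    that is known by the language model (0 .. max_order)."""
    best = 0
    for n in range(1, max_order + 1):
        if i - n + 1 < 0:
            break  # ran out of left context
        seq = tuple(words[i - n + 1 : i + 1])
        if seq in lm_ngrams:
            best = n
    return best

# Toy LM n-gram inventory (hypothetical): the trigram "the cat sat" is known,
# so the feature value for the word "sat" is 3, as in the example above.
lm = {("the",), ("cat",), ("sat",), ("the", "cat"), ("cat", "sat"), ("the", "cat", "sat")}
print(longest_ngram_length(["the", "cat", "sat"], 2, lm))  # → 3
```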
4 Experimental setup 4.1 Dataset 4.1.1 Starting point: an existing MT post-edition corpus For a French-English translation task, we used our SMT system to obtain the translation hypotheses for 10,881 source sentences taken from news corpora of the WMT (Workshop on Machine Translation) evaluation campaign (from 2006 to 2010). Post-editions were obtained from non-professional translators using a crowdsourcing platform. More details on the baseline SMT system used can be found in potet10 and more details on the post-edited corpus can be found in potet12 . It is worth mentioning, however, that a sub-set (311 sentences) of these collected post-editions was assessed by a professional translator and 87.1% of post-editions were judged to improve the hypothesis. Then, the word label setting for WCE was done using the TERp-A toolkit snover08 . Table 2 illustrates the labels generated by TERp-A for one hypothesis and post-edition pair. Each word or phrase in the hypothesis is aligned to a word or phrase in the post-edition with different types of edit: “I” (insertions), “S” (substitutions), “T” (stem matches), “Y” (synonym matches), and “P” (phrasal substitutions). The lack of a symbol indicates an exact match and will be replaced by “E” thereafter. We do not consider the words marked with “D” (deletions) since they appear only in the reference. However, later on, we will have to train binary classifiers ($good$/$bad$) so we re-categorize the obtained 6-label set into a binary set: E, T and Y belong to the $good$ (G) category, whereas S, P and I belong to the $bad$ (B) category. 4.1.2 Extending the corpus with speech recordings and transcripts The dev set and tst set of this corpus were recorded by French native speakers. Each sentence was uttered by 3 speakers, leading to 2643 and 4050 speech recordings for the dev set and tst set, respectively.
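The re-categorization of TERp-A edit labels into binary $good$/$bad$ classes described in section 4.1.1 can be written as a simple mapping. The helper name `binarize` is ours, for illustration only:

```python
# Re-categorization of TERp-A edit labels into binary good/bad classes:
# E (exact), T (stem) and Y (synonym) count as good; S (substitution),
# P (phrasal substitution) and I (insertion) as bad. D (deletion) only
# appears on the reference side and is therefore dropped.

GOOD, BAD = "G", "B"
BINARY = {"E": GOOD, "T": GOOD, "Y": GOOD, "S": BAD, "P": BAD, "I": BAD}

def binarize(terpa_labels):
    return [BINARY[label] for label in terpa_labels if label != "D"]

print(binarize(["E", "S", "T", "D", "I", "Y"]))  # → ['G', 'B', 'G', 'B', 'G']
```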
For each speech utterance, a quintuplet containing: ASR output ($f_{hyp}$), verbatim transcript ($f_{ref}$), English text translation output ($e_{hyp_{mt}}$), speech translation output ($e_{hyp_{slt}}$) and post-edition of translation ($e_{ref}$), was made available. This corpus is available on a GitHub repository (https://github.com/besacier/WCE-SLT-LIG/). More details are given in table 3. The total length of the dev and tst speech corpus obtained is 16h52, since some utterances were pretty long. 4.2 ASR Systems To obtain the speech transcripts ($f_{hyp}$), we built a French ASR system based on the KALDI toolkit Povey_ASRU2011 . Acoustic models are trained using several corpora (ESTER, REPERE, ETAPE and BREF120) representing more than 600 hours of transcribed French speech. The baseline GMM system is based on mel-frequency cepstral coefficient (MFCC) acoustic features (13 coefficients expanded with delta and double delta features and energy: 40 features) with various feature transformations including linear discriminant analysis (LDA), maximum likelihood linear transformation (MLLT), and feature space maximum likelihood linear regression (fMLLR) with speaker adaptive training (SAT). The GMM acoustic model produces initial phoneme alignments of the training data set for the following DNN acoustic model training. The speech transcription process is carried out in two passes: an automatic transcript is generated with a GMM-HMM model of 43,182 states and 250,000 Gaussians. Then the word graph outputs obtained during the first pass are used to compute an fMLLR-SAT transform for each speaker. The second pass is performed using a DNN acoustic model trained on acoustic features normalized with the fMLLR matrix. CD-DNN-HMM acoustic models are trained (43,182 context-dependent states) using the GMM-HMM topology. We use two 3-gram language models trained on the French ESTER corpus Galliano06corpusdescription as well as on the French Gigaword corpus (vocabulary sizes are 62k and 95k, respectively).
The LM weight parameters of the ASR systems are tuned through WER on the dev corpus. Details on these two language models can be found in table 4. In our experiments we propose two ASR systems based on the previously described language models. The first system ($ASR1$) uses the small language model, allowing a fast ASR system (about 2x real time), while in the second system lattices are rescored with the big language model (about 10x real time) during a third pass. Table 5 presents the performances obtained by the two ASR systems described above. These WERs may appear rather high for the task (transcribing read news). A deeper analysis shows that these news items contain a lot of foreign named entities, especially in our dev set. This part of the data is extracted from French media dealing with the European economy in the EU. This could also explain why the scores are significantly different between the dev and test sets. In addition, automatic post-processing is applied to the ASR output in order to match the requirements of standard input for machine translation. 4.3 SMT System We used the Moses phrase-based translation toolkit koehn07 to translate French ASR output into English ($e_{hyp}$). This medium-size system was trained using a subset of the data provided for the IWSLT 2012 evaluation iwslt2012_campaign : Europarl, TED and News-Commentary corpora. The total amount is about 60M words. We used an adapted target language model trained on specific data (News Crawled corpora) similar to our evaluation corpus (see potet10 ). This standard SMT system is used in all experiments reported in this paper. 4.4 Obtaining quality assessment labels for SLT After building an ASR system, we have a new element of our desired quintuplet: the ASR output $f_{hyp}$. It is the noisy version of our already available verbatim transcripts called $f_{ref}$. This ASR output ($f_{hyp}$) is then translated by the exact same SMT system potet10 already mentioned in subsection 4.3.
This new output translation is called $e_{hyp_{slt}}$ and it is a degraded version of $e_{hyp_{mt}}$ (translation of $f_{ref}$). At this point, a strong assumption we made has to be made explicit: we re-used the post-editions obtained from the text translation task (called $e_{ref}$) to infer the quality (G, B) labels of our speech translation output $e_{hyp_{slt}}$. The word label setting for WCE is also done using the TERp-A toolkit snover08 between $e_{hyp_{slt}}$ and $e_{ref}$. This assumption, and the fact that the initial MT post-editions can also be used to infer labels for a SLT task, is reasonable given the results (presented later in tables 8 and 9), which show that there is not a huge difference between the MT and SLT performance (evaluated with BLEU). The remark above is important and this is what makes the value of this corpus. For instance, other corpora such as the TED corpus compiled by LIUM (http://www-lium.univ-lemans.fr/fr/content/corpus-ted-lium) also contain a quintuplet with ASR output, verbatim transcript, MT output, SLT output and target translation. But there are 2 main differences: first, the target translation is a manual translation of the prior subtitles, so this is not a post-edition of an automatic translation (and we have no guarantee that the $good$/$bad$ labels extracted from this would be reliable for WCE training and testing); second, in our corpus, each sentence is uttered by 3 different speakers, which introduces speaker variability in the database and allows us to deal with different ASR outputs for a single source sentence. 4.5 Final corpus statistics The final corpus obtained is summarized in table 6, where we also clarify how the WCE labels were obtained.
For the test set, we now have all the data needed to evaluate WCE for 3 tasks: • ASR: extract $good$/$bad$ labels by calculating WER between $f_{hyp}$ and $f_{ref}$, • MT: extract $good$/$bad$ labels by calculating TERp-A between $e_{hyp_{mt}}$ and $e_{ref}$, • SLT: extract $good$/$bad$ labels by calculating TERp-A between $e_{hyp_{slt}}$ and $e_{ref}$. Table 7 gives an example of the quintuplet available in our corpus. One transcript ($f_{hyp1}$) has 1 error while the other one ($f_{hyp2}$) has 4. This leads to 2 B labels ($e_{hyp_{slt1}}$) and 4 B labels ($e_{hyp_{slt2}}$) in the speech translation outputs, respectively, while $e_{hyp_{mt}}$ has only one B label. Tables 8 and 9 summarize the baseline ASR, MT and SLT performances obtained on our corpora, as well as the distribution of good (G) and bad (B) labels inferred for each task. Logically, the percentage of (B) labels increases from the MT to the SLT task in the same conditions. 5 Experiments on WCE for SLT 5.1 SLT quality assessment using only MT or ASR features We first report in Table 10 the baseline WCE results obtained using MT or ASR features separately. In short, we evaluate the performance of 4 WCE systems for different tasks: • The first and second systems (WCE for ASR / ASR feat.) use the ASR features described in section 3.1 with two different classifiers (CRF or Boosting). • The third system (WCE for SLT / MT feat.) uses only the MT features described in section 3.2 with a CRF classifier. • The fourth system (WCE for SLT / ASR feat.) uses only the ASR features described in section 3.1 with a CRF classifier (so this is predicting SLT output confidence using only ASR confidence features!).
Word alignment information between $f_{hyp}$ and $e_{hyp}$ is used to project the WCE scores coming from ASR to the SLT output. In all experiments reported in this paper, we evaluate the performance of our classifiers using the average between the F-measure for $good$ labels and the F-measure for $bad$ labels, which are calculated with the common evaluation metrics: precision, recall and F-measure for $good$/$bad$ labels. Since two ASR systems are available, F-mes1 is obtained for SLT based on $ASR1$ whereas F-mes2 is obtained for SLT based on $ASR2$. For the results of Table 10, the classifier is evaluated on the tst part of our corpus and trained on the dev part. Concerning WCE for ASR, we observe that the F-measure decreases when the ASR WER is lower (F-mes2$<$F-mes1 while $WER_{ASR2}<WER_{ASR1}$). So quality assessment in ASR seems to become harder as the ASR system improves. This could be due to the fact that the ASR1 errors recovered by the bigger LM in the ASR2 system were easier to detect. Anyway, this conclusion should be considered with caution since both results (F-mes1 and F-mes2) are not directly comparable because they are evaluated on different references (the proportion of $good$/$bad$ labels differs as the ASR systems differ). The effect of the classifier (CRF or Boosting) is not conclusive since CRF is better for F-mes1 and worse for F-mes2. Anyway, we decided to use CRF for all our future experiments since this is the classifier integrated in the WCE-LIG servan-toolkit-2015 toolkit. Concerning WCE for SLT, we observe that the F-measure is better using MT features rather than ASR features (quality assessment for SLT depends more on MT features than on ASR features). Again, the F-measure decreases when the ASR WER is lower (F-mes2$<$F-mes1 while $WER_{ASR2}<WER_{ASR1}$). For MT features, removing the OccurInGoogleTranslate and OccurInBingTranslate features leads to 59.40% and 58.11% for F-mes1 and F-mes2, respectively.
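The evaluation metric used above (average of the $good$-label and $bad$-label F-measures) can be sketched as follows. The helper names are ours; this is a minimal reference computation, not the evaluation script actually used:

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall (0 if both are 0)."""
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def mean_f_measure(ref, hyp):
    """Average of the F-measures computed for the G and B labels."""
    scores = []
    for label in ("G", "B"):
        tp = sum(r == h == label for r, h in zip(ref, hyp))
        p = tp / max(1, sum(h == label for h in hyp))  # precision for this label
        r = tp / max(1, sum(r == label for r in ref))  # recall for this label
        scores.append(f_measure(p, r))
    return sum(scores) / 2

# Example: 4 reference labels vs. 4 predicted labels.
# F(G) = 2/3, F(B) = 4/5, mean = 11/15 ≈ 0.733
print(mean_f_measure(["G", "G", "B", "B"], ["G", "B", "B", "B"]))
```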
In the next subsection, we try to see if the use of both MT and ASR features improves quality assessment for SLT. 5.2 SLT quality assessment using both MT and ASR features We now report in Table 12 WCE for SLT results obtained using both MT and ASR features. More precisely, we evaluate two different approaches (combination and joint): • The first system (WCE for SLT / MT+ASR feat.) combines the output of two separate classifiers based on ASR and MT features. In this approach, the ASR-based confidence score of the source is projected to the target SLT output and combined with the MT-based confidence score as shown in equation 6 (we did not tune the $\alpha$ coefficient and set it a priori to 0.5). • The second system (joint feat.) trains a single WCE system for SLT (evaluating $p(q/x_{f},f,e)$ as in equation 5) using joint ASR features and MT features. All ASR features are projected to the target words using automatic word alignments. However, a problem occurs when a target word does not have any source word aligned to it. In this case, we decide to duplicate the ASR features of its previous target word. Another problem occurs when a target word is aligned to more than one source word. In that case, there are several strategies to infer the 9 ASR features: average or max over numerical values, selection or concatenation over symbolic values (for F-word and F-POS), etc. Three different variants of these strategies (shown in Table 11) are evaluated here. The results of Table 12 show that joint ASR and MT features do not improve WCE performance: F-mes1 and F-mes2 are slightly worse than those of table 10 (WCE for SLT / MT features only). We also observe that the simple combination (MT+ASR) degrades the WCE performance. This latter observation may be due to the different behaviors of the $WCE_{MT}$ and $WCE_{ASR}$ classifiers, which makes the weighted combination ineffective.
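The projection of a numerical ASR feature onto the target words described above can be sketched as follows, for one of the possible strategies (average over multiply-aligned source words, duplication of the previous target value for unaligned words). The helper name and data are hypothetical illustrations, not the exact Table 11 variants:

```python
# Sketch of ASR-feature projection (hypothetical helper). `alignment` maps
# each target position to the list of aligned source positions. Strategy:
# average numerical values over all aligned source words; a target word with
# no aligned source word duplicates the features of the previous target word.

def project_feature(src_feats, alignment, n_target):
    out = []
    for t in range(n_target):
        srcs = alignment.get(t, [])
        if srcs:          # average over all aligned source words
            out.append(sum(src_feats[s] for s in srcs) / len(srcs))
        elif out:         # unaligned: duplicate the previous target value
            out.append(out[-1])
        else:             # unaligned first word: default value
            out.append(0.0)
    return out

# Source WPP values for 3 source words; target word 1 is aligned to sources
# 1 and 2, target word 2 is unaligned.
print(project_feature([0.9, 0.75, 0.25], {0: [0], 1: [1, 2]}, 3))  # → [0.9, 0.5, 0.5]
```

The max-over-values and concatenation-over-symbols variants differ only in the aggregation step.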
Moreover, the disappointing performance of our joint classifier may be due to an insufficient training set (only 2683 utterances in dev!). Finally, removing the OccurInGoogleTranslate and OccurInBingTranslate features for Joint lowered the F-measure by between 1% and 1.5%. These observations lead us to investigate the behavior of our WCE approaches for a large range of $good$/$bad$ decision thresholds and with a new protocol where we reverse dev and tst. So, in the next experiments of this subsection, we will report WCE evaluation results obtained on dev (2683 utt.) with classifiers trained on tst (4050 utt.). Finally, the different strategies used to project ASR features when a target word is aligned to more than one source word do not lead to very different performance: we will use strategy joint 1 in the future. While the previous tables provided WCE performance for a single point of interest ($good$/$bad$ decision threshold set to 0.5), the curves of figures 1 and 2 show the full picture of our WCE systems (for SLT) using speech transcription systems $ASR1$ and $ASR2$, respectively. We observe that the classifier based on ASR features has a very different behavior than the classifier based on MT features, which explains why their simple combination (MT+ASR) does not work very well for the default decision threshold (0.5). However, for thresholds above 0.5, the use of both ASR and MT features is beneficial. This is interesting because higher thresholds improve the F-measure on $bad$ labels (so improve error detection). Both curves are similar whatever the ASR system used. These results suggest that with enough development data for appropriate threshold tuning (which we do not have for this very new task), the use of both ASR and MT features should improve error detection in speech translation (blue and red curves are above the green curve for higher decision thresholds, which correspond to optimizing the F-measure on $bad$ labels, i.e. errors).
We also analyzed the F-measure curves for $bad$ and $good$ labels separately (not reported here due to space constraints): if we consider, for instance, the $ASR1$ system, for a decision threshold equal to 0.75, the F-measure on $bad$ labels is equivalent (60%) for the 3 systems (Joint, MT+ASR and MT) while the F-measure on $good$ labels is 61% when using MT features only, 66% when using Joint features and 68% when using MT+ASR features. In other words, for a fixed performance on $bad$ labels, the F-measure on $good$ labels is improved using all the information available (ASR and MT features). Finally, if we focus on Joint versus MT+ASR, we notice that the range of thresholds where performance is stable is larger for Joint than for MT+ASR. 6 Feature Selection In this section, we try to better understand the contribution of each (ASR or MT) feature by applying feature selection on our joint WCE classifier. In these experiments, we decided to keep the OccurInGoogleTranslate and OccurInBingTranslate features. We chose the Sequential Backward Selection (SBS) algorithm, a top-down algorithm starting from a feature set noted $Y_{k}$ (which denotes the set of all features) and sequentially removing the most irrelevant feature $x$, i.e. the one that maximizes the Mean F-Measure $MF(Y_{k}-x)$. In our work, we iterate until the set $Y_{k}$ contains only one remaining feature. Algorithm 1 summarizes the whole process. The results of the SBS algorithm can be found in table 13, which ranks all joint features used in WCE for SLT by order of importance after applying the algorithm on dev. We can see that the SBS algorithm is not very stable and is clearly influenced by the ASR system ($ASR1$ or $ASR2$) considered in SLT.
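The SBS loop of Algorithm 1 can be sketched as follows. This is a minimal sketch only: `evaluate` stands in for retraining and scoring the WCE classifier on a feature subset, and the toy scorer below is purely hypothetical.

```python
# Minimal sketch of Sequential Backward Selection: starting from the full
# feature set, repeatedly drop the feature whose removal maximizes the mean
# F-measure, until a single feature remains.

def sbs(features, evaluate):
    ranking = []               # features in the order they were discarded
    current = list(features)
    while len(current) > 1:
        # pick x maximizing MF(Y_k - x); ties broken by feature name
        best_score, worst = max(
            (evaluate([f for f in current if f != x]), x) for x in current
        )
        current.remove(worst)
        ranking.append(worst)
    return ranking + current   # last element = most useful feature

# Toy scorer (hypothetical): each feature adds a fixed amount to the F-measure.
weights = {"WPP": 0.02, "F-3g": 0.05, "F-dur": 0.01}
print(sbs(weights, lambda subset: sum(weights[f] for f in subset)))
# → ['F-dur', 'WPP', 'F-3g']
```

In practice each `evaluate` call involves retraining the CRF, so the full run over 33 features is the expensive part of the procedure.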
Anyway, if we focus on the features that are in the top-10 best in both cases, we find that the most relevant ones are: • Occur in Google Translate and Occur in Bing Translate (diagnostics from other MT systems), • Longest Source N-gram Length, Target Backoff Behavior (source or target N-gram features), • Stem Context Alignment (source-target alignment feature). We also observe that the most relevant ASR features (in bold in table 13) are F-3g, F-POS and F-back (lexical and linguistic features) whereas the ASR acoustic and graph-based features are among the worst (F-post, F-alt, F-dur). So, in our experimental setting, it seems that MT features are more influential than ASR features. Another surprising result is the relatively low rank of the word posterior probability (WPP) features, whereas we were expecting to see them among the top features (as shown in luong15 where WPP Any is among the best features for WCE in MT). Figure 3 and Figure 4 present the evolution of WCE performance for the dev and tst corpora when feature selection using the SBS algorithm is made on dev, for the $ASR1$ and $ASR2$ systems, respectively. In other words, for these two figures, we apply our SBS algorithm on dev, which means that feature selection is done on dev with classifiers trained on tst. After that, the best feature subsets (using 33, 32, 31 until 1 feature only) are applied on the tst corpus (with classifiers trained on dev). Note that 3 data sets would have been needed to (a) train classifiers, (b) apply feature selection, and (c) evaluate WCE performance; since we only have a dev and a tst set, we found this procedure acceptable. On both figures, we observe that only half of the features contribute to the WCE process, since the best performances are observed with only 10 to 15 features. We also notice that optimal WCE performance is not necessarily obtained with the full feature set but can be obtained with a subset of it.
7 Conclusion 7.1 Main contributions In this paper, we introduced a new quality assessment task: word confidence estimation (WCE) for spoken language translation (SLT). A specific corpus, distributed to the research community (https://github.com/besacier/WCE-SLT-LIG), was built for this purpose. We formalized WCE for SLT and proposed several approaches based on several types of features: machine translation (MT) based features, automatic speech recognition (ASR) based features, as well as combined or joint features using ASR and MT information. The proposition of a unique joint classifier based on different feature types (ASR and MT features) allowed us to operate feature selection and analyze which features (from ASR or MT) are the most efficient for quality assessment in speech translation. Our conclusion is that MT features remain the most influential, while ASR features can bring interesting complementary information. In all our experiments, we systematically evaluated with two ASR systems that have different performance in order to analyze the behavior of our quality assessment algorithms at different levels of word error rate (WER). This allowed us to observe that WCE performance decreases as the ASR system improves. For reproducible research, most features (MT features already available, ASR features available soon) and algorithms used in this paper are available through our toolkit called WCE-LIG. This package is made available on a GitHub repository (https://github.com/besacier/WCE-LIG) under the GPL V3 licence. We hope that the availability of our corpus and toolkit could lead, in the near future, to a new shared task dedicated to quality estimation for speech translation. Such a shared task could be proposed at venues such as IWSLT (International Workshop on Spoken Language Translation) or WMT (Workshop on Machine Translation) for instance.
7.2 SLT re-decoding using WCE A direct application of this work is the use of WCE labels to re-decode speech translation graphs and (hopefully) improve speech translation performance. Preliminary results were already obtained and recently published by the authors of this paper besacier-asru-2015 . The main idea is to carry out a second speech translation pass by considering every word and its quality assessment label, as shown in equation 4. The speech translation graph is re-decoded according to the following principle: words labeled as $good$ in the search graph should be “rewarded” by reducing their cost; on the contrary, those labeled as $bad$ should be “penalized”. To illustrate this direct application of our work, we present examples of speech translation hypotheses (SLT) obtained with or without graph re-decoding in table 14 (table taken from besacier-asru-2015 ). Example 1 illustrates a first case where re-decoding slightly improves the translation hypothesis. Analysis of the labels from the confidence estimator indicates that the words a (start of sentence) and penalty were labeled as $bad$ here. Thus, a better hypothesis arose from the second pass, although the transcription error could not be recovered. In example 2, the confidence estimator labeled as $bad$ the following word sequences: it has, speech that was and post route. A better translation hypothesis is found after re-decoding (correct pronoun, better quality at the end of the sentence). Finally, example 3 shows a case where, this time, the end of the first-pass translation deteriorated after re-decoding. Analysis of the confidence estimator output shows that the phrase to open was (correctly) labeled as $bad$, but the re-decoding gave rise to an even worse hypothesis. The reason is that the system could not recover the named entity opel since this word was not in the speech translation graph.
7.3 Other perspectives In addition to re-decoding SLT graphs, our quality assessment system can be used in interactive speech translation scenarios such as news or lecture subtitling, to improve human translator productivity by giving him/her feedback on automatic transcription and translation quality. Another application would be the adaptation of our WCE system to interactive speech-to-speech translation scenarios where feedback on the transcription and translation modules is needed to improve communication. On these latter subjects, it would also be nice to move from a binary ($good$ or $bad$ labels) to a 3-class decision problem (good, asr-error, mt-error). The outcome material of this paper (corpus, toolkit) can definitely be used to address such a new problem. References (1) Asadi, A., Schwartz, R., Makhoul, J.: Automatic detection of new words in a large vocabulary continuous speech recognition system. Proc. of International Conference on Acoustics, Speech and Signal Processing (1990) (2) Bach, N., Huang, F., Al-Onaizan, Y.: Goodness: A method for measuring machine translation confidence. In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pp. 211–219. Portland, Oregon (2011) (3) Besacier, L., Lecouteux, B., Luong, N.Q., Hour, K., Hadjsalah, M.: Word confidence estimation for speech translation. In: Proceedings of The International Workshop on Spoken Language Translation (IWSLT). Lake Tahoe, USA (2014) (4) Besacier, L., Lecouteux, B., Luong, N.Q., Le, N.T.: Spoken language translation graphs re-decoding using automatic quality assessment. In: IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). Scottsdale, Arizona, United States (2015). DOI 10.1109/ASRU.2015.7404804. URL https://hal.archives-ouvertes.fr/hal-01289158 (5) Bicici, E.: Referential translation machines for quality estimation. In: Proceedings of the Eighth Workshop on Statistical Machine Translation, pp. 343–351.
Association for Computational Linguistics, Sofia, Bulgaria (2013). URL http://www.aclweb.org/anthology/W13-2242 (6) Fayolle, J., Moreau, F., Raymond, C., Gravier, G., Gros, P.: CRF-based combination of contextual features to improve a posteriori word-level confidence measures. In: Interspeech (2010) (7) Federico, M., Cettolo, M., Bentivogli, L., Paul, M., Stüker, S.: Overview of the IWSLT 2012 evaluation campaign. In: Proceedings of the 9th International Workshop on Spoken Language Translation (IWSLT) (2012) (8) Galliano, S., Geoffrois, E., Gravier, G., Bonastre, J.F., Mostefa, D., Choukri, K.: Corpus description of the ESTER evaluation campaign for the rich transcription of French broadcast news. In: Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC 2006), pp. 315–320 (2006) (9) Han, A.L.F., Lu, Y., Wong, D.F., Chao, L.S., He, L., Xing, J.: Quality estimation for machine translation using the joint method of evaluation criteria and statistical modeling. In: Proceedings of the Eighth Workshop on Statistical Machine Translation, pp. 365–372. Association for Computational Linguistics, Sofia, Bulgaria (2013). URL http://www.aclweb.org/anthology/W13-2245 (10) Kemp, T., Schaaf, T.: Estimating confidence using word lattices. Proc. of European Conference on Speech Communication Technology pp. 827–830 (1997) (11) Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., Herbst, E.: Moses: Open source toolkit for statistical machine translation. In: Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics, pp. 177–180. Prague, Czech Republic (2007) (12) Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In: Proceedings of ICML-01, pp.
282–289 (2001) (13) Langlois, D., Raybaud, S., Smaïli, K.: LORIA system for the WMT12 quality estimation shared task. In: Proceedings of the Seventh Workshop on Statistical Machine Translation, pp. 114–119 (2012) (14) Laurent, A., Camelin, N., Raymond, C.: Boosting bonsai trees for efficient features combination: application to speaker role identification. In: Interspeech (2014) (15) Lavergne, T., Cappé, O., Yvon, F.: Practical very large scale CRFs. In: Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pp. 504–513 (2010) (16) Lecouteux, B., Linarès, G., Favre, B.: Combined low level and high level features for out-of-vocabulary word detection. In: Interspeech (2009) (17) Luong, N.Q., Besacier, L., Lecouteux, B.: Word confidence estimation and its integration in sentence quality estimation for machine translation. In: Proceedings of The Fifth International Conference on Knowledge and Systems Engineering (KSE 2013). Hanoi, Vietnam (2013) (18) Luong, N.Q., Besacier, L., Lecouteux, B.: LIG system for word level QE task at WMT14. In: Proceedings of the Ninth Workshop on Statistical Machine Translation, pp. 335–341. Baltimore, Maryland USA (2014) (19) Luong, N.Q., Besacier, L., Lecouteux, B.: Word confidence estimation for SMT N-best list re-ranking. In: Proceedings of the Workshop on Humans and Computer-assisted Translation (HaCaT) during EACL. Gothenburg, Sweden (2014). URL http://hal.inria.fr/hal-00953719 (20) Luong, N.Q., Besacier, L., Lecouteux, B.: Towards accurate predictors of word quality for machine translation: Lessons learned on French-English and English-Spanish systems. Data and Knowledge Engineering p. 11 (2015) (21) Luong, N.Q., Lecouteux, B., Besacier, L.: LIG system for WMT13 QE task: Investigating the usefulness of features in word confidence estimation for MT. In: Proceedings of the Eighth Workshop on Statistical Machine Translation, pp. 396–391.
Association for Computational Linguistics, Sofia, Bulgaria (2013) (22) Potet, M., Besacier, L., Blanchon, H.: The LIG machine translation system for WMT 2010. In: Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and Metrics MATR (WMT2010). Uppsala, Sweden (2010) (23) Potet, M., Esperança-Rodier, E., Besacier, L., Blanchon, H.: Collection of a large database of French-English SMT output corrections. In: Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC). Istanbul, Turkey (2012) (24) Povey, D., Ghoshal, A., Boulianne, G., Burget, L., Glembek, O., Goel, N., Hannemann, M., Motlicek, P., Qian, Y., Schwarz, P., Silovsky, J., Stemmer, G., Vesely, K.: The Kaldi speech recognition toolkit. In: IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. IEEE Signal Processing Society (2011). IEEE Catalog No.: CFP11SRW-USB (25) Servan, C., Le, N.T., Luong, N.Q., Lecouteux, B., Besacier, L.: An open source toolkit for word-level confidence estimation in machine translation. In: The 12th International Workshop on Spoken Language Translation (IWSLT'15). Da Nang, Vietnam (2015). URL https://hal.archives-ouvertes.fr/hal-01244477 (26) Snover, M., Madnani, N., Dorr, B., Schwartz, R.: TERp system description. In: MetricsMATR Workshop at AMTA (2008) (27) Young, S.R.: Recognition confidence measures: Detection of misrecognitions and out-of-vocabulary words. Proc. of International Conference on Acoustics, Speech and Signal Processing pp. 21–24 (1994)
Conditional Gradient Method for Stochastic Submodular Maximization: Closing the Gap Abstract In this paper, we study the problem of constrained and stochastic continuous submodular maximization. Even though the objective function is not concave (nor convex) and is defined in terms of an expectation, we develop a variant of the conditional gradient method, called Stochastic Continuous Greedy, which achieves a tight approximation guarantee. More precisely, for a monotone and continuous DR-submodular function and subject to a general convex body constraint, we prove that Stochastic Continuous Greedy achieves a $[(1-1/e)\text{OPT}-\epsilon]$ guarantee (in expectation) with $\mathcal{O}(1/\epsilon^{3})$ stochastic gradient computations. This guarantee matches the known hardness results and closes the gap between deterministic and stochastic continuous submodular maximization. By using stochastic continuous optimization as an interface, we also provide the first $(1-1/e)$ tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint. 1 Introduction Many procedures in statistics and artificial intelligence require solving non-convex problems, including clustering (Abbasi and Younis, 2007), training deep neural networks (Bengio et al., 2006), and performing Bayesian optimization (Snoek et al., 2012), to name a few. Historically, the focus has been on convexifying non-convex objectives; in recent years, there has been significant progress in optimizing non-convex functions directly. This direct approach has led to provably good guarantees for specific problem instances. Examples include latent variable models (Anandkumar et al., 2014), non-negative matrix factorization (Arora et al., 2012), robust PCA (Netrapalli et al., 2014), matrix completion (Ge et al., 2016), and training certain specific forms of neural networks (Mei et al., 2016).
However, it is well known that in general finding the global optimum of a non-convex optimization problem is NP-hard (Murty and Kabadi, 1987). This computational barrier has mainly shifted the goal of non-convex optimization in two directions: a) finding an approximate local minimum by avoiding saddle points (Ge et al., 2015; Anandkumar and Ge, 2016; Jin et al., 2017; Paternain et al., 2017), or b) characterizing general conditions under which the underlying non-convex optimization is tractable (Hazan et al., 2016). In this paper, we consider a broad class of non-convex optimization problems that possess special combinatorial structures. More specifically, we focus on constrained maximization of stochastic continuous submodular functions (CSF) that demonstrate diminishing returns, i.e., continuous DR-submodular functions, $$\max_{{\mathbf{x}}\in{\mathcal{C}}}\ F({\mathbf{x}})\doteq\max_{{\mathbf{x}}\in{\mathcal{C}}}\ \mathbb{E}_{{\mathbf{z}}\sim P}[{\tilde{F}({\mathbf{x}},{\mathbf{z}})}].$$ (1) Here, the functions $\tilde{F}:{\mathcal{X}}\times\mathcal{Z}\to{\mathbb{R}}_{+}$ are stochastic, where ${\mathbf{x}}\in{\mathcal{X}}$ is the optimization variable, ${\mathbf{z}}\in\mathcal{Z}$ is a realization of the random variable ${\mathbf{Z}}$ drawn from a distribution $P$, and ${\mathcal{X}}\subseteq{\mathbb{R}}^{n}_{+}$ is a compact set. Our goal is to maximize the expected value of the random functions $\tilde{F}({\mathbf{x}},{\mathbf{z}})$ over the convex body ${\mathcal{C}}\subseteq{\mathbb{R}}^{n}_{+}$. Note that we only assume that $F({\mathbf{x}})$ is DR-submodular, and not necessarily the stochastic functions $\tilde{F}({\mathbf{x}},{\mathbf{z}})$.
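As a concrete toy instance of Problem (1) — our own illustration, not an example from the paper — consider probabilistic coverage: $\tilde{F}({\mathbf{x}},{\mathbf{z}})=1-\prod_{i}(1-z_{i}x_{i})$ with independent $z_{i}\sim\mathrm{Bernoulli}(p_{i})$. Each realization is multilinear in ${\mathbf{x}}$, and by independence the expectation factorizes as $F({\mathbf{x}})=1-\prod_{i}(1-p_{i}x_{i})$, which is monotone and DR-submodular on $[0,1]^{n}$ (all second derivatives are non-positive).

```python
import numpy as np

# Toy stochastic DR-submodular objective (our illustrative assumption):
# F~(x, z) = 1 - prod_i (1 - z_i x_i), z_i ~ Bernoulli(p_i),
# with closed-form expectation F(x) = 1 - prod_i (1 - p_i x_i).

rng = np.random.default_rng(0)
p = np.array([0.7, 0.5, 0.9])        # Bernoulli parameters (toy choice)

def F_tilde(x, z):
    # stochastic objective: probability a random "user" z is covered by x
    return 1.0 - np.prod(1.0 - z * x)

def F_exact(x):
    # expectation factorizes because the z_i are independent
    return 1.0 - np.prod(1.0 - p * x)

x = np.array([0.2, 0.6, 0.4])
samples = [F_tilde(x, (rng.random(3) < p).astype(float)) for _ in range(50000)]
print(np.mean(samples), F_exact(x))   # Monte Carlo estimate vs. closed form
```

Only the expectation $F$ needs to be DR-submodular for the theory above; this example happens to have DR-submodular realizations as well, but that is not required.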
We also consider situations where the distribution $P$ is either unknown (e.g., when the objective is given as an implicit stochastic model) or the domain of the random variable ${\mathbf{Z}}$ is very large (e.g., when the objective is defined in terms of an empirical risk) which makes the cost of computing the expectation very high. In these regimes, stochastic optimization methods, which operate on computationally cheap estimates of gradients, arise as natural solutions. In fact, very recently, it was shown in (Hassani et al., 2017) that stochastic gradient methods achieve a $(1/2)$ approximation guarantee to Problem (1). The authors also showed that current versions of the conditional gradient method (a.k.a., Frank-Wolfe), such as continuous greedy (Vondrák, 2008) or its close variant (Bian et al., 2017), can perform arbitrarily poorly in stochastic continuous submodular maximization settings. Our contributions. We provide the first tight $(1-1/e)$ approximation guarantee for Problem (1) when the continuous function $F$ is monotone, smooth, DR-submodular, and the constraint set ${\mathcal{C}}$ is a bounded convex body. To this end, we develop a novel conditional gradient method, called Stochastic Continuous Greedy (SCG), that produces a solution with an objective value larger than $((1-1/e)\text{OPT}-\epsilon)$ after $O\left({1}/{\epsilon^{3}}\right)$ iterations while only having access to unbiased estimates of the gradients (here OPT denotes the optimal value of Problem (1)). SCG is also memory efficient in the following sense: in contrast to previously proposed conditional gradient methods in stochastic convex (Hazan and Luo, 2016) and non-convex (Reddi et al., 2016) settings, SCG does not require using a minibatch in each step. Instead it simply averages over the stochastic estimates of the previous gradients. Connection to Discrete Problems. 
Even though submodularity has been mainly studied in discrete domains (Fujishige, 2005), many efficient methods for optimizing such submodular set functions rely on continuous relaxations, either through the multilinear extension (Vondrák, 2008) (for maximization) or the Lovász extension (Lovász, 1983) (for minimization). In fact, Problem (1) has a discrete counterpart, recently considered in (Hassani et al., 2017; Karimi et al., 2017): $$\max_{S\in{\mathcal{I}}}f(S)\doteq\max_{S\in{\mathcal{I}}}{\mathbb{E}}_{{\mathbf{z}}\sim P}[\tilde{f}(S,{\mathbf{z}})],$$ (2) where the functions $\tilde{f}:2^{V}\times\mathcal{Z}\rightarrow{\mathbb{R}}_{+}$ are stochastic, $S$ is the optimization set variable defined over a ground set $V$, ${\mathbf{z}}\in\mathcal{Z}$ is the realization of a random variable ${\mathbf{Z}}$ drawn from the distribution $P$, and ${\mathcal{I}}$ is a general matroid constraint. Since $P$ is unknown, Problem (2) cannot be directly solved using the current state-of-the-art techniques. Instead, Hassani et al. (2017) showed that by lifting the problem to the continuous domain (via the multilinear relaxation) and applying stochastic gradient methods to the continuous relaxation, one can reach a solution that is within a factor $(1/2)$ of the optimum. Concurrently, Karimi et al. (2017) used a concave relaxation technique to provide a $(1-1/e)$ approximation for the class of submodular coverage functions. Our work also closes the gap for stochastic submodular set maximization, namely Problem (2), by providing the first tight $(1-1/e)$ approximation guarantee for general monotone submodular set functions subject to a matroid constraint. Notation.  Lowercase boldface ${\mathbf{v}}$ denotes a vector and uppercase boldface ${\mathbf{A}}$ a matrix. We use $\|{\mathbf{v}}\|$ to denote the Euclidean norm of vector ${\mathbf{v}}$.
The $i$-th element of the vector ${\mathbf{v}}$ is written as $v_{i}$, and the element in the $i$-th row and $j$-th column of the matrix ${\mathbf{A}}$ is denoted by $A_{i,j}$. 2 Related Work Maximizing a deterministic submodular set function has been extensively studied. The celebrated result of Nemhauser et al. (1978) shows that a greedy algorithm achieves a $(1-1/e)$ approximation guarantee for a monotone function subject to a cardinality constraint. It is also known that this result is tight under reasonable complexity-theoretic assumptions (Feige, 1998). Recently, variants of the greedy algorithm have been proposed to extend the above result to non-monotone functions and more general constraints (Feige et al., 2011; Buchbinder et al., 2015, 2014; Feldman et al., 2017). While discrete greedy algorithms are fast, they usually do not provide the tightest guarantees for many classes of feasibility constraints. This is why continuous relaxations of submodular functions, e.g., the multilinear extension, have gained a lot of interest (Vondrák, 2008; Calinescu et al., 2011; Chekuri et al., 2014; Feldman et al., 2011; Gharan and Vondrák, 2011; Sviridenko et al., 2015). In particular, it is known that the continuous greedy algorithm achieves a $(1-1/e)$ approximation guarantee for monotone submodular functions under a general matroid constraint (Calinescu et al., 2011). An improved $((1-e^{-c})/c)$-approximation guarantee can be obtained if $f$ has curvature $c$ (Vondrák, 2010). Continuous submodularity naturally arises in many learning applications such as robust budget allocation (Staib and Jegelka, 2017; Soma et al., 2014), online resource allocation (Eghbali and Fazel, 2016), learning assignments (Golovin et al., 2014), as well as Adwords for e-commerce and advertising (Devanur and Jain, 2012; Mehta et al., 2007). Maximizing a deterministic continuous submodular function dates back to the work of Wolsey (1982). More recently, Chekuri et al.
(2015) proposed a multiplicative weight update algorithm that achieves a $(1-1/e-\epsilon)$ approximation guarantee after $\tilde{O}(n/\epsilon^{2})$ oracle calls to gradients of a monotone smooth submodular function $F$ (i.e., twice differentiable DR-submodular) subject to a polytope constraint. A similar approximation factor can be obtained after $\mathcal{O}(n/\epsilon)$ oracle calls to gradients of $F$ for monotone DR-submodular functions subject to a down-closed convex body using the continuous greedy method (Bian et al., 2017). However, such results require exact computation of the gradients $\nabla F$, which is not feasible in Problem (1). An alternative approach is then to modify the current algorithms by replacing the gradients $\nabla F({\mathbf{x}}_{t})$ by their stochastic estimates $\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})$; however, this modification may lead to arbitrarily poor solutions, as demonstrated in (Hassani et al., 2017). Another alternative is to estimate the gradient by averaging over a (large) mini-batch of samples at each iteration. While this approach can potentially reduce the noise variance, it increases the computational complexity of each iteration and is not favorable. The work by Hassani et al. (2017) is perhaps the first attempt to solve Problem (1) using only stochastic estimates of gradients (without using a large batch). They showed that the stochastic gradient ascent method achieves a $(1/2-\epsilon)$ approximation guarantee after $O(1/\epsilon^{2})$ iterations. Although this work opens the door to maximizing stochastic CSFs using computationally cheap stochastic gradients, it fails to achieve the optimal $(1-1/e)$ approximation. To close the gap, we propose in this paper Stochastic Continuous Greedy, which outputs a solution with function value at least $((1-1/e)\text{OPT}-\epsilon)$ after $O(1/\epsilon^{3})$ iterations.
Notably, our result only requires the expected function $F$ to be monotone and DR-submodular and the stochastic functions $\tilde{F}$ need not be monotone nor DR-submodular. Moreover, in contrast to the result in (Bian et al., 2017), which holds only for down-closed convex constraints, our result holds for any convex constraints. Our result also has important implications for Problem (2); that is, maximizing a stochastic discrete submodular function subject to a matroid constraint. Since the proposed SCG method works in stochastic settings, we can relax the discrete objective function $f$ in Problem (2) to a continuous function $F$ through the multi-linear extension (note that expectation is a linear operator). Then we can maximize $F$ within a $(1-1/e-\epsilon)$ approximation to the optimum value by using only ${\mathcal{O}}(1/\epsilon^{3})$ oracle calls to the stochastic gradients of $F$. Finally, a proper rounding scheme (such as the contention resolution method (Chekuri et al., 2014)) results in a feasible set whose value is a $(1-1/e)$ approximation to the optimum set in expectation. The focus of our paper is on the maximization of stochastic submodular functions. However, there are also very interesting results for minimization of such functions (Staib and Jegelka, 2017; Ene et al., 2017; Chakrabarty et al., 2017; Iyer et al., 2013). 3 Continuous Submodularity We begin by recalling the definition of a submodular set function: A function $f:2^{V}\rightarrow{\mathbb{R}}_{+}$, defined on the ground set $V$, is called submodular if for all subsets $A,B\subseteq V$, we have $$f(A)+f(B)\geq f(A\cap B)+f(A\cup B).$$ The notion of submodularity goes beyond the discrete domain (Wolsey, 1982; Vondrák, 2007; Bach, 2015). Consider a continuous function $F:{\mathcal{X}}\to{\mathbb{R}}_{+}$ where the set ${\mathcal{X}}$ is of the form ${\mathcal{X}}=\prod_{i=1}^{n}{\mathcal{X}}_{i}$ and each ${\mathcal{X}}_{i}$ is a compact subset of ${\mathbb{R}}_{+}$. 
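The set-function definition above can be checked exhaustively on a tiny instance. The following sketch (our own toy coverage function, not an example from the paper) verifies $f(A)+f(B)\geq f(A\cap B)+f(A\cup B)$ over all pairs of subsets; coverage functions are a standard family satisfying this inequality.

```python
from itertools import chain, combinations

# Toy coverage function f(S) = |union of items covered by S| (our illustrative
# instance); coverage functions are classic examples of submodular set functions.
covers = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}

def f(S):
    return len(set().union(set(), *(covers[i] for i in S)))

V = list(covers)
subsets = list(chain.from_iterable(combinations(V, r) for r in range(len(V) + 1)))

# Check f(A) + f(B) >= f(A ∩ B) + f(A ∪ B) for every pair of subsets.
ok = all(
    f(A) + f(B) >= f(set(A) & set(B)) + f(set(A) | set(B))
    for A in subsets
    for B in subsets
)
print(ok)
```

The exhaustive check is only viable for tiny ground sets, of course; for the continuous definitions that follow, the analogous conditions are the lattice inequality (3) and the antitone-gradient property (5).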
We call the continuous function $F$ submodular if for all ${\mathbf{x}},{\mathbf{y}}\in{\mathcal{X}}$ we have $$\displaystyle F({\mathbf{x}})+F({\mathbf{y}})\geq F({\mathbf{x}}\vee{\mathbf{y}})+F({\mathbf{x}}\wedge{\mathbf{y}}),$$ (3) where ${\mathbf{x}}\vee{\mathbf{y}}:=\max({\mathbf{x}},{\mathbf{y}})$ (component-wise) and ${\mathbf{x}}\wedge{\mathbf{y}}:=\min({\mathbf{x}},{\mathbf{y}})$ (component-wise). In this paper, our focus is on differentiable continuous submodular functions with two additional properties: monotonicity and diminishing returns. Formally, a submodular function $F$ is monotone (on the set ${\mathcal{X}}$) if $$\displaystyle{\mathbf{x}}\leq{\mathbf{y}}\quad\Longrightarrow\quad F({\mathbf{x}})\leq F({\mathbf{y}}),$$ (4) for all ${\mathbf{x}},{\mathbf{y}}\in{\mathcal{X}}$. Note that ${\mathbf{x}}\leq{\mathbf{y}}$ in (4) means that $x_{i}\leq y_{i}$ for all $i=1,\dots,n$. Furthermore, a differentiable submodular function $F$ is called DR-submodular (i.e., shows diminishing returns) if the gradients are antitone, namely, for all ${\mathbf{x}},{\mathbf{y}}\in{\mathcal{X}}$ we have $$\displaystyle{\mathbf{x}}\leq{\mathbf{y}}\quad\Longrightarrow\quad\nabla F({\mathbf{x}})\geq\nabla F({\mathbf{y}}).$$ (5) When the function $F$ is twice differentiable, submodularity implies that all cross-second-derivatives are non-positive (Bach, 2015), i.e., $$\text{for all}\ i\neq j,\ \ \text{for all}\ {\mathbf{x}}\in{\mathcal{X}},\quad\frac{\partial^{2}F({\mathbf{x}})}{\partial x_{i}\partial x_{j}}\leq 0,$$ (6) and DR-submodularity implies that all second-derivatives are non-positive (Bian et al., 2017), i.e., $$\text{for all}\ i,j,\ \ \text{for all}\ {\mathbf{x}}\in{\mathcal{X}},\quad\frac{\partial^{2}F({\mathbf{x}})}{\partial x_{i}\partial x_{j}}\leq 0.$$ (7) 4 Stochastic Continuous Greedy In this section, we introduce our main algorithm, Stochastic Continuous Greedy (SCG), which is a stochastic variant of the continuous greedy method to solve
Problem (1). We only assume that the expected objective function $F$ is monotone and DR-submodular; the stochastic functions $\tilde{F}({\mathbf{x}},{\mathbf{z}})$ may be neither monotone nor submodular. Since the objective function $F$ is monotone and DR-submodular, the continuous greedy algorithm (Bian et al., 2017; Calinescu et al., 2011) can in principle be used to solve Problem (1). Note that each update of continuous greedy requires computing the gradient of $F$, i.e., $\nabla F({\mathbf{x}}):=\mathbb{E}[{\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})}]$. However, if we only have access to the (computationally cheap) stochastic gradients ${\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})}$, then the continuous greedy method is not directly usable (Hassani et al., 2017). This limitation is due to the non-vanishing variance of the gradient approximations. To resolve this issue, we introduce a stochastic version of the continuous greedy algorithm which reduces the noise of the gradient approximations via a common averaging technique in stochastic optimization (Ruszczyński, 1980, 2008; Yang et al., 2016; Mokhtari et al., 2017). Let $t\in\mathbb{N}$ be a discrete time index and $\rho_{t}$ a given stepsize which approaches zero as $t$ approaches infinity. Our proposed gradient estimate ${\mathbf{d}}_{t}$ is defined by the recursion $${\mathbf{d}}_{t}=(1-\rho_{t}){\mathbf{d}}_{t-1}+\rho_{t}\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t}),$$ (8) where the initial vector is defined as ${\mathbf{d}}_{0}={\mathbf{0}}$. It can be shown that the averaging technique in (8) reduces the noise of the gradient approximation as time increases. More formally, the expected noise of the gradient estimate ${\mathbb{E}}\left[\|{\mathbf{d}}_{t}-\nabla F({\mathbf{x}}_{t})\|^{2}\right]$ approaches zero asymptotically (Lemma 2).
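The variance-reduction effect of the recursion (8) can be simulated directly. In the sketch below (our own illustration) we freeze the iterate so the true gradient is constant and the drift terms of Lemma 1 vanish; the Gaussian noise model, dimension, and horizon are assumptions made for the demo.

```python
import numpy as np

# Simulate recursion (8) at a frozen iterate: d_t = (1-rho_t) d_{t-1} + rho_t (g + noise).
# The squared error ||d_t - g||^2 should keep shrinking as t grows (cf. Lemma 2).

rng = np.random.default_rng(0)
g = np.array([1.0, -2.0, 0.5])        # frozen "true" gradient (toy choice)
d = np.zeros_like(g)                   # d_0 = 0
early_err, late_err = [], []
for t in range(1, 10001):
    rho = 4.0 / (t + 8) ** (2.0 / 3.0)        # the averaging weight used in Lemma 2
    noisy = g + rng.normal(0.0, 1.0, size=3)  # unbiased stochastic gradient sample
    d = (1 - rho) * d + rho * noisy           # recursion (8)
    err = float(np.sum((d - g) ** 2))
    if 100 <= t < 200:
        early_err.append(err)
    elif t >= 9000:
        late_err.append(err)
early, late = np.mean(early_err), np.mean(late_err)
print(early, late)   # the late-stage error is much smaller than the early one
```

A single raw sample $g+\text{noise}$ has constant squared error, so the improvement over time is entirely due to the averaging.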
This property implies that the gradient estimate ${\mathbf{d}}_{t}$ is a better candidate for approximating the gradient $\nabla F({\mathbf{x}}_{t})$ than the unbiased gradient estimate $\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})$, which suffers from high variance. We therefore define the ascent direction ${\mathbf{v}}_{t}$ of our proposed SCG method as $${\mathbf{v}}_{t}=\operatornamewithlimits{argmax}_{{\mathbf{v}}\in{\mathcal{C}}}\{{\mathbf{d}}_{t}^{T}{\mathbf{v}}\},$$ (9) which is a linear objective maximization over the convex set ${\mathcal{C}}$. Indeed, if instead of the gradient estimate ${\mathbf{d}}_{t}$ we use the exact gradient $\nabla F({\mathbf{x}}_{t})$ for the updates in (9), the continuous greedy update is recovered. Here, as in continuous greedy, the initial decision vector is the null vector, ${\mathbf{x}}_{0}={\mathbf{0}}$. Further, the stepsize for updating the iterates is equal to $1/T$, and the variable ${\mathbf{x}}_{t}$ is updated as $${\mathbf{x}}_{t+1}={\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t}.$$ (10) The stepsize $1/T$ and the initialization ${\mathbf{x}}_{0}={\mathbf{0}}$ ensure that after $T$ iterations the variable ${\mathbf{x}}_{T}$ ends up in the convex set ${\mathcal{C}}$. We should highlight that the convex body ${\mathcal{C}}$ may not be down-closed or contain ${\mathbf{0}}$. Nonetheless, the solution ${\mathbf{x}}_{T}$ returned by SCG will be a feasible point in ${\mathcal{C}}$. The steps of the proposed SCG method are outlined in Algorithm 1. 5 Convergence Analysis In this section, we study the convergence properties of our proposed SCG method for solving Problem (1). To do so, we first assume that the following conditions hold.
Assumption 1 The Euclidean norms of the elements of the constraint set ${\mathcal{C}}$ are uniformly bounded, i.e., for all ${\mathbf{x}}\in{\mathcal{C}}$ we can write $$\|{\mathbf{x}}\|\leq D.$$ (11) Assumption 2 The function $F$ is DR-submodular and monotone. Further, its gradients are $L$-Lipschitz continuous over the set ${\mathcal{X}}$, i.e., for all ${\mathbf{x}},{\mathbf{y}}\in{\mathcal{X}}$ $$\|\nabla F({\mathbf{x}})-\nabla F({\mathbf{y}})\|\leq L\|{\mathbf{x}}-{\mathbf{y}}\|.$$ (12) Assumption 3 The variance of the unbiased stochastic gradients $\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})$ is bounded above by $\sigma^{2}$, i.e., for any vector ${\mathbf{x}}\in{\mathcal{X}}$ we can write $${\mathbb{E}}\left[\|\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})-\nabla F({\mathbf{x}})\|^{2}\right]\leq\sigma^{2},$$ (13) where the expectation is with respect to the randomness of ${\mathbf{z}}\sim P$. Due to the initialization step of SCG (i.e., starting from ${\mathbf{0}}$) we need a bound on the furthest feasible solution from ${\mathbf{0}}$ that we can end up with; such a bound is guaranteed by Assumption 1. The condition in Assumption 2 ensures that the objective function $F$ is smooth. Note again that $\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})$ may or may not be Lipschitz continuous. Finally, the condition in Assumption 3 guarantees that the variance of the stochastic gradients $\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})$ is bounded by a finite constant $\sigma^{2}<\infty$, which is customary in stochastic optimization. To study the convergence of SCG, we first derive an upper bound for the expected error of gradient approximation (i.e., $\mathbb{E}[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}]$) in the following lemma. Lemma 1 Consider Stochastic Continuous Greedy (SCG) outlined in Algorithm 1.
If Assumptions 1-3 are satisfied, then the sequence of expected squared gradient errors ${\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\right]$ for the iterates generated by SCG satisfies $$\displaystyle{\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\right]\leq\left(1-\frac{\rho_{t}}{2}\right){\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\|^{2}\right]+\rho_{t}^{2}\sigma^{2}+\frac{L^{2}D^{2}}{T^{2}}+\frac{2L^{2}D^{2}}{\rho_{t}T^{2}}.$$ (14) Proof  See Section 9.1.   The result in Lemma 1 shows that the expected squared error of gradient approximation ${\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\right]$ decreases at each iteration by the factor $(1-\rho_{t}/2)$ if the remaining terms on the right hand side of (14) are negligible relative to the term $(1-\rho_{t}/2)\mathbb{E}[\|\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\|^{2}]$. This condition can be satisfied if the parameters $\{\rho_{t}\}$ are chosen properly. We formalize this claim in the following lemma and show that the expected error $\mathbb{E}[{\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}}]$ converges to zero at a sublinear rate of $\mathcal{O}(t^{-2/3})$. Lemma 2 Consider Stochastic Continuous Greedy (SCG) outlined in Algorithm 1. If Assumptions 1-3 are satisfied and $\rho_{t}=\frac{4}{(t+8)^{2/3}}$, then for $t=0,\dots,T$ we have $$\displaystyle{\mathbb{E}}\left[{\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}}\right]\leq\frac{Q}{(t+9)^{2/3}},$$ (15) where $Q:=\max\{9^{2/3}\,\|\nabla F({\mathbf{x}}_{0})-{\mathbf{d}}_{0}\|^{2},16\sigma^{2}+3L^{2}D^{2}\}$. Proof  See Section 9.2.   Let us now use the result in Lemma 2 to show that the sequence of iterates generated by SCG reaches a $(1-1/e)$ approximation for Problem (1). Theorem 3 Consider Stochastic Continuous Greedy (SCG) outlined in Algorithm 1.
If Assumptions 1-3 are satisfied and $\rho_{t}=\frac{4}{(t+8)^{2/3}}$, then the expected objective function value for the iterates generated by SCG satisfies the inequality $$\displaystyle{\mathbb{E}}\left[F({\mathbf{x}}_{T})\right]\geq(1-1/e)\text{OPT}-\frac{2DQ^{1/2}}{T^{1/3}}-\frac{LD^{2}}{2T},$$ (16) where $\text{OPT}=\max_{{\mathbf{x}}\in{\mathcal{C}}}\ F({\mathbf{x}})$. Proof  Let ${\mathbf{x}}^{*}$ be the global maximizer within the constraint set ${\mathcal{C}}$. Based on the smoothness of the function $F$ with constant $L$ we can write $$\displaystyle F({\mathbf{x}}_{t+1})\geq F({\mathbf{x}}_{t})+\langle\nabla F({\mathbf{x}}_{t}),{\mathbf{x}}_{t+1}-{\mathbf{x}}_{t}\rangle-\frac{L}{2}\|{\mathbf{x}}_{t+1}-{\mathbf{x}}_{t}\|^{2}=F({\mathbf{x}}_{t})+\frac{1}{T}\langle\nabla F({\mathbf{x}}_{t}),{\mathbf{v}}_{t}\rangle-\frac{L}{2T^{2}}\|{\mathbf{v}}_{t}\|^{2},$$ (17) where the equality follows from the update in (10). Since ${\mathbf{v}}_{t}$ is in the set ${\mathcal{C}}$, it follows from Assumption 1 that the norm $\|{\mathbf{v}}_{t}\|^{2}$ is bounded above by $D^{2}$.
Apply this substitution and add and subtract the inner product $\langle{\mathbf{d}}_{t},{\mathbf{v}}_{t}\rangle$ to the right hand side of (17) to obtain $$\displaystyle F({\mathbf{x}}_{t+1})\geq F({\mathbf{x}}_{t})+\frac{1}{T}\langle{\mathbf{v}}_{t},{\mathbf{d}}_{t}\rangle+\frac{1}{T}\langle{\mathbf{v}}_{t},\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\rangle-\frac{LD^{2}}{2T^{2}}\geq F({\mathbf{x}}_{t})+\frac{1}{T}\langle{\mathbf{x}}^{*},{\mathbf{d}}_{t}\rangle+\frac{1}{T}\langle{\mathbf{v}}_{t},\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\rangle-\frac{LD^{2}}{2T^{2}}.$$ (18) Note that the second inequality in (18) holds since, based on (9), we can write $\langle{\mathbf{x}}^{*},{\mathbf{d}}_{t}\rangle\leq\max_{{\mathbf{v}}\in{\mathcal{C}}}\{\langle{\mathbf{v}},{\mathbf{d}}_{t}\rangle\}=\langle{\mathbf{v}}_{t},{\mathbf{d}}_{t}\rangle$. Now add and subtract the inner product $\langle{\mathbf{x}}^{*},\nabla F({\mathbf{x}}_{t})\rangle/T$ to the right hand side of (18) to get $$\displaystyle F({\mathbf{x}}_{t+1})\geq F({\mathbf{x}}_{t})+\frac{1}{T}\langle{\mathbf{x}}^{*},\nabla F({\mathbf{x}}_{t})\rangle+\frac{1}{T}\langle{\mathbf{v}}_{t}-{\mathbf{x}}^{*},\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\rangle-\frac{LD^{2}}{2T^{2}}.$$ (19) We further have $\langle{\mathbf{x}}^{*},\nabla F({\mathbf{x}}_{t})\rangle\geq F({\mathbf{x}}^{*})-F({\mathbf{x}}_{t})$; this follows from monotonicity of $F$ as well as concavity of $F$ along positive directions; see, e.g., (Calinescu et al., 2011). Moreover, by Young’s inequality the inner product $\langle{\mathbf{v}}_{t}-{\mathbf{x}}^{*},\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\rangle$ is lower bounded by $-(\beta_{t}/2)\|{\mathbf{v}}_{t}-{\mathbf{x}}^{*}\|^{2}-(1/2\beta_{t})\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}$ for any $\beta_{t}>0$.
By applying these substitutions into (19) we obtain $$\displaystyle F({\mathbf{x}}_{t+1})\geq F({\mathbf{x}}_{t})+\frac{1}{T}(F({\mathbf{x}}^{*})-F({\mathbf{x}}_{t}))-\frac{LD^{2}}{2T^{2}}-\frac{1}{2T}\left(\beta_{t}\|{\mathbf{v}}_{t}-{\mathbf{x}}^{*}\|^{2}+\frac{\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}}{\beta_{t}}\right).$$ (20) Replace $\|{\mathbf{v}}_{t}-{\mathbf{x}}^{*}\|^{2}$ by its upper bound $4D^{2}$ and compute the expected value of (20) to write $$\displaystyle{\mathbb{E}}\left[F({\mathbf{x}}_{t+1})\right]\geq{\mathbb{E}}\left[F({\mathbf{x}}_{t})\right]+\frac{1}{T}{\mathbb{E}}\left[F({\mathbf{x}}^{*})-F({\mathbf{x}}_{t})\right]-\frac{1}{2T}\left[4\beta_{t}D^{2}+\frac{{\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\right]}{\beta_{t}}\right]-\frac{LD^{2}}{2T^{2}}.$$ (21) Substitute ${\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\right]$ by its upper bound ${Q}/{(t+9)^{2/3}}$ according to the result in (15).
Further, set $\beta_{t}=(Q^{1/2})/(2D(t+9)^{1/3})$ and regroup the resulting expression to obtain $$\displaystyle{\mathbb{E}}\left[F({\mathbf{x}}^{*})-F({\mathbf{x}}_{t+1})\right]\leq\left(1-\frac{1}{T}\right){\mathbb{E}}\left[F({\mathbf{x}}^{*})-F({\mathbf{x}}_{t})\right]+\frac{2DQ^{1/2}}{(t+9)^{1/3}T}+\frac{LD^{2}}{2T^{2}}.$$ (22) By applying the inequality in (22) recursively for $t=0,\dots,T-1$ we obtain $$\displaystyle{\mathbb{E}}\left[F({\mathbf{x}}^{*})-F({\mathbf{x}}_{T})\right]\leq\left(1-\frac{1}{T}\right)^{T}({F({\mathbf{x}}^{*})-F({\mathbf{x}}_{0})})+\sum_{t=0}^{T-1}\frac{2DQ^{1/2}}{(t+9)^{1/3}T}+\sum_{t=0}^{T-1}\frac{LD^{2}}{2T^{2}}.$$ (23) Simplifying the terms on the right hand side of (23) leads to the expression $$\displaystyle{\mathbb{E}}\left[F({\mathbf{x}}^{*})-F({\mathbf{x}}_{T})\right]\leq\frac{1}{e}({F({\mathbf{x}}^{*})-F({\mathbf{x}}_{0})})+\frac{2DQ^{1/2}}{T^{1/3}}+\frac{LD^{2}}{2T}.$$ (24) Here, we use the fact that $F({\mathbf{x}}_{0})\geq 0$, and hence the expression in (24) can be simplified to $${\mathbb{E}}\left[F({\mathbf{x}}_{T})\right]\geq(1-1/e)F({\mathbf{x}}^{*})-\frac{2DQ^{1/2}}{T^{1/3}}-\frac{LD^{2}}{2T},$$ (25) and the claim in (16) follows.   The result in Theorem 3 shows that the sequence of iterates generated by SCG, which only has access to a noisy unbiased estimate of the gradient at each iteration, is able to achieve the optimal approximation bound $(1-1/e)$, while the error term vanishes at a sublinear rate of $\mathcal{O}(T^{-1/3})$. 6 Discrete Submodular Maximization According to the results in Section 5, the SCG method achieves in expectation a $(1-1/e)$-optimal solution for Problem (1). The focus of this section is on extending this result to the discrete domain and showing that SCG can be applied to maximize a stochastic submodular set function $f$, namely Problem (2), through the multilinear extension of the function $f$.
To be more precise, in lieu of solving the program in (2) one can solve the continuous optimization problem $$\displaystyle\max_{{\mathbf{x}}\in{\mathcal{C}}}\ F({\mathbf{x}}),$$ (26) where $F$ is the multilinear extension of the function $f$ defined as $$F({\mathbf{x}})=\sum_{S\subset V}f(S)\prod_{i\in S}x_{i}\prod_{j\notin S}(1-x_% {j}),$$ (27) and the convex set ${\mathcal{C}}=\text{conv}\{1_{I}:I\in{\mathcal{I}}\}$ is the matroid polytope (Calinescu et al., 2011). Note that in (27), $x_{i}$ denotes the $i$-th element of the vector ${\mathbf{x}}$. Indeed, the continuous greedy algorithm is able to solve the program in (26); however, each iteration of the method is computationally costly due to gradient $\nabla F({\mathbf{x}})$ evaluations. Instead, Badanidiyuru and Vondrák (2014) and Chekuri et al. (2015) suggested approximating the gradient using a sufficient number of samples from $f$. This mechanism still requires access to the set function $f$ multiple times at each iteration, and hence is not feasible for solving Problem (2). The idea is then to use a stochastic (unbiased) estimate for the gradient $\nabla F$. In Appendix 9.3, we provide a method to compute an unbiased estimate of the gradient using $n$ samples from $\tilde{f}(S_{i},{\mathbf{z}})$, where ${\mathbf{z}}\sim P$ and $S_{i}$’s, $i=1,\cdots,n$, are carefully chosen sets. Indeed, the stochastic gradient ascent method proposed in (Hassani et al., 2017) can be used to solve the multilinear extension problem in (26) using unbiased estimates of the gradient at each iteration. However, the stochastic gradient ascent method fails to achieve the optimal $(1-1/e)$ approximation. Further, the work in (Karimi et al., 2017) achieves a $(1-1/e)$ approximation solution only when each $\tilde{f}(\cdot,{\mathbf{z}})$ is a coverage function. Here, we show that SCG achieves the first $(1-1/e)$ tight approximation guarantee for the discrete stochastic submodular Problem (2). 
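To make the definition in (27) concrete, the multilinear extension can be evaluated exactly on a tiny ground set by enumerating all subsets. The brute-force sketch below is an illustration only (exponential in $n$, so unusable at realistic scale, where one would instead sample sets with independent coordinates); the set function used in the usage example is an assumed toy coverage function, not one from the paper.

```python
from itertools import combinations

def multilinear_extension(f, x):
    """Exact multilinear extension F(x) = sum_S f(S) prod_{i in S} x_i prod_{j not in S} (1 - x_j).

    f maps a frozenset of indices to a real value; x is a list of coordinates in [0, 1].
    Exponential in len(x), so only suitable for tiny ground sets.
    """
    n = len(x)
    total = 0.0
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            S = frozenset(subset)
            weight = 1.0
            for i in range(n):
                # probability of drawing exactly the set S with independent coordinates
                weight *= x[i] if i in S else (1.0 - x[i])
            total += f(S) * weight
    return total
```

For instance, with the toy coverage function $f(S)=1$ for nonempty $S$ and $f(\emptyset)=0$, the extension is $F({\mathbf{x}})=1-\prod_{i}(1-x_{i})$, so `multilinear_extension(f, [0.5, 0.5])` returns `0.75`.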
More precisely, we show that SCG finds a solution for (26), with an expected function value that is at least $(1-1/e)\text{OPT}-\epsilon$, in $\mathcal{O}(1/\epsilon^{3})$ iterations. To do so, we first show in the following lemma that the difference between corresponding coordinates of the gradients at two consecutive iterates generated by SCG, i.e., $\nabla_{j}F({\mathbf{x}}_{t+1})-\nabla_{j}F({\mathbf{x}}_{t})$ for $j\in\{1,\dots,n\}$, is bounded by $\|{\mathbf{x}}_{t+1}-{\mathbf{x}}_{t}\|$ multiplied by a factor which is independent of the problem dimension $n$. Lemma 4 Consider Stochastic Continuous Greedy (SCG) outlined in Algorithm 1 with iterates ${\mathbf{x}}_{t}$, and recall the definition of the multilinear extension function $F$ in (27). If we define $r$ as the rank of the matroid ${\mathcal{I}}$ and $m_{f}\triangleq\max_{i\in\{1,\cdots,n\}}f(i)$, then $$\displaystyle\left|\nabla_{j}F({\mathbf{x}}_{t+1})-\nabla_{j}F({\mathbf{x}}_{t})\right|\leq m_{f}\sqrt{r}\|{\mathbf{x}}_{t+1}-{\mathbf{x}}_{t}\|,$$ (28) holds for $j=1,\dots,n$. Proof  See Section 9.4.   The result in Lemma 4 states that in an ascent direction of SCG, the gradient is $m_{f}\sqrt{r}$-Lipschitz continuous. Here, $m_{f}$ is the maximum marginal value of the function $f$ and $r$ is the rank of the matroid. Using the result of Lemma 4 and a coordinate-wise analysis, the bounds in Theorem 3 can be improved and specified for the case of the multilinear extension maximization problem, as we show in the following theorem. Theorem 5 Consider Stochastic Continuous Greedy (SCG) outlined in Algorithm 1. Recall the definition of the multilinear extension function $F$ in (27) and the definitions of $r$ and $m_{f}$ in Lemma 4. Further, set the averaging parameter as $\rho_{t}=4/(t+8)^{2/3}$. 
If Assumptions 1 and 3 hold, then the iterate ${\mathbf{x}}_{T}$ generated by SCG satisfies the inequality $$\displaystyle{\mathbb{E}}\left[F({\mathbf{x}}_{T})\right]\geq(1-1/e)OPT-\frac{2DK}{T^{1/3}}-\frac{m_{f}\sqrt{r}D^{2}}{2T},$$ (29) where $K:=\max\{\|\nabla F({\mathbf{x}}_{0})-{\mathbf{d}}_{0}\|9^{1/3},4\sigma+\sqrt{3r}m_{f}D\}.$ Proof  The proof is similar to the proof of Theorem 3. The main difference is to write the analysis coordinate-wise and replace $L$ by ${m_{f}\sqrt{r}}$, as shown in Lemma 4. For more details, check Section 9.5 in the supplementary material.   The result in Theorem 5 indicates that the sequence of iterates generated by SCG achieves a $(1-1/e)\text{OPT}-\epsilon$ approximation guarantee. Note that the constants on the right hand side of (29) are independent of $n$, except $K$ which depends on $\sigma$. It can be shown that, in the worst case, the variance $\sigma$ depends on the size of the ground set $n$ and the variance of the stochastic functions $\tilde{f}(\cdot,{\mathbf{z}})$. Let us now explain how the variance of the stochastic gradients of $F$ relates to the variance of the marginal values of $f$. Consider a generic submodular set function $g$ and its multilinear extension $G$. It is easy to show that $$\nabla_{j}G({\mathbf{x}})=G({\mathbf{x}};x_{j}=1)-G({\mathbf{x}};x_{j}=0).$$ (30) Hence, from submodularity we have $\nabla_{j}G({\mathbf{x}})\leq g(\{j\})$. Using this simple fact we can deduce that $${\mathbb{E}}\left[\|\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})-\nabla F({\mathbf{x}})\|^{2}\right]\leq n\max_{j\in[n]}\mathbb{E}[\tilde{f}(\{j\},\mathbf{z})^{2}].$$ (31) Therefore, the constant $\sigma$ can be upper bounded by $$\sigma\leq\sqrt{n}\max_{j\in[n]}\mathbb{E}[\tilde{f}(\{j\},\mathbf{z})^{2}]^{1/2}.$$ (32) As a result, we have the following guarantee for SCG in the case of multilinear functions. Corollary 6 Consider Stochastic Continuous Greedy (SCG) outlined in Algorithm 1. 
Suppose the conditions in Theorem 5 are satisfied. Then, the sequence of iterates generated by SCG achieves a $(1-1/e)OPT-\epsilon$ solution after $\mathcal{O}({n^{3/2}}/{\epsilon^{3}})$ iterations. Proof  According to the result in Theorem 5, SCG reaches a $(1-1/e)OPT-\mathcal{O}(n^{1/2}/T^{1/3})$ solution after $T$ iterations. Therefore, to achieve a $((1-1/e)OPT-\epsilon)$ approximation, $\mathcal{O}(n^{3/2}/\epsilon^{3})$ iterations are required.   7 Numerical Experiments In our experiments, we consider a movie recommendation application (Stan et al., 2017) consisting of $N$ users and $n$ movies. Each user $i$ has a user-specific utility function $f(\cdot,i)$ for evaluating sets of movies. The goal is to find a set of $k$ movies such that in expectation over users’ preferences it provides the highest utility, i.e., $\max_{|S|\leq k}f(S)$, where $f(S)\doteq\mathbb{E}_{i\sim P}[f(S,i)]$. This is an instance of the (discrete) stochastic submodular maximization problem defined in (2). For simplicity, we assume $f$ has the form of an empirical objective function, i.e. $f(S)=\frac{1}{N}\sum_{i=1}^{N}f(S,i)$. In other words, the distribution $P$ is assumed to be uniform on the integers between $1$ and $N$. The continuous counterpart of this problem is to consider the multilinear extension $F(\cdot,i)$ of each function $f(\cdot,i)$ and solve the problem in the continuous domain as follows. Let $F({\mathbf{x}})=\mathbb{E}_{i\sim P}[F({\mathbf{x}},i)]$ for ${\mathbf{x}}\in[0,1]^{n}$ and define the constraint set $\mathcal{C}=\{{\mathbf{x}}\in[0,1]^{n}:\sum_{i=1}^{n}x_{i}\leq k\}$. The discrete and continuous optimization formulations lead to the same optimal value (Calinescu et al., 2011): $$\max_{S:|S|\leq k}f(S)=\max_{{\mathbf{x}}\in\mathcal{C}}F({\mathbf{x}}).$$ (33) Therefore, by running SCG we can find a solution in the continuous domain that is at least a $(1-1/e)$ approximation to the optimal value. 
By rounding that fractional solution (for instance via randomized Pipage rounding (Calinescu et al., 2011)) we obtain a set whose utility is at least $1-1/e$ of the optimum solution set of size $k$. We note that randomized Pipage rounding does not need access to the value of $f$. We also remark that each iteration of SCG can be done very efficiently in $O(n)$ time (the $\operatornamewithlimits{argmax}$ step reduces to finding the largest $k$ elements of a vector of length $n$). Therefore, such an approach easily scales to big data scenarios where the size of the data set $N$ (e.g. the number of users) or the number of items $n$ (e.g. the number of movies) is very large. In our experiments, we consider the following baselines: (i) Stochastic Continuous Greedy (SCG) with $\rho_{t}=\frac{1}{2}t^{-2/3}$ and mini-batch size $B$. The details for computing an unbiased estimator for the gradient of $F$ are given in Section 9.3 in the supplementary material. (ii) Stochastic Gradient Ascent (SGA) of (Hassani et al., 2017): with stepsize $\mu_{t}=c/\sqrt{t}$ and mini-batch size $B$. (iii) Frank-Wolfe (FW) variant of (Bian et al., 2017; Calinescu et al., 2011): with parameter $T$ for the total number of iterations and batch size $B$ (we further let $\alpha=1,\delta=0$; see Algorithm 1 in (Bian et al., 2017) or the continuous greedy method of (Calinescu et al., 2011) for more details). (iv) Batch-mode Greedy (Greedy): running the vanilla greedy algorithm (in the discrete domain) in the following way. At each round of the algorithm (for selecting a new element), $B$ random users are picked and the function $f$ is estimated by the average over the $B$ selected users. To run the experiments we use the MovieLens data set. It consists of 1 million ratings (from 1 to 5) by $N=6041$ users for $n=4000$ movies. Let $r_{i,j}$ denote the rating of user $i$ for movie $j$ (if such a rating does not exist we set $r_{i,j}=0$). 
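For illustration, a minimal sketch of the SCG baseline under the cardinality constraint above might look as follows. As noted, the linear maximization over $\mathcal{C}$ reduces to a top-$k$ selection; the gradient oracle here is a hypothetical stand-in for the mini-batch estimator of Section 9.3, and the $\rho_t$ schedule is shifted by one so that it is defined at $t=0$ (an assumption of this sketch, not a prescription from the paper).

```python
def scg_topk(grad_oracle, n, k, T):
    """Sketch of SCG for the cardinality polytope {x in [0,1]^n : sum_i x_i <= k}.

    grad_oracle(x) returns an unbiased estimate of the gradient of F at x.
    The linear maximization over this polytope is a top-k selection.
    """
    x = [0.0] * n
    d = [0.0] * n
    for t in range(T):
        rho = 0.5 * (t + 1) ** (-2.0 / 3.0)   # averaging parameter, shifted to avoid t = 0
        g = grad_oracle(x)
        d = [(1.0 - rho) * di + rho * gi for di, gi in zip(d, g)]
        top = sorted(range(n), key=lambda i: d[i], reverse=True)[:k]
        for i in top:                          # v_t = argmax_{v in C} <v, d_t> is the top-k indicator
            x[i] += 1.0 / T                    # x_{t+1} = x_t + v_t / T
    return x
```

With a deterministic oracle returning a fixed linear gradient, the iterate concentrates on the $k$ largest coordinates, which matches the intended behavior of the linear-maximization step.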
In our experiments, we consider two well-motivated objective functions. The first one is called “facility location”, where the valuation function of user $i$ is defined as $f(S,i)=\max_{j\in S}r_{i,j}$. In words, the way user $i$ evaluates a set $S$ is by picking the highest rated movie in $S$. Thus, the objective function is $$f_{\rm fac}(S)=\frac{1}{N}\sum_{i=1}^{N}\max_{j\in S}r_{i,j}.$$ (34) In our second experiment, we consider a different user-specific valuation function which is a concave function composed with a modular function, i.e., $f(S,i)=(\sum_{j\in S}r_{i,j})^{1/2}.$ Again, by considering the uniform distribution over the set of users, we obtain $$f_{\rm con}(S)=\frac{1}{N}\sum_{i=1}^{N}\Big{(}\sum_{j\in S}r_{i,j}\Big{)}^{1/2}.$$ (35) Figure 1 depicts the performance of different algorithms for the two proposed objective functions. As Figures 1(a) and 1(c) show, the FW algorithm needs a higher mini-batch size to be comparable to SCG. Note that a smaller batch size leads to less computational effort (under the same values of $B,T$, the computational complexity of FW, SGA, and SCG is almost the same). Figure 1(b) shows the performance of the algorithms with respect to the number of times the (simple) functions (i.e., $f(\cdot,i)$’s) are evaluated. Note that the total number of (simple) function evaluations for SCG and SGA is $nBT$, where $T$ is the number of iterations. Also, for Greedy the total number of evaluations is $nkB$. This further shows that SCG has a better computational complexity requirement w.r.t. SGA as well as the Greedy algorithm (in the discrete domain). 8 Conclusion In this paper, we provided the first tight approximation guarantee for maximizing a stochastic monotone DR-submodular function subject to a general convex body constraint. We developed Stochastic Continuous Greedy, which achieves a $[(1-1/e)\text{OPT}-\epsilon]$ guarantee (in expectation) with $\mathcal{O}{(1/\epsilon^{3})}$ stochastic gradient computations. 
We also demonstrated that our continuous algorithm can be used to provide the first $(1-1/e)$ tight approximation guarantee for maximizing a monotone but stochastic submodular set function subject to a general matroid constraint. We believe that our results provide an important step towards unifying discrete and continuous submodular optimization in stochastic settings. 9 Appendix 9.1 Proof of Lemma 1 Use the definition ${\mathbf{d}}_{t}:=(1-\rho_{t}){\mathbf{d}}_{t-1}+\rho_{t}\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})$ to write $\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}$ as $$\displaystyle\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}=\|\nabla F({\mathbf{x}}_{t})-(1-\rho_{t}){\mathbf{d}}_{t-1}-\rho_{t}\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})\|^{2}.$$ (36) Add and subtract the term $(1-\rho_{t})\nabla F({\mathbf{x}}_{t-1})$ on the right hand side of (36) and regroup the terms to obtain $$\displaystyle\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}$$ $$\displaystyle=\|\rho_{t}(\nabla F({\mathbf{x}}_{t})-\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t}))+(1-\rho_{t})(\nabla F({\mathbf{x}}_{t})-\nabla F({\mathbf{x}}_{t-1}))+(1-\rho_{t})(\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1})\|^{2}.$$ (37) Define ${\mathcal{F}}_{t}$ as a sigma algebra that measures the history of the system up until time $t$. 
Expanding the square and computing the conditional expectation ${\mathbb{E}}\left[\cdot\mid{\mathcal{F}}_{t}\right]$ of the resulting expression yields $$\displaystyle{\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\mid{\mathcal{F}}_{t}\right]=\rho_{t}^{2}{\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})\|^{2}\mid{\mathcal{F}}_{t}\right]+(1-\rho_{t})^{2}\|\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\|^{2}$$ $$\displaystyle\quad+(1-\rho_{t})^{2}\|\nabla F({\mathbf{x}}_{t})-\nabla F({\mathbf{x}}_{t-1})\|^{2}+2(1-\rho_{t})^{2}\langle\nabla F({\mathbf{x}}_{t})-\nabla F({\mathbf{x}}_{t-1}),\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\rangle,$$ (38) where the cross terms involving $\nabla F({\mathbf{x}}_{t})-\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})$ vanish because $\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})$ is an unbiased estimate of $\nabla F({\mathbf{x}}_{t})$ and the remaining factors are deterministic given ${\mathcal{F}}_{t}$. The term ${\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-\nabla\tilde{F}({\mathbf{x}}_{t},{\mathbf{z}}_{t})\|^{2}\mid{\mathcal{F}}_{t}\right]$ can be bounded above by $\sigma^{2}$ according to Assumption 3. Based on Assumptions 1 and 2, we can also show that the squared norm $\|\nabla F({\mathbf{x}}_{t})-\nabla F({\mathbf{x}}_{t-1})\|^{2}$ is upper bounded by $L^{2}D^{2}/T^{2}$. Moreover, the inner product $2\langle\nabla F({\mathbf{x}}_{t})-\nabla F({\mathbf{x}}_{t-1}),\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\rangle$ can be upper bounded by $\beta_{t}\|\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\|^{2}+(1/\beta_{t})L^{2}D^{2}/T^{2}$ using Young’s inequality (i.e., $2\langle{\mathbf{a}},{\mathbf{b}}\rangle\leq\beta\|{\mathbf{a}}\|^{2}+\|{\mathbf{b}}\|^{2}/\beta$ for any ${\mathbf{a}},{\mathbf{b}}\in{\mathbb{R}}^{n}$ and $\beta>0$) and the conditions in Assumptions 1 and 2, where $\beta_{t}>0$ is a free scalar. 
Applying these substitutions in (38) leads to $$\displaystyle{\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\mid{\mathcal{F}}_{t}\right]\leq\rho_{t}^{2}\sigma^{2}+(1-\rho_{t})^{2}\left(1+\frac{1}{\beta_{t}}\right)\frac{L^{2}D^{2}}{T^{2}}+(1-\rho_{t})^{2}(1+\beta_{t})\|\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\|^{2}.$$ (39) Replace $(1-\rho_{t})^{2}$ by $(1-\rho_{t})$, set $\beta_{t}:=\rho_{t}/2$, and compute the expectation with respect to ${\mathcal{F}}_{0}$ to obtain $$\displaystyle{\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\right]\leq\rho_{t}^{2}\sigma^{2}+\frac{L^{2}D^{2}}{T^{2}}+\frac{2L^{2}D^{2}}{\rho_{t}T^{2}}+\left(1-\frac{\rho_{t}}{2}\right){\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t-1})-{\mathbf{d}}_{t-1}\|^{2}\right],$$ (40) and the claim in (14) follows. 9.2 Proof of Lemma 2 Define $a_{t}:={\mathbb{E}}\left[{\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}}\right]$. Also, assume $\rho_{t}=\frac{4}{(t+s)^{2/3}}$ where $s$ is a fixed scalar that satisfies the condition $8\leq s\leq T$ (so the proof is slightly more general). Apply these substitutions in (14) to obtain $$\displaystyle a_{t}\leq\left(1-\frac{2}{(t+s)^{2/3}}\right)a_{t-1}+\frac{16\sigma^{2}}{(t+s)^{4/3}}+\frac{L^{2}D^{2}}{T^{2}}+\frac{L^{2}D^{2}(t+s)^{2/3}}{2T^{2}}.$$ (41) Now use the conditions $s\leq T$ and $t\leq T$ to replace $1/T$ in (41) by its upper bound $2/(t+s)$. Applying this substitution leads to $$\displaystyle a_{t}\leq\left(1-\frac{2}{(t+s)^{2/3}}\right)a_{t-1}+\frac{16\sigma^{2}}{(t+s)^{4/3}}+\frac{4L^{2}D^{2}}{(t+s)^{2}}+\frac{2L^{2}D^{2}}{(t+s)^{4/3}}.$$ (42) Since $t+s\geq 8$ we can write $(t+s)^{2}=(t+s)^{4/3}(t+s)^{2/3}\geq(t+s)^{4/3}8^{2/3}\geq 4(t+s)^{4/3}$. 
Replacing the term $(t+s)^{2}$ in (42) by $4(t+s)^{4/3}$ and regrouping the terms lead to $$\displaystyle a_{t}\leq\left(1-\frac{2}{(t+s)^{2/3}}\right)a_{t-1}+\frac{16% \sigma^{2}+3L^{2}D^{2}}{(t+s)^{4/3}}$$ (43) Now we prove by induction that for $t=0,\dots,T$ we can write $$a_{t}\leq\frac{Q}{(t+s+1)^{2/3}},$$ (44) where $Q:=\max\{a_{0}(s+1)^{2/3},16\sigma^{2}+3L^{2}D^{2}\}$. First, note that $Q\geq a_{0}(s+1)^{2/3}$ and therefore $a_{0}\leq Q/(s+1)^{2/3}$ and the base step of the induction holds true. Now assume that the condition in (44) holds for $t=k-1$, i.e., $$a_{k-1}\leq\frac{Q}{(k+s)^{2/3}}.$$ (45) The goal is to show that (44) also holds for $t=k$. To do so, first set $t=k$ in the expression in (43) to obtain $$\displaystyle a_{k}\leq\left(1-\frac{2}{(k+s)^{2/3}}\right)a_{k-1}+\frac{16% \sigma^{2}+3L^{2}D^{2}}{(k+s)^{4/3}}.$$ (46) According to the definition of $Q$, we know that $Q\geq 16\sigma^{2}+3L^{2}D^{2}$. Moreover, based on the induction hypothesis it holds that $a_{k-1}\leq\frac{Q}{(k+s)^{2/3}}$. Using these inequalities and the expression in (46) we can write $$\displaystyle a_{k}\leq\left(1-\frac{2}{(k+s)^{2/3}}\right)\frac{Q}{(k+s)^{2/3% }}+\frac{Q}{(k+s)^{4/3}}.$$ (47) Pulling out $\frac{Q}{(k+s)^{2/3}}$ as a common factor and simplifying and reordering terms it follows that (47) is equivalent to $$\displaystyle a_{k}$$ $$\displaystyle\leq Q\left(\frac{(k+s)^{2/3}-1}{(k+s)^{4/3}}\right).$$ (48) Based on the inequality $$\displaystyle((k+s)^{2/3}-1)((k+s)^{2/3}+1)<(k+s)^{4/3},$$ (49) the result in (48) implies that $$\displaystyle a_{k}\leq\left(\frac{Q}{(k+s)^{2/3}+1}\right).$$ (50) Since $(k+s)^{2/3}+1\geq(k+s+1)^{2/3}$, the result in (50) implies that $$\displaystyle a_{k}\leq\left(\frac{Q}{(k+s+1)^{2/3}}\right),$$ (51) and the induction step is complete. Therefore, the result in (44) holds for all $t=0,\dots,T$. Indeed, by setting $s=8$, the claim in (15) follows. 
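As a numerical sanity check on the induction just completed, one can iterate the recursion (43) with equality and confirm that the bound (44) is never violated; the constants in the usage below are arbitrary illustrative choices, not values taken from the paper.

```python
def check_recursion_bound(a0, sigma2, L2D2, s=8, T=1000):
    """Iterate a_t = (1 - 2/(t+s)^(2/3)) a_{t-1} + (16 sigma^2 + 3 L^2 D^2)/(t+s)^(4/3)
    and verify the induction claim a_t <= Q/(t+s+1)^(2/3) from (44) for all t."""
    c = 16.0 * sigma2 + 3.0 * L2D2
    Q = max(a0 * (s + 1) ** (2.0 / 3.0), c)   # Q as defined right after (44)
    a = a0
    for t in range(1, T + 1):
        a = (1.0 - 2.0 / (t + s) ** (2.0 / 3.0)) * a + c / (t + s) ** (4.0 / 3.0)
        if a > Q / (t + s + 1) ** (2.0 / 3.0):
            return False
    return True
```

Since the recursion with equality dominates any sequence satisfying the inequality (43), the check returning `True` is consistent with the induction argument above.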
9.3 How to Construct an Unbiased Estimator of the Gradient in Multilinear Extensions Recall that $f(S)={\mathbb{E}}_{{\mathbf{z}}\sim P}[\tilde{f}(S,{\mathbf{z}})]$. In terms of the multilinear extensions, we obtain $F({\mathbf{x}})={\mathbb{E}}_{{\mathbf{z}}\sim P}[\tilde{F}({\mathbf{x}},{\mathbf{z}})]$, where $F$ and $\tilde{F}$ denote the multilinear extensions of $f$ and $\tilde{f}$, respectively. So $\nabla\tilde{F}({\mathbf{x}},{\mathbf{z}})$ is an unbiased estimator of $\nabla F({\mathbf{x}})$ when ${\mathbf{z}}\sim P$. Note that $\tilde{F}({\mathbf{x}},{\mathbf{z}})$ is a multilinear extension. It remains to provide an unbiased estimator for the gradient of a multilinear extension. We thus consider an arbitrary submodular set function $g$ with multilinear extension $G$. Our goal is to provide an unbiased estimator for $\nabla G({\mathbf{x}})$. We have $G({\mathbf{x}})=\sum_{S\subseteq V}\prod_{i\in S}x_{i}\prod_{j\not\in S}(1-x_{j})g(S)$. Now, it can easily be shown that $$\frac{\partial G}{\partial x_{i}}=G({\mathbf{x}};x_{i}\leftarrow 1)-G({\mathbf{x}};x_{i}\leftarrow 0),$$ (52) where, for example, by $({\mathbf{x}};x_{i}\leftarrow 1)$ we mean a vector which has value $1$ in its $i$-th coordinate and is equal to ${\mathbf{x}}$ elsewhere. To create an unbiased estimator for $\frac{\partial G}{\partial x_{i}}$ at a point ${\mathbf{x}}$ we can simply sample a set $S$ by including each element $i$ in it independently with probability $x_{i}$ and use $g(S\cup\{i\})-g(S\setminus\{i\})$ as an unbiased estimator for the $i$-th partial derivative. We can sample a single set $S$ and use the above trick for all the coordinates. This involves $n$ function computations for $g$. Given a mini-batch size $B$, we can repeat this procedure $B$ times and then average. 
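A minimal sketch of this estimator, assuming the set function is given as a plain Python callable on sets of indices, might look as follows.

```python
import random

def stochastic_gradient(g, x, B=1, rng=random):
    """Unbiased estimate of the gradient of the multilinear extension of g at x.

    In each of B rounds, sample S by including element i independently with
    probability x[i]; the i-th partial derivative is estimated by
    g(S + {i}) - g(S - {i}).  Costs n evaluations of g per round.
    """
    n = len(x)
    grad = [0.0] * n
    for _ in range(B):
        S = {i for i in range(n) if rng.random() < x[i]}
        for i in range(n):
            grad[i] += (g(S | {i}) - g(S - {i})) / B
    return grad
```

As a quick consistency check, for the modular function $g(S)=|S|$ the multilinear extension is $G({\mathbf{x}})=\sum_i x_i$, so every sampled estimate equals the all-ones gradient regardless of which set $S$ is drawn.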
9.4 Proof of Lemma 4 Based on the mean value theorem, we can write $$\displaystyle\nabla F({\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t})-\nabla F({\mathbf{x}}_{t})=\frac{1}{T}{\mathbf{H}}({\tilde{\mathbf{x}}}_{t}){\mathbf{v}}_{t},$$ (53) where ${\tilde{\mathbf{x}}}_{t}$ is a convex combination of ${\mathbf{x}}_{t}$ and ${\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t}$ and ${\mathbf{H}}({\tilde{\mathbf{x}}}_{t}):=\nabla^{2}F({\tilde{\mathbf{x}}}_{t})$. This expression shows that the difference between the coordinates of the vectors $\nabla F({\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t})$ and $\nabla F({\mathbf{x}}_{t})$ can be written as $$\displaystyle\nabla_{j}F({\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t})-\nabla_{j}F({\mathbf{x}}_{t})=\frac{1}{T}\sum_{i=1}^{n}H_{j,i}({\tilde{\mathbf{x}}}_{t})v_{i,t},$$ (54) where $v_{i,t}$ is the $i$-th element of the vector ${\mathbf{v}}_{t}$ and $H_{j,i}$ denotes the component in the $j$-th row and $i$-th column of the matrix ${\mathbf{H}}$. Hence, the absolute value of the difference $|\nabla_{j}F({\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t})-\nabla_{j}F({\mathbf{x}}_{t})|$ is bounded above by $$\displaystyle|\nabla_{j}F({\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t})-\nabla_{j}F({\mathbf{x}}_{t})|\leq\frac{1}{T}\left|\sum_{i=1}^{n}H_{j,i}({\tilde{\mathbf{x}}}_{t})v_{i,t}\right|.$$ (55) Note here that the elements of the matrix ${\mathbf{H}}({\tilde{\mathbf{x}}}_{t})$ are bounded in absolute value by the maximum marginal value (i.e. $\max_{i,j}|H_{i,j}({\tilde{\mathbf{x}}}_{t})|\leq\max_{i\in\{1,\cdots,n\}}f(i)\triangleq m_{f}$). We thus get $$\displaystyle|\nabla_{j}F({\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t})-\nabla_{j}F({\mathbf{x}}_{t})|\leq\frac{m_{f}}{T}\sum_{i=1}^{n}|v_{i,t}|.$$ (56) Note that at each round $t$ of the algorithm, we have to pick a vector ${\mathbf{v}}_{t}\in\mathcal{C}$ such that the inner product $\langle{\mathbf{v}}_{t},{\mathbf{d}}_{t}\rangle$ is maximized. 
Hence, without loss of generality we can assume that the vector ${\mathbf{v}}_{t}$ is one of the extreme points of $\mathcal{C}$, i.e. it is of the form $1_{I}$ for some $I\in\mathcal{I}$ (note that we can easily force integer vectors). Therefore, by noticing that ${\mathbf{v}}_{t}$ is an integer vector with at most $r$ ones and applying the Cauchy–Schwarz inequality, we have $$\displaystyle|\nabla_{j}F({\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t})-\nabla_{j}F({\mathbf{x}}_{t})|\leq\frac{m_{f}\sqrt{r}}{T}\sqrt{\sum_{i=1}^{n}|v_{i,t}|^{2}},$$ (57) which yields the claim in (28). 9.5 Proof of Theorem 5 By Taylor’s expansion of the function $F$ near the point ${\mathbf{x}}_{t}$ we can write $$\displaystyle F({\mathbf{x}}_{t+1})$$ $$\displaystyle=F({\mathbf{x}}_{t})+\langle\nabla F({\mathbf{x}}_{t}),{\mathbf{x}}_{t+1}-{\mathbf{x}}_{t}\rangle+\frac{1}{2}\langle{\mathbf{x}}_{t+1}-{\mathbf{x}}_{t},{\mathbf{H}}({\tilde{\mathbf{x}}}_{t})({\mathbf{x}}_{t+1}-{\mathbf{x}}_{t})\rangle$$ $$\displaystyle=F({\mathbf{x}}_{t})+\frac{1}{T}\langle\nabla F({\mathbf{x}}_{t}),{\mathbf{v}}_{t}\rangle+\frac{1}{2T^{2}}\langle{\mathbf{v}}_{t},{\mathbf{H}}({\tilde{\mathbf{x}}}_{t}){\mathbf{v}}_{t}\rangle,$$ (58) where ${\tilde{\mathbf{x}}}_{t}$ is a convex combination of ${\mathbf{x}}_{t}$ and ${\mathbf{x}}_{t}+\frac{1}{T}{\mathbf{v}}_{t}$ and ${\mathbf{H}}({\tilde{\mathbf{x}}}_{t}):=\nabla^{2}F({\tilde{\mathbf{x}}}_{t})$. Note that based on the inequality $\max_{i,j}|H_{i,j}({\tilde{\mathbf{x}}}_{t})|\leq\max_{i\in\{1,\cdots,n\}}f(i)\triangleq m_{f}$, we can lower bound $H_{ij}$ by $-m_{f}$. 
Therefore, $$\displaystyle\langle{\mathbf{v}}_{t},{\mathbf{H}}({\tilde{\mathbf{x}}}_{t}){\mathbf{v}}_{t}\rangle=\sum_{j=1}^{n}\sum_{i=1}^{n}v_{i,t}v_{j,t}H_{ij}({\tilde{\mathbf{x}}}_{t})\geq-m_{f}\sum_{j=1}^{n}\sum_{i=1}^{n}v_{i,t}v_{j,t}=-m_{f}\left(\sum_{i=1}^{n}v_{i,t}\right)^{2}=-m_{f}r\|{\mathbf{v}}_{t}\|^{2},$$ (59) where the last equality holds because ${\mathbf{v}}_{t}$ is a vector with $r$ ones and $n-r$ zeros (see the explanation in the proof of Lemma 4). Replace the expression $\langle{\mathbf{v}}_{t},{\mathbf{H}}({\tilde{\mathbf{x}}}_{t}){\mathbf{v}}_{t}\rangle$ in (58) by its lower bound in (59) to obtain $$\displaystyle F({\mathbf{x}}_{t+1})\geq F({\mathbf{x}}_{t})+\frac{1}{T}\langle\nabla F({\mathbf{x}}_{t}),{\mathbf{v}}_{t}\rangle-\frac{m_{f}r}{2T^{2}}\|{\mathbf{v}}_{t}\|^{2}.$$ (60) In the following lemma we derive a variant of the result in Lemma 2 for the multilinear extension setting. Lemma 7 Consider Stochastic Continuous Greedy (SCG) outlined in Algorithm 1, and recall the definitions of the function $F$ in (27), the rank $r$, and $m_{f}\triangleq\max_{i\in\{1,\cdots,n\}}f(i)$. If we set $\rho_{t}=\frac{4}{(t+8)^{2/3}}$, then for $t=0,\dots,T$ it holds that $$\displaystyle{\mathbb{E}}\left[\|\nabla F({\mathbf{x}}_{t})-{\mathbf{d}}_{t}\|^{2}\right]$$ $$\displaystyle\leq\frac{Q}{(t+9)^{2/3}},$$ (61) where $Q:=\max\{9^{2/3}\|\nabla F({\mathbf{x}}_{0})-{\mathbf{d}}_{0}\|^{2},16\sigma^{2}+3m_{f}^{2}rD^{2}\}$. Proof  The proof is similar to the proof of Lemma 1. The main difference is to write the analysis for the $j$-th coordinate and replace $L$ by ${m_{f}\sqrt{r}}$, as shown in Lemma 4. Then using the proof techniques in Lemma 2 the claim in Lemma 7 follows.   
The rest of the proof is identical to the proof of Theorem 3: by following the steps from (5) to (25) and considering the bound in (61), we obtain $${\mathbb{E}}\left[F({\mathbf{x}}_{T})\right]\geq(1-1/e)F({\mathbf{x}}^{*})-\frac{2DQ^{1/2}}{T^{1/3}}-\frac{m_{f}rD^{2}}{2T},$$ (62) where $Q:=\max\{\|\nabla F({\mathbf{x}}_{0})-{\mathbf{d}}_{0}\|^{2}9^{2/3},16\sigma^{2}+3rm_{f}^{2}D^{2}\}$. Therefore, the claim in Theorem 5 follows. References Abbasi and Younis (2007) Ameer Ahmed Abbasi and Mohamed Younis. A survey on clustering algorithms for wireless sensor networks. Computer Communications, 30(14):2826–2841, 2007. Anandkumar and Ge (2016) Animashree Anandkumar and Rong Ge. Efficient approaches for escaping higher order saddle points in non-convex optimization. In Proceedings of the 29th Conference on Learning Theory, COLT 2016, New York, USA, June 23-26, 2016, pages 81–102, 2016. Anandkumar et al. (2014) Animashree Anandkumar, Rong Ge, Daniel Hsu, Sham M Kakade, and Matus Telgarsky. Tensor decompositions for learning latent variable models. Journal of Machine Learning Research, 15:2773–2832, 2014. Arora et al. (2012) Sanjeev Arora, Rong Ge, Ravindran Kannan, and Ankur Moitra. Computing a nonnegative matrix factorization–provably. In Proceedings of the forty-fourth annual ACM symposium on Theory of computing, pages 145–162. ACM, 2012. Bach (2015) F. Bach. Submodular functions: from discrete to continuous domains. arXiv preprint arXiv:1511.00394, 2015. Badanidiyuru and Vondrák (2014) Ashwinkumar Badanidiyuru and Jan Vondrák. Fast algorithms for maximizing submodular functions. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1497–1514, 2014. Bengio et al. (2006) Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. Greedy layer-wise training of deep networks. 
In Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, pages 153–160, 2006. Bian et al. (2017) Andrew An Bian, Baharan Mirzasoleiman, Joachim M. Buhmann, and Andreas Krause. Guaranteed non-convex optimization: Submodular maximization over continuous domains. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics, AISTATS 2017, 20-22 April 2017, Fort Lauderdale, FL, USA, pages 111–120, 2017. Buchbinder et al. (2014) Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. Submodular maximization with cardinality constraints. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2014, Portland, Oregon, USA, January 5-7, 2014, pages 1433–1452, 2014. Buchbinder et al. (2015) Niv Buchbinder, Moran Feldman, Joseph Naor, and Roy Schwartz. A tight linear time (1/2)-approximation for unconstrained submodular maximization. SIAM Journal on Computing, 44(5):1384–1402, 2015. Calinescu et al. (2011) Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM Journal on Computing, 40(6):1740–1766, 2011. Chakrabarty et al. (2017) Deeparnab Chakrabarty, Yin Tat Lee, Aaron Sidford, and Sam Chiu-wai Wong. Subquadratic submodular function minimization. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2017, Montreal, QC, Canada, June 19-23, 2017, pages 1220–1231, 2017. Chekuri et al. (2014) Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Submodular function maximization via the multilinear relaxation and contention resolution schemes. SIAM Journal on Computing, 43(6):1831–1879, 2014. Chekuri et al. (2015) Chandra Chekuri, TS Jayram, and Jan Vondrák. On multiplicative weight updates for concave and submodular function maximization. 
A New Corpus for Low-Resourced Sindhi Language with Word Embeddings Wazir Ali, Jay Kumar, Junyu Lu, Zenglin Xu School of Computer Science and Engineering University of Electronic Science and Technology of China Abstract Representing words and phrases as dense vectors of real numbers that encode semantic and syntactic properties is a vital component of natural language processing (NLP). The success of neural network (NN) models in NLP largely relies on such dense word representations learned on large unlabeled corpora. Sindhi is a morphologically rich language, spoken by a large population in Pakistan and India, that lacks corpora to serve as a test-bed for generating word embeddings and developing language-independent NLP systems. In this paper, a large corpus of more than 61 million words is developed for the low-resourced Sindhi language for training neural word embeddings. The corpus is acquired from multiple web resources using the Scrapy framework. Due to the unavailability of open-source preprocessing tools for Sindhi, the preprocessing of such a large corpus becomes a challenging problem, especially the cleaning of noisy data extracted from web resources. Therefore, a preprocessing pipeline is employed for the filtration of noisy text. Afterwards, the cleaned vocabulary is utilized for training Sindhi word embeddings with the state-of-the-art GloVe, Skip-Gram (SG), and Continuous Bag of Words (CBoW) word2vec algorithms. The intrinsic evaluation approaches of the cosine similarity matrix and WordSim-353 are employed for the evaluation of the generated Sindhi word embeddings. Moreover, we compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText) word representations. Our intrinsic evaluation results demonstrate the high quality of our Sindhi word embeddings generated using SG, CBoW, and GloVe compared to the SdfastText word representations.
A Preprint, December 1, 2020

1 Introduction Sindhi is a morphologically rich, multiscript, and multidialectal language. It belongs to the Indo-Aryan language family [1] and has a significant cultural and historical background. Presently, it is recognized as an official language [2] in the Sindh province of Pakistan and is taught as a compulsory subject in schools and colleges. Sindhi is also recognized as one of the national languages in India. Ulhasnagar, Rajasthan, Gujarat, and Maharashtra are the Indian regions with the largest numbers of native Sindhi speakers. It is also spoken in countries other than Pakistan and India to which native Sindhi speakers have migrated, such as America, Canada, Hong Kong, Britain, Singapore, Tanzania, the Philippines, Kenya, Uganda, and South and East Africa. Sindhi has a rich morphological structure [3] due to a large number of homogeneous words. Historically, it was written in multiple writing systems, which differ from each other in terms of orthography and morphology. Persian-Arabic is the standard script of Sindhi, officially accepted in 1852 by the British government111https://www.britannica.com/topic/Sindhi-language. However, Sindhi-Devanagari is also a popular writing system in India, written in the left-to-right direction like the Hindi language. Formerly, Khudabadi, Gujrati, Landa, Khojki, and Gurumukhi were also adopted as its writing systems. Despite its great historical and literary background, Sindhi is presently spoken by nearly 75 million people [2]. Research on SNLP began in 2002222”Sindhia lai Kampyutar jo Istemalu” (Use of computer for Sindhi), an article published in Sindhu yearly, Ulhasnagar. 2002; however, it grabbed research attention after the development of its Unicode system [4].
But Sindhi still stands among the low-resourced languages due to the scarcity of core language processing resources, namely raw and annotated corpora, which can be utilized for training robust word embeddings or applying machine learning algorithms, since the development of annotated datasets requires time and human resources. Language Resources (LRs) are fundamental elements for the development of high-quality NLP systems based on automatic or NN-based approaches. LRs include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. The development of such resources has received great research interest for the digitization of human languages [5]. Many world languages are rich in such language processing resources, integrated in their software tools, including English [6] [7], Chinese [8], and other languages [9] [10]. The Sindhi language lacks the basic computational resource [11] of a large text corpus, which can be utilized for training robust word embeddings and developing language-independent NLP applications, including semantic analysis, sentiment analysis, parts-of-speech tagging, named entity recognition, machine translation [12], and multitasking [13], [14]. Presently, Sindhi Persian-Arabic is frequently used for online communication, newspapers, and public institutions in Pakistan and India [2]. But little work has been carried out on the development of LRs such as raw corpora [15], [16] and annotated corpora [17], [18], [2], [19]. To the best of our knowledge, Sindhi lacks a large unlabelled corpus which can be utilized for generating and evaluating word embeddings for Statistical Sindhi Language Processing (SSLP). One way to break out of this loop is to learn word embeddings from unlabelled corpora, which can then be utilized to bootstrap other downstream NLP tasks. Word embedding is a newer term for the semantic vector space [20], distributed representations [21], and distributed semantic models.
It is a language modeling approach [22] used for mapping words and phrases into $n$-dimensional dense vectors of real numbers that effectively capture the semantic and syntactic relationships with neighboring words in a geometric way [23] [24]. For example, “Einstein” and “scientist” would have greater similarity than “Einstein” and “doctor.” In this way, word embeddings realize the important linguistic concept that “a word is characterized by the company it keeps”. More recently, NN-based models have yielded state-of-the-art performance in multiple NLP tasks [25] [26] with word embeddings. One of the advantages of such techniques is that they use unsupervised approaches for learning representations and do not require an annotated corpus, which is rare for the low-resourced Sindhi language. Such representations can be trained on large unannotated corpora, and the generated representations can then be used in NLP tasks that have only a small amount of labelled data. In this paper, we address the problem of corpus construction by collecting a large corpus of more than 61 million words from multiple web resources using the Scrapy framework. After the collection of the corpus, we carefully preprocess it for the filtration of noisy text, e.g., HTML tags and vocabulary of the English language. A statistical analysis is also presented for the letter and word frequencies and the identification of stop words. Finally, the corpus is utilized to generate Sindhi word embeddings using the state-of-the-art GloVe [27], SG, and CBoW [28] [21] [25] algorithms. The popular intrinsic evaluation methods [21] [29] [30] of calculating the cosine similarity between word vectors and WordSim353 [31] are employed to measure the performance of the learned Sindhi word embeddings. We translated the English WordSim353333Available online at https://rdrr.io/cran/wordspace/man/WordSim353.html word pairs into Sindhi using a bilingual English-to-Sindhi dictionary.
The intrinsic approach typically involves a pre-selected set of query terms [24] and semantically related target words, which we refer to as query words. Furthermore, we also compare the proposed word embeddings with the recently released Sindhi fastText (SdfastText)444We denote the Sindhi word representations recently released by fastText as (SdfastText), available at (https://fasttext.cc/docs/en/crawl-vectors.html), trained on the Common Crawl and Wikipedia corpus of Sindhi Persian-Arabic. [26] word representations. To the best of our knowledge, this is the first comprehensive work on the development of a large corpus and the generation of word embeddings, along with a systematic evaluation, for low-resourced Sindhi Persian-Arabic. The synopsis of our novel contributions is listed as follows: • We present a large corpus of more than 61 million words obtained from multiple web resources and reveal a list of Sindhi stop words. • We develop a text cleaning pipeline for the preprocessing of the raw corpus. • We generate word embeddings using the GloVe, CBoW, and SG word2vec algorithms, and evaluate and compare them using the intrinsic evaluation approaches of the cosine similarity matrix and WordSim353. • We are the first to evaluate SdfastText word representations and compare them with our proposed Sindhi word embeddings. The remaining sections of the paper are organized as follows: Section 2 presents the literature survey regarding computational resources, Sindhi corpus construction, and word embedding models. Afterwards, Section 3 presents the employed methodology, Section 4 consists of the statistical analysis of the developed corpus, and Section 5 presents the experimental setup. The intrinsic evaluation results, along with the comparison, are given in Section 6. The discussion and future work are given in Section 7, and lastly, Section 8 presents the conclusion.
2 Related work Natural language resources refer to a set of language data and descriptions [32] in machine-readable form, used for building, improving, and evaluating NLP algorithms or software. Such resources include written or spoken corpora, lexicons, and annotated corpora for specific computational purposes. Many world languages are rich in such language processing resources, integrated in software tools including NLTK for English [6], Stanford CoreNLP [7], LTP for Chinese [8], TectoMT for German, Russian, Arabic [9], and a multilingual toolkit [10]. But the Sindhi language is at an early stage of the development of such resources and software tools. Corpus construction for NLP mainly involves the important steps of acquisition, preprocessing, and tokenization. Initially, [15] discussed the morphological structure and challenges concerned with corpus development, along with the orthographical and morphological features of the Persian-Arabic script. The raw and annotated corpus [2] for Sindhi Persian-Arabic is a good supplement towards the development of resources, including raw and annotated datasets for parts-of-speech tagging, morphological analysis, transliteration between Sindhi Persian-Arabic and Sindhi-Devanagari, and a machine translation system. But that corpus is acquired only from Wikipedia dumps. A survey-based study [5] reports all the progress made in Sindhi Natural Language Processing (SNLP), with a complete gist of adopted techniques, developed tools, and available resources, showing that work on resource development for Sindhi needs more sophisticated efforts. A raw corpus was utilized for word segmentation [33] of Sindhi Persian-Arabic. More recently, an initiative towards the development of resources was taken [17] by open-sourcing an annotated dataset of Sindhi Persian-Arabic obtained from news and social blogs.
The existing and proposed work on corpus development, word segmentation, and word embeddings, respectively, is presented in Table 1. The power of word embeddings in NLP was empirically demonstrated by the proposal of a neural language model [22] and multitask learning [13], but recently the usage of word embeddings in deep neural algorithms has become an integral element [34] for performance acceleration in deep NLP applications. The popular CBoW and SG [28] [21] word2vec neural architectures yielded high-quality vector representations at lower computational cost in terms of semantic and syntactic word similarity, and were later extended [34] [25] with the integration of character-level learning on large corpora. Both approaches produce state-of-the-art accuracy with fast training performance, better representations of less frequent words, and efficient representation of phrases as well. [35] proposed an NN-based approach for generating morphemic-level word embeddings, which surpassed all the existing embedding models in intrinsic evaluation. The count-based GloVe model [27] also yielded state-of-the-art results in intrinsic evaluation and downstream NLP tasks. The performance of word embeddings is measured with intrinsic [24] [30] and extrinsic [29] evaluation approaches. The intrinsic approach is used to measure the internal quality of word embeddings, such as querying the nearest neighboring words and calculating the semantic or syntactic similarity between similar word pairs. A method of direct comparison for intrinsic evaluation of word embeddings measures the neighborhood of a query word in vector space. The key advantage of that method is to reduce bias and create insight for data-driven relevance judgment.
An extrinsic evaluation approach is used to evaluate performance on downstream NLP tasks, such as parts-of-speech tagging or named-entity recognition [24], but the Sindhi language lacks an annotated corpus for such evaluation. Moreover, extrinsic evaluation is time-consuming and difficult to interpret. Therefore, we opt for the intrinsic evaluation method [29] to get a quick insight into the quality of the proposed Sindhi word embeddings by measuring the cosine distance between similar words and using the WordSim353 dataset. A study reveals that the choice of optimized hyperparameters [36] has a greater impact on the quality of pretrained word embeddings than the design of a novel algorithm. Therefore, we optimized the hyperparameters for generating robust Sindhi word embeddings using the CBoW, SG, and GloVe models. Embedding visualization is also useful for inspecting the similarity of word clusters. Therefore, we use the t-SNE [37] dimensionality reduction algorithm, with PCA [38], for compressing high-dimensional embeddings into 2-dimensional $x$,$y$ coordinate pairs. PCA is useful for combining input features by dropping the least important features while retaining the most valuable ones. 3 Methodology This section presents the employed methodology in detail for corpus acquisition, preprocessing, statistical analysis, and generating Sindhi word embeddings. 3.1 Task description We initiated this work from scratch by collecting a large corpus from multiple web resources. After preprocessing and statistical analysis of the corpus, we generate Sindhi word embeddings with the state-of-the-art CBoW, SG, and GloVe algorithms. The generated word embeddings are evaluated using the intrinsic evaluation approaches of cosine similarity between nearest neighbors, word pairs, and WordSim-353 for distributional semantic similarity. Moreover, we use t-SNE with PCA for the comparison of the distance between similar words via visualization.
3.2 Corpus acquisition A corpus is a collection of human language text [32] built with a specific purpose. The statistical analysis of a corpus provides quantitative, reusable data and an opportunity to examine intuitions and ideas about language. Therefore, a corpus has great importance for the study of written language. Realizing the necessity of a large text corpus for Sindhi, we started this research by collecting a raw corpus from multiple web resources using the Scrapy framework555https://github.com/scrapy/scrapy for the extraction of news columns of the daily Kawish666http://kawish.asia/Articles1/index.htm and Awami Awaz777http://www.awamiawaz.com/articles/294/ Sindhi newspapers; Wikipedia dumps888https://dumps.wikimedia.org/sdwiki/20180620/; short stories and sports news from the Wichaar999http://wichaar.com/news/134/, accessed in Dec-2018 social blog; news from the Focus Wordpress blog101010https://thefocus.wordpress.com/ accessed in Dec-2018; historical writings, novels, stories, and books from the Sindh Salamat111111http://sindhsalamat.com/, accessed in Jan-2019 literary website; novels, history, and religious books from the Sindhi Adabi Board121212http://www.sindhiadabiboard.org/catalogue/History/Main_History.HTML; and tweets regarding news and sports collected from Twitter131313https://twitter.com/dailysindhtimes. 3.3 Preprocessing The preprocessing of a text corpus obtained from multiple web resources is a challenging task, and it becomes especially complicated when working on a low-resourced language like Sindhi due to the lack of open-source preprocessing tools such as NLTK [6] for English. Therefore, we design a preprocessing pipeline, depicted in Figure 1, for the filtration of unwanted data and vocabulary of other languages such as English, to prepare the input for the word embeddings. The involved preprocessing steps are described in detail below Figure 1.
Moreover, we reveal the list of Sindhi stop words [39], whose compilation is labor-intensive and requires human judgment as well. Hence, the most frequent and least important words are classified as stop words with the help of a Sindhi linguistic expert. A partial list of Sindhi stop words is given in Table 4. We use the Python programming language for designing the preprocessing pipeline, using regex and string functions. • Input: The collected text documents were concatenated to form the input in UTF-8 format. • Replacement of symbols: The punctuation marks of full stop, hyphen, apostrophe, comma, quotation, and exclamation were replaced with white space for authentic tokenization, because without replacing these symbols with white space the words were found joined with their next or previous corresponding words. • Filtration of noisy data: The text acquired from web resources contains a huge amount of noisy data. Therefore, we filtered out unimportant data such as the rest of the punctuation marks, special characters, HTML tags, all types of numeric entities, emails, and web addresses. • Normalization: In this step, we tokenize the corpus, then normalize it to lower-case for the filtration of multiple white spaces, English vocabulary, and duplicate words. The stop words were filtered out only when preparing the input for GloVe; the sub-sampling approach in CBoW and SG can discard the most frequent or stop words automatically. 3.4 Word embedding models The NN-based approaches have produced state-of-the-art performance in NLP with the usage of robust word embeddings generated from large unlabelled corpora. Therefore, word embeddings have become the main component for setting up new benchmarks in NLP using deep learning approaches. Most recently, the use cases of word embeddings are not only limited to boosting statistical NLP applications but can also be used to develop language resources, such as the automatic construction of WordNet [40] using an unsupervised approach.
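The replacement, filtration, and normalization steps above can be sketched as a minimal Python function (an illustrative simplification; the regular expressions and the stop-word handling are assumptions for illustration, not the exact pipeline used for the corpus):

```python
import re

def clean_text(text):
    """Simplified cleaning step mirroring the described pipeline: strip HTML
    tags, drop emails and web addresses, replace punctuation and special
    characters with white space, and remove numerals and English vocabulary."""
    text = re.sub(r"<[^>]+>", " ", text)                         # HTML tags
    text = re.sub(r"\S+@\S+|https?://\S+|www\.\S+", " ", text)   # emails, URLs
    text = re.sub(r"[^\w\s]|_", " ", text)                       # punctuation, special chars
    text = re.sub(r"[0-9A-Za-z]+", " ", text)                    # numerals, English words
    return re.sub(r"\s+", " ", text).strip()                     # collapse white space

def tokenize(text, stop_words=()):
    """White-space tokenization with optional stop-word filtering."""
    return [t for t in text.split() if t not in stop_words]
```

Note that the order matters: web addresses must be removed before punctuation, otherwise `www.x.com` would be split into ordinary tokens.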
A word embedding can be precisely defined as the encoding of a vocabulary $V$ into an $N$-dimensional embedding space, mapping each word $w$ in $V$ to a vector $\overrightarrow{w}$. Word embedding models can be broadly categorized into predictive and count-based methods, being generated by employing co-occurrence statistics, NN algorithms, and probabilistic models. The GloVe [27] algorithm treats each word as a single entity in the corpus and generates a vector for each word. However, CBoW and SG [28] [21], later extended [34] [25] and well known as word2vec, rely on a simple two-layered NN architecture which uses a linear activation function in the hidden layer and softmax in the output layer. The extended word2vec model [25] treats each word as a bag of character n-grams. 3.5 GloVe GloVe is a log-bilinear regression model [27] which combines the two methods of local context windows and global matrix factorization for training word embeddings of a given vocabulary in an unsupervised way. It weights the contexts using a harmonic function; for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The GloVe implementation represents each word $\boldsymbol{w}\in\boldsymbol{V}_{\boldsymbol{w}}$ and context $\boldsymbol{c}\in\boldsymbol{V}_{c}$ by $D$-dimensional vectors $\overrightarrow{w}$ and $\overrightarrow{c}$ in the following way, $$M^{\log(\#(\boldsymbol{w},c))}\approx\boldsymbol{W}\cdot\boldsymbol{C}^{T}+\boldsymbol{b}^{\overrightarrow{w}}+\boldsymbol{b}^{\overrightarrow{c}}$$ (1) where $M^{\log(\#(\boldsymbol{w},c))}$ is the matrix of log co-occurrence counts, $\boldsymbol{b}^{\overrightarrow{w}}$ is a row vector of length $\left|V_{w}\right|$, and $\boldsymbol{b}^{\overrightarrow{c}}$ is a column vector of length $\left|V_{c}\right|$. 3.6 Continuous bag-of-words The standard CBoW model is the inverse of the SG model [28]: it predicts the target word from its context. The length of the input in the CBoW model depends on the setting of the context window size, which determines the distance to the left and right of the target word.
Hence the context is a window that contains neighboring words: given a sequence of words $w=\left\{w_{1},w_{2},\dots,w_{T}\right\}$ of length $T$, the objective of CBoW is to maximize the log-probability of each word given its neighboring words, $$\sum_{t=1}^{T}\log p\left(w_{t}|c_{t}\right)$$ (2) where $c_{t}$ is the context of the $t^{\text{th}}$ word, for example the window $w_{t-c},\ldots,w_{t-1},w_{t+1},\ldots,w_{t+c}$ of size $2c$. 3.7 Skip gram The SG model predicts the surrounding words given an input word [21], with the training objective of learning word embeddings that efficiently predict the neighboring words. The goal of skip-gram is to maximize the average log-probability over the words $w=\left\{w_{1},w_{2},\dots,w_{T}\right\}$ of the entire training corpus, $$\mathrm{J}(\theta)=\frac{1}{\mathrm{T}}\sum_{t=1}^{T}\sum_{-c\leq j\leq c,\,j\neq 0}\log p\left(w_{t+j}|w_{t}\right)$$ (3) where $c$ denotes the size of the context window of nearby words around $w_{t}$ in the training corpus. 3.8 Hyperparameters 3.8.1 Sub-sampling The sub-sampling [21] approach is useful to dilute the most frequent or stop words; it also accelerates the learning rate and increases accuracy for learning rare word vectors. Numerous words in English, e.g., ‘the’, ‘you’, ‘that’, do not carry much meaning but appear very frequently in text. However, considering all words equally would lead to over-fitting of the model parameters [25] on the frequent word embeddings and under-fitting on the rest. Therefore, it is useful to counter the imbalance between rare and repeated words. The sub-sampling technique randomly removes the most frequent words given a threshold $t$, a discard probability $p$, and the frequency $f$ of a word in the corpus, $$P\left(w_{i}\right)=1-\sqrt{\frac{t}{f\left(w_{i}\right)}}$$ (4) where each word $w_{i}$ is discarded with the computed probability during training, $f(w_{i})$ is the frequency of word $w_{i}$, and $t>0$ is a parameter.
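The discard probability of Eq. (4) can be sketched in Python (a minimal illustration; the default threshold `t=1e-5` is the commonly used word2vec value, an assumption rather than a setting stated here):

```python
import math

def discard_probability(word_freq, t=1e-5):
    """Probability of dropping a word with relative frequency `word_freq`,
    following P(w) = 1 - sqrt(t / f(w)); frequent words are dropped often,
    while words with f(w) <= t are never dropped (probability clamped at 0)."""
    return max(0.0, 1.0 - math.sqrt(t / word_freq))
```

For example, a very frequent word with relative frequency 0.05 is discarded with probability about 0.986, while a rare word with frequency at or below the threshold is always kept.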
3.8.2 Dynamic context window Traditional word embedding models usually use a fixed size of context window. For instance, if the window size ws=6, then a target word six tokens away would be treated the same as the adjacent word. A weighting scheme is therefore used to assign more weight to closer words, as closer words are generally considered to be more important to the meaning of the target word. The CBoW, SG, and GloVe models all employ such a weighting scheme. The GloVe model weights the contexts using a harmonic function; for example, a context word four tokens away from an occurrence will be counted as $\frac{1}{4}$. The CBoW and SG implementations instead weigh the contexts by their distance from the target word relative to ws; e.g., ws=6 will weigh its context words by $\frac{6}{6},\frac{5}{6},\frac{4}{6},\frac{3}{6},\frac{2}{6},\frac{1}{6}$. 3.8.3 Sub-word model The sub-word model [25] can learn the internal structure of words by sharing character representations across words. In that way, the vector for each word is the sum of its character $n$-gram vectors. For example, the vector of the word “table” is the sum of the $n$-gram vectors obtained by setting the letter $n$-gram size from $minn=3$ to $maxn=6$ over the padded word $<table>$: $<ta$, $tab$, $abl$, $ble$, $le>$, $<tab$, $tabl$, $able$, $ble>$, $<tabl$, $table$, $able>$, $<table$, $table>$. The $<$ and $>$ symbols mark the word boundaries, separating prefixes and suffixes from other character sequences. In this way, the sub-word model utilizes the principles of morphology, which improves the quality of infrequent word representations. In addition to the character $n$-grams, the input word $w$ itself is also included in the set, to learn a representation of each word. The scoring function uses an input dictionary of $n$-grams of size $K$, where for a word $w$ the set of its $n$-grams is $K_{w}\subset\{1,\ldots,K\}$, and a vector representation $z_{k}$ is associated with each $n$-gram $k$.
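The boundary-padded character n-gram extraction described above can be sketched as follows (a minimal illustration of the idea, not the fastText implementation):

```python
def char_ngrams(word, minn=3, maxn=6):
    """Return the character n-grams of `word`, padded with the boundary
    symbols < and >, for all n-gram lengths from minn to maxn inclusive
    (the sub-word model sums the vectors of these n-grams)."""
    padded = "<" + word + ">"
    grams = []
    for n in range(minn, maxn + 1):
        for i in range(len(padded) - n + 1):
            grams.append(padded[i:i + n])
    return grams
```

For "table" this yields the fourteen n-grams listed above, starting with `<ta`, `tab`, `abl`, `ble`, `le>` for length 3.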
Hence, each word is represented by the sum of its character $n$-gram representations, where $s$ is the scoring function in the following equation, $$s(w,c)=\sum_{k\in K_{w}}z_{k}^{T}\vec{v}_{c}$$ (5) 3.8.4 Position-dependent weights The position-dependent weighting approach [41] is used to avoid directly encoding representations for words and their positions, which can lead to an over-fitting problem. The approach learns positional representations in contextual word representations and uses them to reweight the word embeddings. Thus, it captures good contextual representations at lower computational cost, $$\vec{v}_{C}=\sum_{p\in P}\boldsymbol{d}_{p}\odot\boldsymbol{u}_{t+p}$$ (6) where $p$ is an individual position in the context window associated with the vector $d_{p}$. The context vector is then the average of the context word vectors reweighted by their positional vectors. Here $P$ is the set of relative positions in the context window and $v_{C}$ is the context vector of $w_{t}$. 3.8.5 Shifted point-wise mutual information The use of a sparse Shifted Positive Point-wise Mutual Information (SPPMI) [42] word-context matrix in learning word representations improves results on two word similarity tasks. The CBoW and SG models have the hyperparameter $k$ (the number of negatives) [28] [21], which affects the value that both models try to optimize for each pair $(w,c)$: $PMI(w,c)-\log k$. The parameter $k$ has two functions: a better estimation of negative examples, and acting as a prior on the probability of observing a positive example (an actual occurrence of $(w,c)$). 3.8.6 Deleting rare words The automatic deletion of rare words before creating the context windows also leads to performance gains in the CBoW, SG, and GloVe models, as it further increases the effective size of the context windows. 3.9 Evaluation methods The intrinsic evaluation is based on the semantic similarity [24] of word embeddings. The word similarity measure approach states [36] that words are similar if they appear in similar contexts.
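The position-dependent reweighting of Eq. (6) can be sketched in plain Python (a minimal illustration; the averaging over window positions follows the textual description above and is an assumption about the normalization):

```python
def positional_context_vector(context_vectors, position_vectors):
    """Context vector v_C: element-wise (Hadamard) product of each context
    word vector u_{t+p} with its positional weight vector d_p, summed over
    the window positions P and averaged over the context words."""
    dim = len(position_vectors[0])
    total = [0.0] * dim
    for u, d in zip(context_vectors, position_vectors):
        for i in range(dim):
            total[i] += d[i] * u[i]
    return [x / len(context_vectors) for x in total]
```

In practice both the word vectors and the positional vectors are learned jointly; here they are simply given as inputs.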
We measure the similarity of words in the proposed Sindhi word embeddings using the cosine-similarity (dot product) method and WordSim353. 3.9.1 Cosine similarity The cosine similarity between two non-zero vectors is a popular measure that calculates the cosine of the angle between them, which can be derived using the Euclidean dot product. The dot product multiplies the corresponding components of both vectors and adds them together; its result is not another vector but a single scalar value. For two vectors $\overrightarrow{a}=\left(a_{1},a_{2},a_{3},\dots,a_{n}\right)$ and $\overrightarrow{b}=\left({b}_{1},{b}_{2},{b}_{3},\ldots,{b}_{n}\right)$, where $a_{i}$ and $b_{i}$ are the vector components and $n$ is the dimension of the vectors, the dot product is defined as $$\overrightarrow{{a}}\cdot\overrightarrow{b}=\sum_{i=1}^{n}a_{i}{b}_{i}=a_{1}{b}_{1}+{a}_{2}{b}_{2}+\ldots+a_{n}{b}_{n}$$ (7) The cosine of the angle between two non-zero vectors then follows from the Euclidean dot product formula, $$\overrightarrow{a}\cdot\overrightarrow{b}=\|\overrightarrow{a}\|\|\overrightarrow{b}\|\cos({\theta})$$ (8) Given two vectors of attributes $a$ and $b$, the cosine similarity $\cos({\theta})$ is expressed using the dot product and the magnitudes as $$\text{similarity}=\cos(\theta)=\frac{\vec{a}\cdot\vec{b}}{\|\vec{a}\|\|\vec{b}\|}=\frac{\sum_{i=1}^{n}a_{i}b_{i}}{\sqrt{\sum_{i=1}^{n}a_{i}^{2}}\sqrt{\sum_{i=1}^{n}b_{i}^{2}}}$$ (9) where $a_{i}$ and $b_{i}$ are the components of vectors $\overrightarrow{a}$ and $\overrightarrow{b}$, respectively. 3.9.2 WordSim353 WordSim353 [43] is popular for the evaluation of lexical similarity and relatedness. Similarity scores were assigned by 13 to 16 human subjects with semantic relations [31] for 353 English noun pairs.
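The dot product and cosine similarity of Eqs. (7)–(9) can be computed directly; a small pure-Python sketch (the function names are ours):

```python
import math

def dot(a, b):
    """Dot product of two equal-length vectors (Eq. 7)."""
    return sum(x * y for x, y in zip(a, b))

def cosine_similarity(a, b):
    """Cosine of the angle between two non-zero vectors (Eq. 9)."""
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))
```

Identical directions give a similarity of 1, orthogonal vectors give 0, and opposite directions give -1.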
Due to the lack of annotated datasets in the Sindhi language, we translated WordSim353 using an English-to-Sindhi bilingual dictionary (http://dic.sindhila.edu.pk/index.php?txtsrch=) for the evaluation of our proposed Sindhi word embeddings and SdfastText. We use the Spearman correlation coefficient for the semantic and syntactic similarity comparison; it measures the strength of linear or nonlinear relationships when there are no repeated data values. A perfect Spearman correlation of $+1$ or $-1$ between two sets of data (word pairs) arises when the observations are monotonically increasing or decreasing functions of each other, $$r_{s}=1-\frac{6\sum_{i}^{n}d_{i}^{2}}{n\left(n^{2}-1\right)}$$ (10) where $r_{s}$ is the rank correlation coefficient, $n$ denotes the number of observations, and $d_{i}$ is the rank difference of the $i^{th}$ observation. 4 Statistical analysis of corpus The large corpus acquired from multiple resources is rich in vocabulary. We present the complete statistics of the collected corpus (see Table 2): the number of sentences, words and unique tokens. 4.1 Letter occurrences The frequency of letter occurrences in human language is not arbitrary but follows specific rules, which allows us to describe some linguistic regularities. Zipf’s law [44] states that if letter or word frequencies are ranked in descending order, then $$F_{r}=\frac{a}{r^{b}}$$ (11) where $F_{r}$ is the frequency of the letter of rank $r$, and $a$ and $b$ are parameters of the input text. The comparative frequency of a letter in the corpus is its total number of occurrences divided by the total number of letters in the corpus. The letter frequencies in our developed corpus are depicted in Figure 2; the corpus contains 187,620,276 characters in total.
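Equation (10) assumes distinct ranks (no tied values); under that assumption, a direct implementation of the Spearman coefficient used for the WordSim353 comparison is:

```python
def spearman_rho(x, y):
    """Spearman rank correlation coefficient (Eq. 10), assuming no ties."""
    def ranks(values):
        # Rank 1 for the smallest value, rank n for the largest.
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

Monotonically increasing paired observations give $r_{s}=+1$ and monotonically decreasing ones give $r_{s}=-1$, as stated above.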
The Sindhi Persian-Arabic alphabet consists of 52 letters, but 59 letters are detected in the vocabulary; the additional seven letters are modified uni-grams and standalone honorific symbols. 4.2 Letter n-grams frequency We denote a combination of letter occurrences in a word as an n-gram, where each letter is a gram. The letter n-gram frequencies are carefully analyzed in order to find the distribution of word lengths, which is essential for developing NLP systems, including the learning of word embeddings, e.g. choosing the minimum and maximum sub-word lengths for character-level representation learning [25]. We calculate the letter n-grams of words along with their percentages in the developed corpus (see Table 3). The bi-gram words are the most frequent, mostly consisting of stop words; 4-gram words have the second-highest frequency. 4.3 Word frequencies The word frequency count is an observation of word occurrences in the text. Commonly used words have higher frequency, such as the word “the” in English, while rarely used words have lower frequency. Such frequencies can be calculated at the character or word level. We calculate word frequencies by counting the occurrences of a word $w$ in the corpus $c$, $$\text{freq}(w)=\sum_{\mathrm{k}=1}^{\mathrm{K}}\mathbb{1}\left[w_{k}=w\right]$$ (12) where the frequency of $w$ is the number of positions $k$ among the $K$ tokens $w_{k}$ of $c$ at which $w$ occurs. 4.4 Stop words The most frequent and least important words in NLP are often classified as stop words. The removal of such words can boost the performance of NLP models [39], for example in sentiment analysis and text classification, but the construction of such a word list is time consuming and requires human decisions. Firstly, we determined Sindhi stop word candidates by counting their term frequencies using Eq. 12, and secondly, we analysed their grammatical status with the help of a Sindhi linguistic expert, because not all frequent words are stop words (see Figure 3).
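The frequency count of Eq. (12) and the first (automatic) step of stop-word candidate selection can be sketched as follows; the function names and the frequency threshold are illustrative assumptions, and, as described above, the final list still requires expert judgment:

```python
from collections import Counter

def word_frequencies(tokens):
    """Count the occurrences of every word in a tokenized corpus (Eq. 12)."""
    return Counter(tokens)

def stop_word_candidates(freqs, threshold):
    """High-frequency words proposed for expert review (not a final list)."""
    return [w for w, f in freqs.most_common() if f >= threshold]
```

The candidates are returned in descending frequency order; an expert then removes frequent words that are not actually stop words.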
After determining the importance of such words with the help of human judgment, we placed them in the list of stop words. The total number of detected stop words in our developed corpus is 340. A partial list of the most frequent Sindhi stop words is given in Table 4 along with their frequencies. The filtration of stop words is an essential preprocessing step for learning GloVe [27] word embeddings; therefore, we filtered out stop words when preparing the input for the GloVe model. The sub-sampling approach [34] [25], however, is used to discard such most frequent words in the CBoW and SG models. 5 Experiments and results Hyperparameter optimization is as important as designing a novel algorithm [24]. We carefully optimized the dictionary-based and algorithm-based parameters of the CBoW, SG and GloVe algorithms. Hence, we conducted a large number of training and evaluation experiments until the most suitable hyperparameters, depicted in Table 5 and discussed in Section 5.1, were found. The choice of optimized hyperparameters is based on the cosine similarity scores in retrieving nearest neighboring words, the semantic and syntactic similarity between word pairs on WordSim353, and the visualization of the distances between the twenty nearest neighbours using t-SNE. All experiments were conducted on a GTX 1080-TITAN GPU. 5.1 Hyperparameter optimization The state-of-the-art SG, CBoW [28] [34] [21] [25] and GloVe [27] word embedding algorithms are evaluated by parameter tuning for the development of Sindhi word embeddings. These parameters can be categorized as dictionary-based and algorithm-based, respectively. The integration of character n-grams in learning word representations is an ideal method, especially for morphologically rich languages, because it can compose representations for rare and misspelled words; Sindhi is such a morphologically rich language.
Therefore, more robust embeddings can be trained with hyperparameter optimization of the SG, CBoW and GloVe algorithms. We tuned and evaluated the hyperparameters of the three algorithms individually, as follows: • Number of epochs: Generally, more epochs on the corpus produce better results but require longer training time. We evaluated $10$, $20$, $30$ and $40$ epochs for each word embedding model, and $40$ epochs consistently produced good results. • Learning rate (lr): We tried lr values of $0.05$, $0.1$, and $0.25$; the optimal lr $(0.25)$ gives the best results for training all the embedding models. • Dimensions ($D$): We evaluated and compared the quality of $100-D$, $200-D$, and $300-D$ embeddings using WordSim353 with different $ws$, and the optimal $300-D$ embeddings are evaluated with the cosine similarity matrix for querying nearest neighboring words and calculating the similarity between word pairs. The embedding dimension has little effect on the quality of the intrinsic evaluation process; however, it might have more impact on the accuracy of certain downstream NLP applications, and lower-dimensional embeddings are faster to train and evaluate. • Character n-grams: The selection of the minimum (minn) and maximum (maxn) character $n$-gram lengths is an important parameter for learning character-level representations of words in the CBoW and SG models. Therefore, n-grams of length $3-9$ were tested to analyse the impact on the accuracy of the embeddings. We set the character n-gram lengths to $minn=2$ and $maxn=7$, taking into account the word frequencies depicted in Table 3. • Window size (ws): A large ws means considering more context words, while a small ws limits the number of context words. Varying the size of the dynamic context window, we tried ws values of 3, 5 and 7; the optimal ws=7 yields consistently better performance.
• Negative sampling (NS): More negative examples yield better results but require longer training time. We tried 10, 20, and 30 negative examples for CBoW and SG; 20 negative examples yield significantly better performance for CBoW and SG at a reasonable average training time. • Minimum word count (minw): We evaluated minimum word counts from 1 to 8 and observed that ignoring more words shrinks the input vocabulary at a large scale, while considering rare words enlarges it. Ignoring words with a frequency of less than 4 in CBoW, SG, and GloVe consistently yields better results, with a vocabulary of 200,000 words. • Loss function (ls): We use hierarchical softmax (hs) for CBoW, negative sampling (ns) for SG, and the default loss function for GloVe [27]. • The recommended verbosity level, number of buckets, sampling threshold, and number of threads are used for training CBoW, SG [25], and GloVe [27]. 6 Word similarity comparison of word embeddings 6.1 Nearest neighboring words The cosine similarity matrix [36] is a popular approach for computing the relevance of vocabulary words to a query word across all embedding dimensions. Words with similar contexts receive high cosine similarity and geometrical relatedness in Euclidean distance, which is a common and primary method for measuring the distance between a set of words and their nearest neighbors. For each query word, the most similar top eight nearest neighboring words are determined by the highest cosine similarity score using Eq. 9.
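The top-eight neighbor retrieval just described can be sketched over a toy embedding table; the function names, the toy vocabulary, and the two-dimensional vectors below are illustrative assumptions, not the actual Sindhi embeddings:

```python
import math

def cosine(a, b):
    """Cosine similarity of two non-zero vectors (Eq. 9)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def nearest_neighbors(query, embeddings, k=8):
    """Rank all other vocabulary words by cosine similarity to the query."""
    q = embeddings[query]
    scored = [(w, cosine(q, v)) for w, v in embeddings.items() if w != query]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:k]
```

With the default k=8 this returns exactly the top-eight neighbor lists discussed in Section 6.1.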
We present the English translations of both the query and retrieved words, and discuss their English meanings for ease of relevance judgment between the query and retrieved words. To take a closer look at the semantic and syntactic relationships captured in the proposed word embeddings, Table 6 shows the top eight nearest neighboring words of five different query words Friday, Spring, Cricket, Red, Scientist taken from the vocabulary. The first query word Friday returns the names of days Saturday, Sunday, Monday, Tuesday, Wednesday, Thursday in an unordered sequence. SdfastText returns five names of days Sunday, Thursday, Monday, Tuesday and Wednesday, respectively. The GloVe model also returns five names of days. However, CBoW and SG give six names of days, all except Wednesday, along with different written forms of the query word Friday in the Sindhi language, which shows that CBoW and SG return more relevant words than SdfastText and GloVe. CBoW returned Add and GloVe returned Honorary, words which are only slightly similar to the query word, but SdfastText returned two irrelevant words: Kameeso (N), which is a person’s name (N) in Sindhi, and a phrase combining three Sindhi words that were not tokenized properly. Similarly, the nearest neighbors of the second query word Spring are retrieved accurately by CBoW, SG and GloVe as names and seasons semantically related to the query word, but SdfastText returned four irrelevant words out of eight: Dilbahar (N), Phrase, Ashbahar (N) and Farzana (N). The third query word is Cricket, the name of a popular game. The first retrieved word in CBoW is Kabadi (N), a popular national game in Pakistan. Including Kabadi (N), all the words returned by CBoW, SG and GloVe are related to the game of Cricket or are names of other games.
However, the first word retrieved by SdfastText, Gone.Cricket, consists of two words joined by a punctuation mark (.), which reveals a tokenization error in the preprocessing step; the sixth retrieved word, Misspelled, is a combination of three words not related to the query word, and Played and Being played are irrelevant and stop words. Moreover, the fourth query word Red gave results that contain words closely related to the query word and different written forms of the query word in the Sindhi language. The last word returned by SdfastText, Unknown, is irrelevant and not found in the Sindhi dictionary for translation. The last query word Scientist also retrieves semantically related words from CBoW, SG, and GloVe, but the first word given by SdfastText belongs to the Urdu language, which suggests that the SdfastText vocabulary may also contain words of other languages. Another unknown word returned by SdfastText does not have any meaning in the Sindhi dictionary. Further interesting observations in the presented results are the diacritized words retrieved from our proposed word embeddings and the correct tokenization in the preprocessing step presented in Figure 1. SdfastText, however, returned multi-word phrases for the query words Friday and Spring, and misspelled words for the query words Cricket and Scientist. Hence, the overall performance of our proposed SG, CBoW, and GloVe embeddings demonstrates high semantic relatedness in retrieving the top eight nearest neighbor words. 6.2 Word pair relationship Generally, closer words are considered more important to a word’s meaning. Word embedding models have the ability to capture the lexical relations between words, and identifying the relationships that connect words is important in NLP applications. We measure such semantic relationships by calculating the cosine similarity of two vectors using Eq. 9.
A high cosine similarity score denotes closer words in the embedding space, while a lower cosine similarity score means a greater distance between the word pair. We present the cosine similarity scores of different semantically or syntactically related word pairs taken from the vocabulary in Table 7 along with their English translations; the average similarities are 0.632, 0.650 and 0.591, yielded by CBoW, SG and GloVe respectively. The SG model achieved the highest average similarity score of 0.650, followed by CBoW with a 0.632 average similarity score. GloVe also achieved a considerable average score of 0.591. However, the average similarity score of SdfastText is 0.388, and the word pair Microsoft-Bill Gates is not available in the vocabulary of SdfastText. This shows that, along with performance, the vocabulary of SdfastText is also limited as compared to our proposed word embeddings. Moreover, the average semantic relatedness between countries and their capitals is shown in Table 8 with English translations, where SG again yields the best average score of 0.663, followed by CBoW with a 0.611 similarity score. GloVe also yields good semantic relatedness of 0.576, and SdfastText yields an average score of 0.391. The first word pair China-Beijing is not available in the vocabulary of SdfastText. However, the similarity score between Afghanistan-Kabul is lower in our proposed CBoW, SG and GloVe models because the word Kabul is the name of the capital of Afghanistan but also frequently appears as an adjective in Sindhi text, where it means able. 6.3 Comparison with WordSim353 We evaluate the performance of our proposed word embeddings on the WordSim353 dataset by translating its English word pairs to Sindhi. Due to vocabulary differences between English and Sindhi, we were unable to find the authentic meanings of six terms, so we left them untranslated. Hence, our final Sindhi WordSim353 consists of 347 word pairs.
Table 9 shows the Spearman correlation results, computed using Eq. 10, of embeddings of different dimensions on the translated WordSim353, with the different ws values for CBoW, SG and GloVe; ws=7 yields better performance than ws of 3 and 5, respectively. The SG model outperforms CBoW and GloVe in semantic and syntactic similarity, achieving a performance of 0.629 with ws=7. In comparison, for English, [28] achieved average semantic and syntactic similarities of 0.637 and 0.656 with CBoW and SG, respectively. Therefore, despite the challenges in translation from English to Sindhi, our proposed Sindhi word embeddings have efficiently captured the semantic and syntactic relationships. 6.4 Visualization We use the t-Distributed Stochastic Neighbor Embedding (t-SNE) dimensionality reduction algorithm [37] with PCA [38] for exploratory analysis of the embeddings in a 2-dimensional map. The t-SNE is a non-linear dimensionality reduction algorithm for the visualization of high-dimensional datasets. It first computes the probabilities of similar word clusters in the high-dimensional space and then the probabilities of similar points in the corresponding low-dimensional space. The purpose of t-SNE visualization of word embeddings is to keep similar words close together in 2-dimensional $x,y$ coordinate pairs while maximizing the distance between dissimilar words. The t-SNE has a tunable perplexity (PPL) parameter used to balance the data points at both the local and global levels. We visualize the embeddings using PPL=20 on 5000 iterations of the 300-D models. We use the same query words (see Table 6), retrieving the top 20 nearest neighboring word clusters for a better understanding of the distances between similar words. Every query word has a distinct color for clear visualization of each group of similar words. Closer word clusters show higher similarity between the query and retrieved word clusters. The word clusters in SG (see Fig.
5) are closer to their groups of semantically related words. The CBoW model depicted in Fig. 4 and GloVe in Fig. 6 also show better cluster formation of words than SdfastText in Fig. 7, respectively. 7 Discussion and future work In this information age, the existence of LRs plays a vital role in the digital survival of natural languages, because NLP tools are used to process flows of unstructured data from disparate sources. It is imperative to mention that Sindhi Persian-Arabic is presently frequently used in online communication, newspapers, and public institutions in Pakistan and India. Due to the growing use of Sindhi on web platforms, the need for Sindhi LRs is also increasing for the development of language technology tools. However, little work has been carried out on the development of resources, which is insufficient for designing language-independent tools or machine learning algorithms. The present work is the first comprehensive initiative on resource development, together with its evaluation, for statistical Sindhi language processing. More recently, NN-based approaches have produced state-of-the-art performance in NLP by exploiting unsupervised word embeddings learned from large unlabelled corpora. Such word embeddings have also motivated work on low-resourced languages. Our work mainly consists of novel contributions of resource development along with a comprehensive evaluation for the utilization of NN-based approaches in SNLP applications. The large corpus obtained from multiple web resources is utilized for the training of word embeddings using the SG, CBoW and GloVe models. The intrinsic evaluation, along with comparative results, demonstrates that the proposed Sindhi word embeddings have captured the semantic information more accurately than the recently released SdfastText word vectors. The SG model yields the best results in nearest neighbors, word pair relationships and semantic similarity.
The performance of CBoW is also close to SG in all the evaluation metrics. GloVe also yields good word representations; however, the SG and CBoW models surpass the GloVe model on all evaluation metrics. Hyperparameter optimization is as important as designing a new algorithm, and the choice of optimal parameters is a key aspect of the performance gain in learning robust word embeddings. Moreover, we observed that the size of the corpus and careful preprocessing steps have a large impact on the quality of word embeddings. From an algorithmic perspective, the character-level learning approach in SG and CBoW improves the quality of representation learning, while the window size, learning rate and number of epochs are the core parameters that largely influence the performance of the word embedding models. Ultimately, the new corpus of the low-resourced Sindhi language, the list of stop words and the pretrained word embeddings, along with their empirical evaluation, will be a good supplement for future research in SNLP applications. In the future, we aim to use the corpus for annotation projects such as part-of-speech tagging and named entity recognition. The proposed word embeddings will be refined further by creating custom benchmarks, and the extrinsic evaluation approach will be employed for the performance analysis of the proposed word embeddings. Moreover, we will also utilize the corpus with Bidirectional Encoder Representations from Transformers (BERT) [14] for learning deep contextualized Sindhi word representations. Furthermore, the generated word embeddings will be utilized for the automatic construction of a Sindhi WordNet. 8 Conclusion In this paper, we present three novel contributions. Firstly, a large corpus is developed, containing more than 61 million tokens with 908,456 unique words. Secondly, a list of Sindhi stop words is constructed by identifying their high frequency and low importance with the help of a Sindhi linguistic expert.
Thirdly, unsupervised Sindhi word embeddings are generated using the state-of-the-art CBoW, SG and GloVe algorithms and evaluated using the popular intrinsic evaluation approaches of the cosine similarity matrix and WordSim353, for the first time in Sindhi language processing. We translated the English WordSim353 using an English-Sindhi bilingual dictionary, which will also be a good resource for the evaluation of Sindhi word embeddings. Moreover, the proposed word embeddings are compared with the recently released SdfastText word representations. Our empirical results demonstrate that our proposed Sindhi word embeddings have captured high semantic relatedness in nearest neighboring words, word pair relationships, country-capital relationships, and WordSim353. The SG model yields the best performance, followed by CBoW and GloVe. The performance of GloVe is lower on the same vocabulary because it lacks the character-level learning of word representations and the sub-sampling approaches used in SG and CBoW. Our proposed Sindhi word embeddings surpass SdfastText in the intrinsic evaluation metrics. Also, the vocabulary of SdfastText is limited because it is trained on a small Wikipedia corpus of Sindhi Persian-Arabic. We will further investigate the extrinsic performance of the proposed word embeddings on the Sindhi text classification task in the future. The proposed resources, along with their systematic evaluation, will be a sophisticated addition to the computational resources for statistical Sindhi language processing. References [1] Jennifer Cole. Sindhi. In Encyclopedia of Language & Linguistics, Volume 8, 2006. [2] Raveesh Motlani. Developing language technology tools and resources for a resource-poor language: Sindhi. In Proceedings of the NAACL Student Research Workshop, pages 51–58, 2016. [3] Hidayatullah Shaikh, Javed Ahmed Mahar, and Mumtaz Hussain Mahar. Instant diacritics restoration system for sindhi accent prediction using n-gram and memory-based learning approaches.
International Journal of Advanced Computer Science and Applications, 8(4):149–157, 2017. [4] Abdul-Majid Bhurgri. Enabling pakistani languages through unicode. Microsoft Corporation white paper at http://download.microsoft.com/download/1/4/2/142aef9f-1a74-4a24-b1f4-782d48d41a6d/PakLang.pdf, 2006. [5] Wazir Ali Jamro. Sindhi language processing: A survey. In 2017 International Conference on Innovations in Electrical Engineering and Computational Technologies (ICIEECT), pages 1–8. IEEE, 2017. [6] Edward Loper and Steven Bird. Nltk: the natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics-Volume 1, pages 63–70. Association for Computational Linguistics, 2002. [7] Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. The stanford corenlp natural language processing toolkit. In Proceedings of 52nd annual meeting of the association for computational linguistics: system demonstrations, pages 55–60, 2014. [8] Wanxiang Che, Zhenghua Li, and Ting Liu. Ltp: A chinese language technology platform. In Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations, pages 13–16. Association for Computational Linguistics, 2010. [9] Martin Popel and Zdeněk Žabokrtskỳ. Tectomt: modular nlp framework. In International Conference on Natural Language Processing, pages 293–304. Springer, 2010. [10] Lluís Padró, Miquel Collado, Samuel Reese, Marina Lloberes, and Irene Castellón. Freeling 2.1: Five years of open-source language processing tools. In 7th International Conference on Language Resources and Evaluation, 2010. [11] Waqar Ali Narejo and Javed Ahmed Mahar. Morphology: Sindhi morphological analysis for natural language processing applications. In 2016 International Conference on Computing, Electronic and Electrical Engineering (ICE Cube), 2016. [12] Yang Li and Tao Yang. 
Word embedding for understanding natural language: a survey. In Guide to Big Data Applications, pages 83–104. Springer, 2018. [13] Ronan Collobert and Jason Weston. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM, 2008. [14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, 2019. [15] Mutee U Rahman. Towards sindhi corpus construction. In Conference on Language and Technology, Lahore, Pakistan, 2010. [16] Fida Hussain Khoso, Mashooque Ahmed Memon, Haque Nawaz, and Sayed Hyder Abbas Musavi. To build corpus of sindhi. 2019. [17] Mazhar Ali Dootio and Asim Imdad Wagan. Unicode-8 based linguistics data set of annotated sindhi text. Data in brief, 19:1504–1514, 2018. [18] Mazhar Ali Dootio and Asim Imdad Wagan. Development of sindhi text corpus. Journal of King Saud University-Computer and Information Sciences, 2019. [19] Mazhar Ali and Asim Imdad Wagan. Sentiment summerization and analysis of sindhi text. Int. J. Adv. Comput. Sci. Appl, 8(10):296–300, 2017. [20] Kevin Lund and Curt Burgess. Producing high-dimensional semantic spaces from lexical co-occurrence. Behavior research methods, instruments, & computers, 28(2):203–208, 1996. [21] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Advances in neural information processing systems, pages 3111–3119, 2013. [22] Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Jauvin. A neural probabilistic language model. Journal of machine learning research, 3(Feb):1137–1155, 2003. 
Effects of laser frequency drift in phase-sensitive optical time-domain reflectometry fiber sensors A.A. Zhirnov${}^{1}$    A.K. Fedorov${}^{1,2}$    K.V. Stepanov${}^{1}$    E.T. Nesterov${}^{1}$    V.E. Karasik${}^{1}$    C. Svelto${}^{1,3}$    A.B. Pnev${}^{1}$ ${}^{1}$Bauman Moscow State Technical University, 2nd Baumanskaya St. 5, Moscow 105005, Russia ${}^{2}$Theoretical Department, DEPHAN, Novaya St. 100, Skolkovo, Moscow 143025, Russia ${}^{3}$Dipartimento di Elettronica e Informazione del Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano, Italy (November 20, 2020) Abstract The present work studies the influence of laser frequency drift on the operation of phase-sensitive optical time-domain reflectometry ($\Phi$-OTDR) fiber sensors. A mathematical model and numerical simulations are employed to highlight the influence of frequency drifts of light sources on two characteristic scales: large-time (minutes) and short-time (milliseconds) drifts. The numerical simulation results are compared with predictions given by the fluctuation ratio coefficient (FRC) and are in qualitative agreement with them. In addition to the qualitative criteria for light sources given by the FRC, quantitative requirements for optimal light sources for $\Phi$-OTDR sensors are obtained. The numerical results are verified by comparison with experimental data for three significantly different types of light source. I Introduction Distributed vibration sensors based on the $\Phi$-OTDR technique are highly promising for remote monitoring of long-distance objects such as bridges and pipelines Bao . In contrast to conventional OTDR sensing devices, probing the optical fiber with a narrowband light source allows locating perturbations by their effect on the backscattering signals (see Fig. 1), which are detected by taking into account their phases Taylor ; Park ; Choi . 
Over the last decades, $\Phi$-OTDR based sensing systems have attracted a great deal of interest Taylor ; Martins ; Choi ; Park ; Juarez ; Martins2 ; Rao ; Lu ; Pnev . Recent developments in the manufacturing of fiber-optic components have resulted in a significant decrease of the production cost of such sensing devices. However, phase-sensitive registration of backscattered light waves is possible only if the coherence length of the light source is not less than the pulse duration Taylor ; Martins ; Choi ; Park ; Juarez ; Martins2 ; Rao ; Lu . This requirement makes the light source one of the most expensive parts of such sensors. In general, the quality of a sensing system for remote monitoring of extended objects is determined by its ability to detect perturbations caused by activity near the sensor fiber within a background, where this background is formed both by external noises (e.g., seismic noises) and by the self-noises of the system. In order to maximize the rate of recognized (i.e., detected and classified) perturbations, $\Phi$-OTDR based sensors have been studied thoroughly via numerical simulations Zhong ; Alekseev ; Liokumovich ; Alekseev2 ; Alekseev3 ; Li ; Pnev . A recent study Zhong has shed new light on the role of the laser source in $\Phi$-OTDR based sensors. In particular, it has been demonstrated that a frequency drift of the source can suppress the signal-to-noise ratio (SNR) and may even cause failures. Numerical simulations based on calculating the FRC for different light sources take into account only the frequency drift on large-time scales (with respect to the characteristic operation time of $\Phi$-OTDR systems, which is on the order of milliseconds). This allows comparing a number of light sources and revealing the best one among them. Thus, this method can be improved by developing quantitative criteria for laser sources for $\Phi$-OTDR based sensors. 
One may also suppose that short-time scales are important for the operation of $\Phi$-OTDR based sensing systems, since sufficiently fast frequency drifts can be a reason for failures. In the present work, we study the effect of frequency drifts of laser sources in $\Phi$-OTDR based sensing systems using experimental tests, numerical simulation, and experimental verification of the suggested model. First of all, we demonstrate experimentally the importance of the frequency drift both on short-time and large-time scales by measuring the rate of events correctly recognized in the background for three light sources with different characteristics. On the basis of insights from the experiment, we perform numerical simulations based on a mathematical model that takes into account two characteristic time scales of the frequency drift of laser sources. Thus, requirements for the frequency drift of sources in $\Phi$-OTDR based sensors (additional to the ones given by the FRC from Ref. Zhong ), aimed at maximizing the rate of correctly recognized events, are formulated. Our paper is organized as follows. In Sec. II, we present experimental results on measurements of the rate of correctly recognized events for three light sources with different characteristics Note . We emulate events by shock-like perturbations of fixed amplitude and fixed duration. Experimental insights on the relation between the frequency-drift shape of the source and the recognition rate force us to include the frequency-drift behaviour in the mathematical model, which is described in Sec. III. We conclude and formulate additional criteria and quantitative requirements for laser sources for $\Phi$-OTDR based sensing systems in Sec. IV. II Experimental tests In order to obtain an insight into the role of the source frequency drift, we perform experimental measurements of the rate of correctly recognized events in the self-noises of the system, with other background noises being negligible. 
We use the following experimental setup (see Fig. 1). A probe signal pulse from one of three laser sources, which are characterized by power ($I$), linewidth ($\Delta{\nu}$; with respect to a time of 20 ms), and coherence length ($l^{\rm coh}$), is launched into the optical fiber (the length of the fiber cable is about 25 km). The characteristics of the investigated light sources are presented in Table 1. Backscattered light waves are summed as amplitudes, i.e., by taking into account their phases. Since the phases are randomly distributed, the measurement result is a randomly modulated signal. While the fiber is idle, the phases are not affected, and the shape of the signal does not change. Any perturbation (event) near the optical fiber generates an acoustic wave that affects the fiber by displacing the scattering centers. For each source, we create a set of test events by placing a source of shock-like impacts of fixed amplitude and duration near a fixed point of the fiber cable. We simultaneously measure space-time intensity distributions formed by a set of sequentially measured signals. The measured space-time field distribution $I(l,t)$ is the input for the filtering procedure based on the continuous wavelet transform of the following form $$\mathcal{W}\left[a,b\right]=\int_{t=t_{1}}^{T}{I(l,t)\psi_{a,b}(t)dt}.$$ (1) Here $$\psi_{a,b}(t)\equiv\psi\left((t-b)/{a}\right),$$ (2) $a$ is the wavelet scale and $b$ is the wavelet shift. For our signals, we use $a{=}10$, with $\psi$ being the Symlet 8 wavelet function, which is nearly symmetric, orthogonal, and biorthogonal. After the transform (1), we apply exponential smoothing with the coefficient $\alpha_{\rm F}=0.05$. To detect an event in the background, we use a simple procedure based on detecting when the signal exceeds the threshold $I_{c}$, which is highlighted by red lines in Fig. 2. We also note that such a procedure is the basic step of the recognition algorithm for $\Phi$-OTDR based sensors Fedorov . 
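The filtering chain of Eqs. (1)-(2) followed by exponential smoothing and threshold detection can be sketched in a few lines. This is a minimal illustration rather than the actual processing pipeline: a Ricker ("Mexican hat") wavelet is used as a stand-in for the Symlet 8 function, and the signal length, scale, and threshold below are placeholder assumptions.

```python
import numpy as np

def ricker(t):
    # Ricker wavelet, a stand-in here for the Symlet 8 used in the paper
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def cwt_row(signal, a=10, wavelet=ricker):
    # Discrete version of Eq. (1): W[a, b] = sum_t I(t) * psi((t - b) / a)
    t = np.arange(len(signal))
    return np.array([np.sum(signal * wavelet((t - b) / a)) for b in t])

def exp_smooth(x, alpha=0.05):
    # Exponential smoothing with coefficient alpha_F = 0.05
    y = np.empty_like(x, dtype=float)
    y[0] = x[0]
    for i in range(1, len(x)):
        y[i] = alpha * x[i] + (1 - alpha) * y[i - 1]
    return y

def detect(filtered, threshold):
    # Basic event detection: indices where the filtered signal exceeds I_c
    return np.flatnonzero(np.abs(filtered) > threshold)
```

A short pulse injected into an otherwise flat trace then shows up as a localized excursion after `cwt_row` and `exp_smooth`, and `detect` returns the samples above the threshold.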
In our tests, we are primarily interested in the links between the rate $R_{\rm det}$ of correctly detected events, the frequency-drift behaviour, and the parameters of the light source. We define the detection rate $R_{\rm det}$ in a straightforward way: $$R_{\rm det}=\frac{N_{\rm correct}}{N_{\rm total}}-\frac{N_{\rm incorrect}}{N_{\rm total}},$$ (3) where $N_{\rm total}$ is the total number of events, $N_{\rm correct}$ is the number of correctly recognized events, and $N_{\rm incorrect}$ is the number of incorrectly recognized events (failures). In fact, $R_{\rm det}$ is the ratio of correctly detected perturbations to their total number during the experiment, corrected by the number of incorrectly recognized events. During the tests, $N_{\rm total}$ is on the order of tens of events. Field distributions for the three laser sources before and after filtering, as well as the frequency drifts measured for the sources during the tests, are presented in Fig. 2. We start the analysis of our tests by calculating the FRC Zhong . We calculate the mean value of the FRC over $10^{2}$ counts along the length of the sensor cable as follows: $$\langle{\rm FRC}\rangle_{l}=\sum_{m=1}^{L}\left(\frac{1}{M_{k}-1}\sum_{n=2}^{M_{k}}{\frac{\left|{I(l_{m},t_{n})-I(l_{m},t_{n-1})}\right|}{\tau{L}}}\right),$$ (4) where $L$ is the total number of length counts of the sensor cable, $M_{k}$ is the total number of time counts during the measurements, $I(l_{m},t_{n})$ is the registered intensity from the $m$-th count of the cable at the $n$-th count of time, and $\tau$ is the pulse duration. Our results confirm the influence of laser-source instability predicted in Ref. Zhong . The use of the FRC allows one to establish on the qualitative level that Sample (1) is worse than Sample (2) and Sample (3). Note that the FRCs for Sample (2) and Sample (3) are close to each other. Thus, it is difficult to opt for Sample (3), which demonstrates the best recognition rate, by the FRC analysis only. 
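The detection rate of Eq. (3) and the mean FRC of Eq. (4) are easy to compute from a measured intensity matrix. The sketch below assumes the intensity difference in Eq. (4) is taken between consecutive time counts, which is one plausible reading of the formula; it is an illustration, not the authors' code.

```python
import numpy as np

def detection_rate(n_correct, n_incorrect, n_total):
    # Eq. (3): R_det = N_correct / N_total - N_incorrect / N_total
    return (n_correct - n_incorrect) / n_total

def mean_frc(I, tau):
    # Eq. (4), reading |I(l_m, t_n) - I(l_m, t_{n-1})| as a consecutive-frame
    # difference. I has shape (L, M_k): L length counts x M_k time counts;
    # tau is the pulse duration.
    L, Mk = I.shape
    per_point = np.abs(np.diff(I, axis=1)).sum(axis=1) / (Mk - 1)
    return per_point.sum() / (tau * L)
```

For a perfectly static trace the FRC vanishes; stronger frame-to-frame intensity fluctuations (e.g., from source frequency drift) raise it.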
From the results of our tests, presented in Table 1 and Fig. 2, we can conclude that Sample (3) demonstrates the lowest frequency drift among the tested laser sources (Fig. 2) and the highest recognition rate, since all events are recognized in the background. Nevertheless, opting for Sample (3) is possible only from an analysis of both large-time and short-time scale frequency drifts. The sample with the lowest short-time scale frequency drift shows the best results from the viewpoint of detection of external perturbations. We note that Sample (1) has the narrowest linewidth. Counterintuitively, Sample (1) has shown the worst results; however, it exhibits much stronger short-time frequency drifts. The setup with this sample has failed to recognize a number of test events. Therefore, one can conclude that it is unsuitable for $\Phi$-OTDR based sensors. Sample (2) has demonstrated better results compared with Sample (1); the setup with Sample (2) was able to recognize all test events in the background. However, the frequency-drift region [see Fig. 2(b)] has become a source of false alarms of the setup. We also draw attention to the fact that the measurements demonstrate that Sample (1) and Sample (2), with similar behaviour on the large-time scale ($v_{1}(t)\approx 40$ MHz/min and $v_{2}(t)\approx 40$ MHz/min), show very different results in the detection of test events in the background. However, these sources differ in their short-time scale frequency drifts: $v_{1}(t)\approx 25$ MHz/20ms and $v_{2}(t)\approx 0.5$ MHz/20ms. We also point out that Sample (3), which demonstrates the best result, has the lowest short-time scale drift: $v_{3}(t)\approx 0.3$ MHz/20ms. The combination of the best characteristics on both time scales is crucial. Thus, frequency changes on short-time scales are important from the viewpoint of detection of external perturbations. 
Therefore, they should be taken into account on the path to formulating quantitative requirements for optimal light sources for $\Phi$-OTDR based sensing devices. III Numerical simulations The insight from the experimental data is that the frequency drift on different time scales plays a crucial role for $\Phi$-OTDR based sensing systems from the viewpoint of recognition of events in the background noise. We describe a mathematical model accounting for this peculiarity. Using this model, we simulate output signals for light sources with different characteristics Note . First, we verify our model by direct comparison of its predictions with the collected experimental data. After that, we compare different light sources from the viewpoint of registration of events in the background. Finally, the numerical results allow us to formulate quantitative criteria for selecting proper laser sources for $\Phi$-OTDR based sensors, on top of the qualitative requirements given by the FRC calculation Zhong . The suggested impulse–response mathematical model is one dimensional. The model is based on the process of detection of the optical signal after its interaction with the scattering centers in the sensor fiber. The fiber is characterized by damping and dissipative factors. In turn, the light source is characterized by the same parameters as in the experiment (power, linewidth, and coherence length). The source power is important from the viewpoint of the SNR in the detection part of the sensor and of the generation of nonlinear effects in the sensor fiber Pnev2 . The bandwidth of the signal and the coherence length define the maximal distance between scattering centers that have an impact on the signal intensity. Finally, source frequency drifts are crucial for the SNR, and they are important both on short-time and large-time scales. The suggested mathematical model is based on the fact that the resulting signal after scattering on all centres can be viewed as a random complex-valued signal. 
We assume that the damping factor of the optical fiber is fixed at the level of $0.17$ dB/km. The scattering factor has a more sophisticated structure. The amplitude distribution of the scattered signals has the Rayleigh form and the phase distribution is uniform Alekseev3 ; Goodman : $$p(a)=\left\{\begin{array}{ll}\frac{a}{\sigma^{2}}\exp\left[-\frac{a^{2}}{2\sigma^{2}}\right],&\mbox{ if }a>0,\\ 0,&\mbox{ if }a\leq 0,\end{array}\right.\qquad p(\phi)=\left\{\begin{array}{ll}\frac{1}{2\pi},&\mbox{ if }\phi\in\left({-\pi,\pi}\right],\\ 0,&\mbox{ otherwise}.\end{array}\right.$$ (5) We assume that the amplitude $a$ and phase $\phi$ of the signals are independent, the amplitudes have the same distribution across the entire length of the sensor fiber, and the phases have a uniform distribution in the interval $\left(-\pi,\pi\right]$. The process of detection of the signal can be described by an impulse–response model as follows. For a laser source with a sufficiently narrow linewidth, the following expression holds for the backscattering intensity: $$I(T)=I_{0}\alpha(t)\left|\sum\nolimits_{i=1}^{N}{a_{i}\exp\left({-j\phi_{i}}\right)}\exp\left[-\left(\frac{Tc/n-l_{i}}{l^{\rm coh}}\right)^{2}\right]\Theta\left(\frac{Tc/n-l_{i}}{l^{\rm imp}}\right)\right|^{2}.$$ (6) Here $I(T)$ is the intensity of the signal before the detecting part (see Fig. 1), $I_{0}$ is the source power, $i$ enumerates the scattering centers, $a_{i}$ and $\phi_{i}$ are the scattering amplitude and phase of the $i$-th center [distributed according to Eq. (5)], $T$ is the travel time of the signal, which can easily be recalculated into the length coordinate as $z=Tc/n$, where $n$ is the refractive index of the fiber and $c$ is the vacuum light speed, $\alpha(t)$ is the optical signal damping in the fiber, and $l^{\rm imp}=\tau{c}/n$ is the pulse length in the fiber. 
Assuming $l^{\rm coh}\gg{l^{\rm imp}}$, one can neglect the temporal dependence of the exponential term in the square brackets: $$\gamma\simeq\exp\left[-\left(\frac{Tc/n-l_{i}}{l^{\rm coh}}\right)^{2}\right].$$ (7) Therefore, we transform (6) as follows: $$I(T)=I_{0}\gamma\alpha(t)\left|\sum\nolimits_{i=1}^{N}{a_{i}\exp\left({-j\phi_{i}}\right)}\right|^{2},$$ (8) where $N$ is the total number of scattering centers. In the numerical simulations, we assume that a standard scattering center has the volume ${200}\times{200}\times{200}$ nm${}^{3}$ Bunge . Therefore, the total number of scattering centers in a 10 m region of the fiber is on the order of $10^{11}$. To reduce the calculation time, we use the fact that a sum of Rayleigh-distributed amplitudes again follows a Rayleigh distribution. One can then approximate the summation over ${\sim}10^{11}$ scattering centers by a calculation over ${\sim}10^{3}$ effective scattering centers. The scattering phases of the centers are calculated efficiently using the expression $$\phi_{i}=2\pi{l_{i}}/\lambda,$$ (9) where $\lambda$ is the laser wavelength. One also has to take into account the filtering of the high-frequency part of the signal in the detecting part of the sensing system. Thus, we have $$I_{p}(T)=I_{0}\gamma\alpha(t)\left|\sum\nolimits_{i=1}^{N}{a_{i}\exp\left({-j\phi_{i}}\right)}\right|^{2}\otimes\,\delta_{\rm det},$$ (10) where $\delta_{\rm det}$ is the detector impulse response (Fig. 1). In this formalism, we can consider the process of formation of the response signal with respect to perturbations. A perturbation leads to displacements of the scattering centers. Since the parameter $l_{i}$ is changed for the $i$-th scattering center, we should take into account the resulting variation of the scattering phases. 
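The static speckle pattern of Eq. (8), with Rayleigh amplitudes from Eq. (5) and phases from Eq. (9), can be sketched as follows. The number of effective centers, the fiber span, and $\sigma$ are illustrative placeholders rather than values from the experiment, and the factors $I_0$, $\gamma$, $\alpha$ are kept as plain multipliers.

```python
import numpy as np

def backscatter_intensity(n_centers=1000, wavelength=1.55e-6, span=10.0,
                          sigma=1.0, I0=1.0, gamma=1.0, alpha=1.0, seed=0):
    # Toy version of Eq. (8): I = I0 * gamma * alpha * |sum_i a_i exp(-j phi_i)|^2
    rng = np.random.default_rng(seed)
    l = rng.uniform(0.0, span, n_centers)   # effective center positions (m)
    a = rng.rayleigh(sigma, n_centers)      # Rayleigh amplitudes, Eq. (5)
    phi = 2.0 * np.pi * l / wavelength      # phases from positions, Eq. (9)
    field = np.sum(a * np.exp(-1j * phi))   # coherent sum of complex amplitudes
    return I0 * gamma * alpha * np.abs(field) ** 2
```

Because the phases are effectively uniform on $(-\pi,\pi]$, repeated draws of the center configuration reproduce the randomly modulated speckle intensity described in Sec. II, and the result scales linearly with the source power $I_0$.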
For simplicity, one can assume that the variation of the scattering-center positions can be described as follows: $$\Delta{l}_{i}(t)=A\cos\left[\frac{2\pi(t-T_{s})}{\tau_{s}}\right]\exp\left[-\frac{t-T_{s}}{\tau_{a}}\right],$$ (11) where $A$ is the perturbation amplitude, $T_{s}$ is the time moment of the impact of the perturbation, $\tau_{s}$ is the period of the perturbation-induced oscillations, and $\tau_{a}$ is the damping constant. By substituting (11) into (9), we obtain periodic phase oscillations in the following form: $$\phi_{i}(t)=\frac{2\pi}{\lambda}\left[{l_{i}}+\Delta{l_{i}}(t)\right].$$ (12) This expression, substituted into Eq. (10), allows one to obtain the signal detected by the system in the absence of a background. It should be noted that the developed model assumes that the AOM creates perfect (in terms of zero-one amplitude) laser pulses. However, the AOM has a finite extinction ratio $k_{e}$ of about 55 dB. Thus, this imperfection can be of interest for our system. Nevertheless, the negative effects of this imperfection are not relevant in the case where the extinction ratio is much greater than the ratio between the length of the fiber and the length of the pulse: $$k_{e}\gg{l_{\rm fiber}/l_{\rm pulse}}.$$ (13) This is the case for our setup, since $k_{e}$ is about 55 dB and $l_{\rm fiber}/l_{\rm pulse}$ is close to 2500, i.e., to 34 dB. In addition, we should take into account additional position variations of the scattering centers due to thermal fluctuations, which can generally be described as white noise in the following form: $$\phi_{i}(t)=\frac{2\pi}{\lambda}\left[{l_{i}}+\Delta{l_{i}}(t)+l_{N}(t)\right],$$ (14) where $l_{N}(t)$ describes the position variation of the scattering centers. 
Due to the frequency drift $\nu(t)$, one has: $$\phi_{i}(t)=\frac{2\pi}{c}\left(\nu+\nu(t)\right)\left[{l_{i}}+l_{N}(t)\right].$$ (15) Here we distinguish two time scales: $$\nu(t)=\nu_{l}(t)+\nu_{s}(t),$$ (16) where $\nu_{l}(t)$ stands for the large-time frequency drift (scales of minutes) and $\nu_{s}(t)$ stands for the short-time frequency drift (scales of milliseconds). In the detection part, there are additional sources of noise. They can be accounted for as white noise $N_{p}(t)$ as follows: $$I_{p}(t)=I_{0}\gamma\alpha(t)\left|\sum\nolimits_{i=1}^{N}{a_{i}\exp\left({-j\phi_{i}}(t)\right)}\right|^{2}\otimes\,\delta_{\rm det}+N_{p}(t).$$ (17) Considering all of these imperfections, we finally obtain the following relation: $$I_{p}(t)=N_{p}(t)+I_{0}\gamma\alpha(t)\left|\sum\nolimits_{i=1}^{N}{a_{i}}\exp\left(-\frac{j2\pi}{c}\left[\nu{+}\nu_{l}(t){+}\nu_{s}(t)\right]\left[{l_{i}}{+}l_{N}(t){+}\Delta{l}_{i}(t)\right]\right)\right|^{2}\otimes\delta_{\rm det}.$$ (18) Eq. (18) is the quintessence of the suggested mathematical model for $\Phi$-OTDR sensing systems. This expression describes the signal $I_{p}(t)$ registered by the $\Phi$-OTDR sensor at the time moment $t$. This signal consists of the noise component $N_{p}(t)$, which can be treated using the method described in Ref. Pnev2 , and the useful signal. The input data for Eq. (18) are the signal power $I_{0}$, the coefficient $\gamma$ (7) depending on the coherence properties of the laser source, and the optical signal damping in the fiber $\alpha(t)$. The summation runs over the scattering centers. We point out that an important ingredient of this model is taking into account the two time scales $\nu_{l}(t)$ and $\nu_{s}(t)$. 
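The two-scale phase model of Eqs. (15)-(18) can be sketched as below. This is a schematic, not the authors' code: the short-time drift is modeled as a simple sinusoid, the convolution with the detector impulse response and the damping factors are omitted, and all numerical values are placeholder assumptions.

```python
import numpy as np

C = 3.0e8  # vacuum light speed, m/s

def phases(l, t, nu0, v_large, v_small, l_noise_std, rng):
    # Eqs. (15)-(16): phi_i(t) = (2*pi/c) [nu0 + nu_l(t) + nu_s(t)] [l_i + l_N(t)]
    nu_l = v_large * t                                  # large-time drift (Hz/s)
    nu_s = v_small * np.sin(2.0 * np.pi * 50.0 * t)     # toy short-time drift (Hz)
    l_N = rng.normal(0.0, l_noise_std, l.shape)         # thermal position noise
    return 2.0 * np.pi / C * (nu0 + nu_l + nu_s) * (l + l_N)

def detected_intensity(a, l, t, nu0, v_large, v_small,
                       l_noise_std=1e-9, noise_std=0.0, rng=None):
    # Eq. (18) without the detector-response convolution:
    # I_p(t) = N_p(t) + |sum_i a_i exp(-j phi_i(t))|^2
    # (the prefactor I0 * gamma * alpha is absorbed into the amplitudes here)
    rng = rng or np.random.default_rng(0)
    phi = phases(l, t, nu0, v_large, v_small, l_noise_std, rng)
    speckle = np.abs(np.sum(a * np.exp(-1j * phi))) ** 2
    return speckle + rng.normal(0.0, noise_std)
```

Sweeping `t` with a nonzero `v_small` shows how a fast drift alone reshuffles the speckle pattern between successive probe pulses, which is the mechanism behind the false alarms discussed in Sec. II.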
The final expression should also be convolved with the detector impulse response. To obtain the signal without external perturbations, one can simply set the input data for the light source, put them into Eq. (18), and choose the configuration of the scattering centers, the distributions of the amplitudes, and the frequency drift on the two scales; the external perturbation described by the term $\Delta{l}_{i}(t)$ is then zero. In order to obtain the result of measurements of the $\Phi$-OTDR sensing system with an external perturbation, one needs to set its amplitude and envelope similar to those in Eq. (11). In order to verify the suggested model and its predictions, we compare the numerical results with experiments for our three laser sources. The results of simulations (see Fig. 3) of the space-time field distribution ${I}(l,t)$ for the three different lasers allow one to conclude that the model reproduces the experimental pictures with sufficiently high precision. Because of the random nature of optical signals, we use the following estimation procedure. One can directly compare the standard deviation of the signal from numerical simulations and experiments, calculated as follows: $$\langle{\rm STD}\rangle_{L}=\frac{1}{L}\sum_{i=1}^{L}\sqrt{\frac{1}{M_{k}}\sum_{k=1}^{M_{k}}\left[I(l_{i},t_{k})-\langle{I(l_{i},t)}\rangle\right]^{2}},$$ (19) where the average value of the signal is calculated according to the following expression: $$\langle{I(l_{i},t)}\rangle=\frac{1}{M_{k}}\sum_{k=1}^{M_{k}}{I(l_{i},t_{k})}.$$ (20) The detailed results are presented in Table 2. We present the standard deviation calculated using Eq. (19) for long-term and short-term frequency drifts. The comparison of numerical simulations and experiments demonstrates very close results, which indicates the correctness of the model. IV Discussions and conclusions Now we can also formulate quantitative criteria for the light source. Towards this end, we study the dependence of the rate of correctly recognized events on the frequency-drift parameters of the light source. 
On the basis of this study, we can conclude that the light source may have a smooth drift of the central frequency of up to 100 MHz/min, while the short-time drift should be below the level of 1 MHz. This is presented in Fig. 4. Acknowledgements We thank S. Nikitin for useful comments. This work was supported by the Ministry for Education and Science of the Russian Federation within the Federal Program under Contract No. 14.579.21.0104 (A.K.F.). References (1) For a review, see X. Bao and L. Chen, Sensors 12, 8601 (2012). (2) H.F. Taylor and C.E. Lee, U.S. Patent 5,194,847, March 16, 1993. (3) J. Park, W. Lee, and H.F. Taylor, Proc. SPIE 3555, 49 (1998). (4) K.N. Choi and H.F. Taylor, IEEE Photon. Technol. Lett. 15, 386 (2003). (5) J.C. Juarez, E.W. Maier, K.N. Choi, and H.F. Taylor, J. Lightwave Technol. 23, 2081 (2005). (6) Y.J. Rao, J. Luo, Z.L. Ran, J.F. Yue, X.D. Luo, and Z. Zhou, Proc. SPIE 7503, 75031O (2009). (7) Y. Lu, T. Zhu, L. Chen, and X. Bao, J. Lightwave Technol. 27, 3243 (2010). (8) H.F. Martins, S. Martin-Lopez, P. Corredera, P. Salgado, O. Frazão, and M. González-Herráez, Opt. Lett. 38, 872 (2013). (9) H.F. Martins, S. Martin-Lopez, P. Corredera, M.L. Filograno, O. Frazão, and M. González-Herráez, J. Lightwave Technol. 31, 3631 (2013). (10) E.T. Nesterov, A.A. Zhirnov, K.V. Stepanov, A.B. Pnev, V.E. Karasik, Ya.A. Tezadov, E.V. Kondrashin, and A.B. Ushakov, J. Phys. Conf. Ser. 584, 012028 (2015). (11) A.E. Alekseev, Ya.A. Tezadov, and V.T. Potapov, J. Commun. Tech. Electron. 56, 1490 (2011). (12) A.E. Alekseev, Ya.A. Tezadov, and V.T. Potapov, Quantum Electron. 42, 76 (2012). (13) A.E. Alekseev, Ya.A. Tezadov, and V.T. Potapov, Tech. Phys. Lett. 38, 89 (2012). (14) Q. Li, C. Zhang, L. Li, and X. Zhong, Optik 125, 2099 (2014). (15) L.B. Liokumovich, N.A. Ushakov, O.I. Kotov, M.A. Bisyarin, and A.H. Hartog, J. Lightwave Technol. 33, 3660 (2015). (16) X. Zhong, C. Zhang, L. Li, S. Liang, Q. Li, Q. Lü, X. Ding, and Q. Cao, Appl. Opt. 53, 4645 (2014). 
(17) A.B. Pnev, A.A. Zhirnov, K.V. Stepanov, E.T. Nesterov, D.A. Shelestov, and V.E. Karasik, J. Phys. Conf. Ser. 584, 012016 (2015). (18) Parameters of the light sources used in the experimental setups and simulations correspond to commercially available lasers with the following values of the central frequency: $193\,490$ GHz for Sample (1), $193\,460$ GHz for Sample (2), and $193\,400$ GHz for Sample (3). (19) J.W. Goodman, Statistical Optics (Wiley-Interscience, New York, 1985). (20) X. Chen, X. Zhang, B. Yan, J. Li, X. Chen, G. Tu, Y. Liang, Z. Ni, B. Culshaw, and F. Dong, Proc. SPIE 8421, 8421A3 (2012). (21) We note that the power of the light pulse after the amplifier is about 300 mW. (22) A.K. Fedorov, M.N. Anufriev, A.A. Zhirnov, K.V. Stepanov, E.T. Nesterov, D.E. Namiot, V.E. Karasik, and A.B. Pnev, Rev. Sci. Instrum. 87, 036107 (2016). (23) C.A. Bunge, R. Kruglov, and H. Poisel, J. Lightwave Technol. 24, 3137 (2006).
HZPP-9909 July 30, 1999 The Influence of Multiplicity Distribution on the Erraticity Behavior of Multiparticle Production 111  This work is supported in part by the Natural Science Foundation of China (NSFC) under Grant No.19575021. Liu Zhixu        Fu Jinghua        Liu Lianshou Institute of Particle Physics, Huazhong Normal University, Wuhan 430079 China Tel: 027 87673313   FAX: 027 87662646   email: [email protected] Abstract The origin of the erraticity behaviour observed recently in experiment is studied in some detail. The negative-binomial distribution is used to fit the experimental multiplicity distribution. It is shown that, with the multiplicity distribution taken into account, the experimentally observed erraticity behaviour can be well reproduced using a flat probability distribution. The dependence of the erraticity behaviour on the width of the multiplicity distribution is studied. PACS number: 13.85 Hd Keywords: Multiparticle production,  Negative-binomial distribution,  Erraticity Since the finding of unexpectedly large local fluctuations in a high multiplicity event recorded by the JACEE collaboration [1], the investigation of non-linear phenomena in high energy collisions has attracted much attention [2]. The anomalous scaling of factorial moments, defined as $$\displaystyle F_{q}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{M}\sum\limits_{m=1}^{M}\frac{\langle n_{m}(n_{m}-1)\cdots(n_{m}-q+1)\rangle}{\langle n_{m}\rangle^{q}}$$ (1) at diminishing phase space scale, or increasing division number $M$ of phase space [3], $$\displaystyle F_{q}\propto M^{-\phi_{q}},$$ (2) called intermittency (or fractality), has been proposed for this purpose. The average $\langle\cdots\rangle$ in Eq.(1) is over the whole event sample and $n_{m}$ is the number of particles falling in the $m$th bin. This kind of anomalous scaling has been observed in various experiments [4][5]. 
A recent new development along this direction is the event-by-event analysis [6][7]. An important step in this kind of analysis was made by Cao and Hwa [8], who first pointed out the importance of the fluctuation in event space of the event factorial moments, defined as $$\displaystyle F_{q}^{({\rm e})}$$ $$\displaystyle=$$ $$\displaystyle\frac{\frac{1}{M}\sum\limits_{m=1}^{M}n_{m}(n_{m}-1)\cdots(n_{m}-q+1)}{\left(\frac{1}{M}\sum\limits_{m=1}^{M}n_{m}\right)^{q}}.$$ (3) Their fluctuations from event to event can be quantified by the normalized moments $$C_{p,q}=\langle\Phi_{q}^{p}\rangle,\quad\Phi_{q}={F_{q}^{(e)}}\left/\langle F_{q}^{(e)}\rangle\right.,$$ (4) and by $dC_{p,q}/dp$ at $p=1$: $$\displaystyle\Sigma_{q}=\langle\Phi_{q}\ln\Phi_{q}\rangle.$$ (5) If there is a power-law behavior of the fluctuation as the division number goes to infinity, i.e., as the resolution $\delta=\Delta/M$ becomes very small, $$C_{p,q}(M)\propto M^{\psi_{q}(p)},$$ (6) then the phenomenon is referred to as erraticity [9]. The derivative of the exponent $\psi_{q}(p)$ at $p=1$, $$\mu_{q}=\left.\frac{d}{dp}\psi_{q}(p)\right|_{p=1}=\frac{\partial\Sigma_{q}}{\partial\ln M},$$ (7) describes the anomalous scaling property of the fluctuation width and is called the entropy index. The erraticity behaviour of multiparticle final states as described above has been observed in the experimental data of 400 GeV/$c$ pp collisions from NA27 [10]. However, it has been shown [11] that the single-event factorial moment as defined in Eq.(3), using only the horizontal average over bins, cannot eliminate the statistical fluctuations well, especially when the multiplicity is low. A preliminary study shows that the experimentally observed phenomenon [10] can be reproduced by using a flat probability distribution with only statistical fluctuations [11]. 
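The event-space quantities of Eqs. (3)-(5) are straightforward to compute from per-event bin counts. The sketch below is a direct transcription, with the convention that $\Phi\ln\Phi$ is taken as 0 when $\Phi=0$ (which can occur for low-multiplicity events).

```python
import numpy as np

def event_factorial_moment(counts, q):
    # Eq. (3): horizontally averaged factorial moment of a single event;
    # counts[m] is the number of particles in bin m.
    n = counts.astype(float)
    num = n.copy()
    for j in range(1, q):
        num *= (n - j)              # n (n-1) ... (n-q+1)
    return num.mean() / n.mean() ** q

def erraticity_moments(events, q, p_values):
    # Eqs. (4)-(5): C_{p,q} = <Phi^p> and Sigma_q = <Phi ln Phi>,
    # with Phi = F_q^(e) / <F_q^(e)> averaged over the event sample.
    F = np.array([event_factorial_moment(e, q) for e in events])
    Phi = F / F.mean()
    C = {p: np.mean(Phi ** p) for p in p_values}
    safe = np.where(Phi > 0, Phi, 1.0)      # avoid log(0); 0 ln 0 -> 0
    Sigma = np.mean(np.where(Phi > 0, Phi * np.log(safe), 0.0))
    return C, Sigma
```

Repeating the calculation for several division numbers $M$ and fitting $\ln C_{p,q}$ (or $\Sigma_q$) against $\ln M$ then gives $\psi_q(p)$ and the entropy index $\mu_q$ of Eqs. (6)-(7).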
This result is preliminary in the sense that it fixed the multiplicity at 9, while the multiplicity fluctuates in the experiment, with an average of $\langle n_{\rm ch}\rangle=9.84$ [12]. Since the erraticity phenomenon is a kind of fluctuation in event space and depends strongly on the multiplicity, the fluctuation of the multiplicity in event space is expected to have an important influence on this phenomenon. In this letter this problem is discussed in some detail. The negative-binomial distribution [13] is used to fit the experimental multiplicity distribution [12]. Putting the resulting multiplicity distribution into a flat-probability-distribution model, the erraticity behaviour is obtained and compared with the experimental data. The consistency of the two shows that the erraticity behaviour observed in the 400 GeV/$c$ pp collision data from NA27 is mainly due to statistical fluctuations. The negative-binomial distribution is defined as [13] $$P_{n}=\left(\matrix{n+k-1\cr n}\right)\left(\frac{\bar{n}/k}{1+\bar{n}/k}\right)^{n}\frac{1}{(1+\bar{n}/k)^{k}},$$ (8) where $n$ is the multiplicity, $\bar{n}$ is its average over the event sample, and $k$ is a parameter related to the second-order scaled moment $C_{2}\equiv\langle n^{2}\rangle/\langle n\rangle^{2}$ through [13] $$C_{2}-1=\frac{1}{\bar{n}}+\frac{1}{k}.$$ (9) Using Eq.(8) to fit the multiplicity distribution of the 400 GeV/$c$ pp collision data from NA27, we get the parameter $k=12.76$. The result of the fit is shown in Fig.1. It can be seen that the fit is good. We then take a flat (pseudo)rapidity distribution, i.e. we let the probability for a particle to fall into each bin be equal, $p_{m}=1/M$, when the (pseudo)rapidity space is divided into $M$ bins. This means that there is no dynamical fluctuation. Let the number $N$ of particles in an event be a random number distributed according to the negative-binomial distribution Eq.(8) with $\bar{n}=9.84,k=12.76$. 
These $N$ particles are put into the $M$ bins according to the multinomial (Bernoulli) distribution $$B(n_{1},n_{2},\cdots,n_{M}|p_{1},p_{2},\cdots,p_{M})=\frac{N!}{n_{1}!\cdots n_{M}!}\,p_{1}^{n_{1}}\cdots p_{M}^{n_{M}},\qquad\sum_{m=1}^{M}n_{m}=N.$$ (10) In total 60000 events are simulated in this way, and the resulting $C_{p,q}$ are shown in Fig. 2 together with the experimental data of 400 GeV/$c$ pp collisions from NA27. It can be seen from the figures that the model results are consistent with the data, showing that the erraticity phenomenon observed in this experiment is mainly due to statistical fluctuations. In order to study the relation of the erraticity behaviour to the width of the multiplicity distribution, the same calculation has been done for $\bar{n}=9$ and $k=0.1,0.5,1.0,2.25,4.5,9,18$. These values of $k$ correspond to a diminishing width of the distribution, with $C_{2}=11.1,3.11,2.11,1.56,1.33,1.22,1.17$ respectively, cf. Fig. 3. The resulting $\ln C_{p,2}$ and $\Sigma_{2}$ as functions of $\ln M$ are shown in Figs. 4 and 5. It can be seen from the figures that the moments $C_{p,2}$ for different $p$ separate farther and the characteristic function $\Sigma_{2}$ becomes larger when the value of $k$ is smaller. This means that the single-event factorial moments fluctuate more strongly in event space when the multiplicity distribution is wider. On the other hand, the straight lines obtained from fitting the last three points of $\Sigma_{2}$ versus $\ln M$ are almost parallel for different $k$, and their slopes, the entropy indices $\mu_{2}$ (the characteristic quantity of erraticity), are insensitive to the width of the multiplicity distribution. In summary, the multiplicity distribution of the 400 GeV/$c$ pp collision data from NA27 has been fitted to the negative binomial distribution. Taking this multiplicity distribution into account, the erraticity phenomenon in a model without any dynamical fluctuation, i.e. with a flat probability distribution, has been studied. 
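The whole procedure described above (negative binomial multiplicity, then purely statistical multinomial binning) can be sketched as follows; this is our own minimal reconstruction of the model, with the event count reduced for brevity and the moments following Eqs. (3)-(5):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_flat_model(n_events, M, nbar, k, q=2, p=2.0):
    """Flat-probability model: N drawn from the negative binomial, Eq. (8),
    then thrown into M equal bins via the multinomial distribution, Eq. (10)."""
    prob = 1.0 / (1.0 + nbar / k)        # success probability in numpy's NB
    F = []
    for _ in range(n_events):
        N = rng.negative_binomial(k, prob)
        if N == 0:                        # empty event: F_q^(e) is undefined
            continue
        counts = rng.multinomial(N, [1.0 / M] * M).astype(float)
        fact = np.ones(M)
        for j in range(q):
            fact *= counts - j            # vanishes automatically when n_m < q
        F.append(fact.mean() / counts.mean() ** q)
    F = np.array(F)
    Phi = F / F.mean()
    PlnP = np.where(Phi > 0, Phi * np.log(np.where(Phi > 0, Phi, 1.0)), 0.0)
    return (Phi ** p).mean(), PlnP.mean()
```

With 60000 events, the experimental bin numbers $M$, and $(\bar{n},k)=(9.84,12.76)$, this reproduces the setup described in the text; by construction $C_{1,q}=\langle\Phi_{q}\rangle=1$, and $C_{p,q}\geq 1$ for $p>1$.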
The resulting moments $C_{p,q}$ turn out to fit the experimental data very well. This shows that the erraticity phenomenon observed in this experiment is mainly due to statistical fluctuations. The dependence of the erraticity phenomenon on the width of the multiplicity distribution is also examined. It is found that the fluctuation of the single-event factorial moments in event space becomes stronger ($C_{p,2}$ and $\Sigma_{2}$ become larger) when the multiplicity distribution is wider. On the other hand, the entropy index $\mu_{2}$ depends mainly on the average multiplicity and is insensitive to the width of the multiplicity distribution. References [1] T. H. Burnett et al., Phys. Rev. Lett. 50 (1983) 2062. [2] E. A. De Wolf, I. M. Dremin and W. Kittel, Phys. Rep. 270 (1996) 1. [3] A. Białas and R. Peschanski, Nucl. Phys. B 273 (1986) 703; ibid 308 (1988) 857. [4] N. M. Agababyan et al. (NA22), Phys. Lett. B 382 (1996) 305; ibid B 431 (1998) 451. [5] S. Wang, Z. Wang and C. Wu, Phys. Lett. B 410 (1997) 323. [6] A. Białas and B. Ziajia, Phys. Lett. B 378 (1996) 319. [7] Liu Lianshou (EMU01), Nucl. Phys. B (Proc. Suppl.) 71 (1999) 341 and references cited therein. [8] Z. Cao and R. Hwa, Phys. Rev. Lett. 75 (1995) 1268; Phys. Rev. D 54 (1996) 6674; Phys. Rev. E 56 (1997) 326; Erraticity analysis of soft production by ECOMB, University of Oregon preprint OITS 666, 1999. [9] R. C. Hwa, Acta Physica Polonica B 27 (1996) 1789. [10] Wang Shaushun and Wang Zhaomin, Phys. Rev. D 57 (1998) 3036. [11] Liu Lianshou, Fu Jinghua and Wu Yuanfang, On the Dominance of Statistical Fluctuation in the Factorial-Moment Study of Chaos in Low Multiplicity Events of High Energy Collisions, preprint HZPP-9902 (hep-ph/9903243). [12] Wang Shaoxun et al., HE&N Phys. 13 (1989) 673. [13] G. J. Alner et al. (UA5), Phys. Lett. B 160 (1985) 199. 
Figure Captions Fig. 1  Fit of the multiplicity distribution of the 400 GeV/$c$ pp collision data to the negative binomial distribution. Data taken from Ref. [12]. Fig. 2  The moments $C_{p,2}$ from the flat probability distribution model with the multiplicity distribution taken into account, compared with the 400 GeV/$c$ pp collision data taken from Ref. [12]. Fig. 3  The negative binomial distribution for different values of the parameter $k$. The average multiplicity is $\bar{n}=9$. Fig. 4  The dependence of $\ln C_{p,2}$ on $\ln M$ in the flat probability distribution model, taking the negative-binomial multiplicity distribution into account. The parameter $k$ takes the different values shown in the figure. The average multiplicity is $\bar{n}=9$. Fig. 5  The dependence of $\Sigma_{2}$ on $\ln M$ in the flat probability distribution model, taking the negative-binomial multiplicity distribution into account. The parameter $k$ takes the different values shown in the figure. The average multiplicity is $\bar{n}=9$.
Decay widths of large-spin mesons from the non-critical string/gauge duality J. Sadeghi ${}^{a,b}$ and S. Heshmatian ${}^{a}$ ${}^{a}$ Sciences Faculty, Department of Physics, Mazandaran University, P. O. Box 47415-416, Babolsar, Iran ${}^{b}$ Institute for Studies in Theoretical Physics and Mathematics (IPM), P. O. Box 19395-5531, Tehran, Iran Email: [email protected], [email protected] Abstract In this paper, we use the non-critical string/gauge duality to calculate the decay widths of large-spin mesons. Since it is believed that the string theory of QCD is not a ten dimensional theory, we expect the non-critical versions of ten dimensional black hole backgrounds to lead to better results than the critical ones. For this purpose we concentrate on confining theories and consider two different six dimensional black hole backgrounds, the near extremal $AdS_{6}$ model and the near extremal KM model, and compute the decay widths of large-spin mesons in each. We then present our results from these two non-critical backgrounds and compare them with those from the critical models and with experimental data. 1 Introduction The gauge/gravity duality establishes a correspondence between a gravitational theory in anti-de Sitter space and a gauge theory in the large $N$ limit [1-6]. An example of this correspondence is the relation between type IIB string theory in a ten dimensional background and ${\cal N}=4$ supersymmetric Yang-Mills theory on the four dimensional boundary of $AdS_{5}$. In recent years the use of this correspondence as a powerful tool to study QCD has grown, and many papers have been published in this context. For example, the dynamics of a quark moving in a strongly coupled plasma [7-16] and the jet-quenching parameter [16-24] have been investigated. In addition, the motion of a quark-antiquark pair in the quark-gluon plasma has been studied in [25-31]. 
The calculation of meson decay widths is important but hard to carry out with QCD methods because of the strong coupling, so holographic methods can help overcome these difficulties. Recently, models with various brane configurations have been introduced in critical dimensions to describe hadrons in confining backgrounds. The model introduced in [32], with a $D4/D6$ brane configuration, is one example, leading to heavy scalar and pseudoscalar mesons. The model proposed by Sakai and Sugimoto [33], with $D4/D8/\overline{D8}$ branes in a ten dimensional background, also gives a nice description of hadron physics. Many authors have used holographic methods to study hadron physics [34-36]. The decay process of mesons is a remarkable problem which has been studied within the $SS$ model, whose authors calculated the meson masses and decay rates by treating low-spin mesons as small fluctuations of the flavor branes. Of course, such models apply only to low-spin mesons and cannot describe large-spin mesons. Large-spin mesons are interesting because of their phenomenological features, so some authors have used the dual string theory description to study their decay processes in critical dimensions [37]. In that paper an interesting setup was proposed to compute the decay widths of mesons, based on a semi-classical U-shaped spinning string configuration. Due to quantum fluctuations, this string can decay into outgoing mesons by touching one or more of the flavor branes, splitting, and then reconnecting to the brane. The idea of that paper is to focus on the near-wall geometry and build the string wave function in this geometry by semi-classical quantization. The authors compared their results with the Casher-Neuberger-Nussinov model, in which the quark-antiquark pair is connected by a chromoelectric flux tube [42]. 
There is also an old model, the "Lund model", describing meson decay [38]. There are improvements of this model in the literature in which two massive quarks are connected by a massless relativistic string [39]. The resulting formula leads to a better description of the decay widths: for a decay width linear in the length, the ratio $\Gamma/M$ is no longer a constant. The decay processes of open strings [40] and closed strings [41] have also been studied before by different methods. The decay widths of both low-spin and large-spin mesons have been studied in critical dimensions [33, 37], and some calculations for low-spin mesons have been done using the non-critical string/gauge duality [43]. But the decay widths of high-spin mesons have not yet been studied in the context of non-critical duals. In holographic QCD there is the idea that string theory in fewer than ten dimensions is a good candidate for studying QCD. This motivates us to study the decay widths of large-spin mesons using non-critical versions of ten dimensional black hole backgrounds [43-49]. For this purpose, we consider two different six dimensional backgrounds. The first is the near extremal flavored $AdS_{6}$, which is dual to a four dimensional low energy effective gauge theory; mesons in the IR theory are constructed from quarks with a mass of the order of the temperature. This model is based on the near extremal $D4$ brane background with $D6$ flavor branes added. The second is the near extremal version of the Klebanov-Maldacena (KM) model [47] with a flavored $AdS_{5}\times S^{1}$ background. In this near extremal background there is a system of $D3$ branes and uncharged $D5$ branes in six dimensional string theory, and one of the flat gauge theory directions is compactified on a thermal circle in order to break supersymmetry [43]. 
The near extremal solution is dual to the four dimensional theory at finite temperature, without supersymmetry and conformal invariance. In this paper we use the semi-classical model introduced in ref. [37] and carry out the same calculations in the two non-critical dual pictures. For simplicity, we choose the flat space-time approximation. Following ref. [37], we construct the wave function for the string configuration and use it to compute the decay width. This paper is organized as follows. In section 2 we use the non-critical $AdS_{6}$ background to write an expression for the meson decay width. In section 3 we use another non-critical background, the near extremal KM model with the $AdS_{5}\times S^{1}$ black hole, and obtain another equation for the meson decay width in six dimensional string theory. In the last section, we present our numerical results and compare them with the previous models and with the experimental data of ref. [50]. We also use the modified relation between the length of the horizontal part of the string and its mass derived in [39], obtain the decay widths for the two non-critical backgrounds of sections 2 and 3, and compare them with the data. 2 Decay widths in the near extremal $AdS_{6}$ model In this paper we use two different non-critical backgrounds to calculate the meson decay widths: the near extremal $AdS_{6}$ model in this section and the near extremal Klebanov-Maldacena model with the $AdS_{5}\times S^{1}$ background in the next section. We use the method proposed in ref. [37] for the critical dimensions to calculate the decay widths. First we briefly review the model of ref. [37] and then use it for our calculations. In ref. [37], a semi-classical U-shaped spinning string configuration with two massive endpoints on the flavor brane is considered. The string is pulled toward the infrared wall and also extends along it. This configuration is equivalent to a high-spin meson with massive quarks. 
The string can decay into outgoing mesons by touching one or more of the flavor branes, splitting, and then reconnecting to the branes due to quantum fluctuations. The idea of that paper is to focus on the near-wall geometry and build the string wave function in this geometry by semi-classical quantization. The total wave function for a classical $U$-shaped string is [37] $$\Psi\big[\{{\mathcal{N}}_{n}\}\big]=\prod_{n}\Psi_{n}\big[{\mathcal{N}}_{n}(X^{M})\big]\,,$$ (1) where ${\mathcal{N}}_{n}(X^{M})$ are normal coordinates, $X^{M}$ are the target space coordinates and $\Psi_{n}\big[{\mathcal{N}}_{n}(X^{M})\big]$ are the wave functions of the normal modes ${\mathcal{N}}_{n}$. Due to quantum fluctuations, the string may touch the flavor brane in one or more points with the probability given by [37] $${\mathcal{P}}_{\text{fluct}}=\int^{\prime}_{\{{\mathcal{N}}_{n}\}}\big|\,\Psi\big[\{{\mathcal{N}}_{n}\}\big]\,\big|^{2}\,,$$ (2) where only the configurations satisfying $$\max\big(U(\sigma)\big)\geq U_{B}$$ (3) are integrated over. The splitting probability for the string is given by [37] $${\cal P}_{\text{split}}:=\frac{1}{T_{\text{eff}}}\,\frac{\Gamma_{\text{open}}}{L}\,.$$ (4) Using this relation, the total decay width takes the form $$\Gamma=T_{\text{eff}}{\cal P}_{\text{split}}\,\times\,\int^{\prime}_{\{{\mathcal{N}}_{n}\}}\big|\,\Psi\big[\{{\mathcal{N}}_{n}\}\big]\,\big|^{2}\,K\big[\{{\mathcal{N}}_{n}\}\big]\,,$$ (5) where $K\big[\{{\mathcal{N}}_{n}\}\big]$ is a factor with the dimension of length which measures the size of the string segments intersecting the flavor brane. 
Finally, the authors of ref. [37] obtained the approximate decay width $$\Gamma_{\text{approx}}=\Big(T_{\text{eff}}\,{\mathcal{P}}_{\text{split}}\,\times\,L\,\times\,\kappa_{\text{max}}\Big)\,\times\,{\cal P}_{\text{fluct}}\,,$$ (6) where $\kappa_{\text{max}}$ is dimensionless. The main quantity to be computed for the decay width is the fluctuation probability ${\cal P}_{\text{fluct}}$. We now use the procedure proposed in ref. [37] to evaluate the decay widths. First, we use the model introduced above to calculate the meson decay width in the near extremal $AdS_{6}$ black hole background [43]. This background is constructed from near extremal color $D4$-branes with additional $D4/\overline{D4}$ flavor branes. In order to have a non-supersymmetric gauge theory with massless fundamentals, $D4$ flavor branes are added to the background, extended along the Minkowski directions and stretched along the radial direction. The background metric has the form $$ds^{2}_{6}=\left(\frac{U}{R_{AdS}}\right)^{2}dx_{1,3}^{2}+\left(\frac{R_{AdS}}{U}\right)^{2}\frac{dU^{2}}{f(U)}+\left(\frac{U}{R_{AdS}}\right)^{2}f(U)d\theta^{2},$$ (7) $$F_{(6)}=Q_{c}\left(\frac{U}{R_{AdS}}\right)^{4}dx_{0}\wedge dx_{1}\wedge dx_{2}\wedge dx_{3}\wedge dU\wedge d\theta,$$ (8) $$e^{\phi}=\frac{2\sqrt{2}}{\sqrt{3}Q_{c}}\,,\qquad R_{AdS}^{2}=\frac{15}{2},$$ (9) where $\phi$ is a constant dilaton, $F_{(6)}$ is the RR six-form field strength and $$f(U)=1-\left(\frac{U_{\Lambda}}{U}\right)^{5}.$$ (10) The coordinate $\theta$ should be periodic in order to avoid a conical singularity at the horizon, $$\theta\sim\theta+\delta\theta\,,\qquad\delta\theta=\frac{4\pi R_{AdS}^{2}}{5U_{\Lambda}}\,,$$ (11) where $L_{\Lambda}\equiv\delta\theta$ is the size of 
the thermal circle, which should be small in order to have a dual four dimensional low energy effective gauge theory [32]. The mass scale for this non-critical metric is $$M_{\Lambda}=\frac{2\pi}{\delta\theta}=\frac{5}{2}\frac{U_{\Lambda}}{R_{AdS}^{2}}.$$ (12) At leading order, the fluctuations of the horizontal part of the string on the wall experience a flat geometry. To see when the flat approximation is valid, we use the coordinate [37] $$\eta^{2}=\frac{U-U_{\Lambda}}{U_{\Lambda}}\,.$$ (13) Expanding the metric (7) around $\eta=0$ to quadratic order gives the following expression for the $AdS_{6}$ metric: $${\rm d}s^{2}\sim\left(\frac{U_{\Lambda}}{R_{AdS}}\right)^{2}(1+2\eta^{2})(\eta_{\mu\nu}{\rm d}X^{\mu}{\rm d}X^{\nu})+\frac{4}{5}R_{AdS}^{2}{\rm d}\eta^{2}+5\left(\frac{U_{\Lambda}}{R_{AdS}}\right)^{2}\eta^{2}{\rm d}\theta^{2}.$$ (14) We then consider the following solution for a string rotating at the IR wall: $$T=L\tau\,,\quad X^{1}=L\sin\tau\sin\sigma\,,\quad X^{2}=L\cos\tau\sin\sigma\,,\quad U=U_{\Lambda}\,,$$ (15) where $L$ is the length of the horizontal part of the string. We quantize the fluctuations around the linearized metric (14) using the Polyakov formulation. The Polyakov string action in a curved background is given by $$S=\frac{1}{2\pi\alpha^{\prime}}\int\!{\rm d}\tau\int_{0}^{2\pi\sqrt{\alpha^{\prime}}}\!{\rm d}\sigma\,G_{MN}\left[\dot{X}^{M}\dot{X}^{N}-X^{M}{}^{\prime}X^{N}{}^{\prime}\right]\,.$$ (16) As explained in ref. [37], the fluctuations along the wall directions are irrelevant for constructing the wave function, so we only consider the fluctuations in the $\eta$ and $X^{\mu}$ directions (transverse to the wall). 
Expanding the above action around the solution (15) and keeping terms quadratic in $\eta$, we find the action $$S=\frac{1}{2\pi\alpha^{\prime}}\left(\frac{U_{\Lambda}}{R_{AdS}}\right)^{2}\int\!{\rm d}\tau{\rm d}\sigma\,\frac{4}{5}\frac{R_{AdS}^{4}}{U_{\Lambda}^{2}}\left[\left(\dot{\eta}^{2}-{\eta^{\prime}}^{2}\right)-b\,\cos^{2}(\sigma)\left(1+2\eta^{2}\right)\,\right]+\left[\left(1+2\eta^{2}\right)\left(\delta\dot{X}^{\mu}\delta\dot{X}^{\nu}\eta_{\mu\nu}-\delta X^{\mu^{\prime}}\delta X^{\nu^{\prime}}\eta_{\mu\nu}\right)\right]\,,$$ (17) where $b$ is the dimensionless parameter $$b\equiv\frac{5}{2}\frac{\pi^{2}L^{2}U_{\Lambda}^{2}}{R_{AdS}^{4}}\,,$$ (18) which determines the effect of the curvature. If $b\ll 1$, the string fluctuations are small enough that we can use the flat space approximation, with the coordinate $$\eta=\sqrt{\frac{5}{4}}\frac{U_{\Lambda}}{R_{AdS}^{2}}\,z\,,$$ (19) which brings the metric (14) into the conformally flat form $${\rm d}s^{2}\sim\left(\frac{U_{\Lambda}}{R_{AdS}}\right)^{2}\Big(\eta_{\mu\nu}{\rm d}X^{\mu}{\rm d}X^{\nu}+{\rm d}z^{2}\Big)+5\left(\frac{U_{\Lambda}}{R_{AdS}}\right)^{2}\eta^{2}{\rm d}\theta^{2}\,.$$ (20) Again expanding the Polyakov action for the fluctuations in the directions transverse to the wall and using $T=L\tau$, we find $$S_{\text{fluct}}=\frac{L}{2\pi\alpha^{\prime}_{\text{eff(1)}}}\int\!{\rm d}T{\rm d}\sigma\,\left[-(\partial_{T}z)^{2}+\frac{1}{L^{2}}(\partial_{\sigma}z)^{2}\right]\,,$$ (21) where we have neglected the fluctuations in the directions along the wall. 
In the above formula, the effective string coupling for the near extremal $AdS_{6}$ background is $$\alpha^{\prime}_{\text{eff(1)}}=\alpha^{\prime}\left(\frac{R_{AdS}}{U_{\Lambda}}\right)^{2}\,,$$ (22) which is obtained from the following relation for the non-critical string stretching close to the horizon of the $AdS_{6}$ black hole [43]: $$T_{\text{eff(1)}}=\frac{1}{2\pi\alpha^{\prime}}\sqrt{g_{00}g_{xx}}\Big|_{\text{wall}}=\frac{1}{2\pi\alpha^{\prime}}\left(\frac{U_{\Lambda}}{R_{AdS}}\right)^{2}.$$ (23) Then, imposing Dirichlet boundary conditions on the fluctuations $z(\sigma,\tau)$, we can write $$z(\sigma,\tau)=\sum_{n>0}z_{n}\cos(n\sigma)\,.$$ (24) Putting this into the action (21) and integrating over the $\sigma$ coordinate, we obtain $$S_{\text{fluct}}=\frac{L}{2\alpha^{\prime}_{\text{eff(1)}}}\int\!{\rm d}T\,\left[\sum_{n>0}\left(-(\partial_{T}z_{n})^{2}+\frac{n^{2}}{L^{2}}z_{n}^{2}\right)\right]$$ (25) for the fluctuations in the $z$ direction. From this equation we can easily see that the system is equivalent to an infinite number of linear harmonic oscillators with frequencies $n/L$ and masses $L/\alpha^{\prime}_{\text{eff(1)}}$. This result has the same form as in the critical setup of ref. [37]; the important difference is in the expression for $\alpha^{\prime}_{\text{eff(1)}}$ (equation (22)), which differs between the critical and non-critical cases. The values of $U_{\Lambda}$ and $R_{AdS}^{2}$ also differ from the corresponding critical values. 
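As a consistency check (our own remark, using only the standard harmonic oscillator ground state), the mode masses and frequencies read off from the action (25) already fix the Gaussian widths of the ground state wave functions quoted below: for a mode of mass $m_{n}=L/\alpha^{\prime}_{\text{eff(1)}}$ and frequency $\omega_{n}=n/L$, $$\Psi_{0}(z_{n})=\left(\frac{m_{n}\omega_{n}}{\pi}\right)^{1/4}\exp\left(-\frac{m_{n}\omega_{n}}{2}\,z_{n}^{2}\right),\qquad m_{n}\omega_{n}=\frac{L}{\alpha^{\prime}_{\text{eff(1)}}}\cdot\frac{n}{L}=\frac{n}{\alpha^{\prime}_{\text{eff(1)}}}\,,$$ in units $\hbar=1$, so the entire dependence on the background enters through $\alpha^{\prime}_{\text{eff(1)}}$.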
The wave function in factorized form is [37] $$\Psi(\{z_{n}\},\{x_{n}\})=\Psi_{\text{long}}(\{x_{n}\})\times\Psi_{\text{sphere}}(\{y_{n}\})\times\Psi_{\theta}(\{\theta_{n}\})\times\Psi_{\text{trans}}(\{z_{n}\})\,,$$ (26) where $\Psi_{\text{long}}=\Psi_{\text{sphere}}=\Psi_{\theta}=1$, because only the fluctuations transverse to the wall contribute to the fluctuation probability, and those in the other directions are integrated out. So the wave function for the transverse directions is written as [37] $$\Psi[\{z_{n}\}]=\prod_{n=1}^{\infty}\Psi_{0}(z_{n})\,.$$ (27) From equation (25) we can write the following wave functions for the string modes, in analogy with harmonic oscillators: $$\Psi_{0}(z_{n})=\left(\frac{n}{\pi\alpha^{\prime}_{\text{eff(1)}}}\right)^{1/4}\exp\left(-\frac{n}{2\alpha^{\prime}_{\text{eff(1)}}}\,z_{n}^{2}\right)\,.$$ (28) This equation is also similar to the critical case [37], but differs in $\alpha^{\prime}_{\text{eff(1)}}$. All oscillators are in their ground state and there is no relevant excited mode. There is also the condition [37] $$\sum_{n>0}\big|z_{n}\big|\leq z_{B}\,,$$ (29) which means that even if all the modes are added constructively, the total amplitude is still smaller than $z_{B}$. 
This condition leads to the following upper bound on the fluctuation probability [37]: $${\cal P}^{\text{max}}_{\text{fluct}}=1-\idotsint\limits_{\sum_{n>0}|z_{n}|\leq z_{B}}\,\prod_{n=1}^{\infty}{\rm d}z_{n}\,\big|\Psi(\{z_{n}\})\big|^{2}\,.$$ (30) There is also another condition, for the string not to touch the brane, which leads to a lower bound on the fluctuation probability [37]: $${\mathcal{P}}^{\text{min}}_{\text{fluct}}=1-\lim_{N\rightarrow\infty}\,\int_{0}^{z_{B}}\!{\rm d}z_{1}\int_{0}^{z_{B}}\!{\rm d}z_{2}\cdots\int_{0}^{z_{B}}\!{\rm d}z_{N}\,\,\big|\Psi(\{z_{n}\})\big|^{2}\ .$$ (31) The authors of ref. [37] evaluated this integral numerically and fitted their result to a Gaussian, $${\mathcal{P}}^{\text{min}}_{\text{fluct}}\approx\exp\left(-1.3\frac{z_{B}^{2}}{\alpha^{\prime}_{\text{eff}}}\right)\,,$$ (32) obtained from the plot of ${\mathcal{P}}^{\text{min}}_{\text{fluct}}$ in terms of $z_{B}/\sqrt{\alpha^{\prime}_{\text{eff}}}$. Following the same procedure, we obtain $${\mathcal{P}}^{\text{min}}_{\text{fluct}}\approx\exp\left(-1.3\frac{z_{B}^{2}}{\alpha^{\prime}_{\text{eff(1)}}}\right).$$ (33) This equation is the same as in ref. [37], except that $\alpha^{\prime}_{\text{eff}}$ is replaced by $\alpha^{\prime}_{\text{eff(1)}}$. Putting equation (33) into equation (6), we find the decay width of large-spin mesons in the flat space approximation: $$\Gamma_{\text{flat}}=\Big(\text{const}.\times T_{\text{eff}}\,{\mathcal{P}}_{\text{split}}\,\times\,L\Big)\,\times\,\exp\left(-1.3\frac{z_{B}^{2}}{\alpha^{\prime}_{\text{eff(1)}}}\right)\,.$$ (34) This is the decay width obtained using the near extremal $AdS_{6}$ black hole background. 
The difference between our result and the result in critical dimensions (ref. [37]) is the precise form of the exponent: since the effective string coupling is different in the two backgrounds, the decay widths differ. The same kind of difference exists in the case of the Lund model [38]. We present our numerical results in the last section. Equation (34) shows that the ratio $\Gamma/L$ is constant on a given Regge trajectory, just like the results of ref. [37] and the Lund model. But the experimental data do not support this result exactly. As mentioned in ref. [37], this deviation can be explained by the fact that Regge trajectories in nature are not straight lines, and one should consider the effect of the two massive endpoints [39]: $$\frac{L}{M}=\frac{2}{\pi\,T_{\text{eff}}}-\frac{m_{1}+m_{2}}{2T_{\text{eff}}M}+{\cal O}\left(\frac{m_{i}^{2}}{M^{2}}\right)\,.$$ (35) In ref. [37] this relation was applied to the decay rates. The authors showed that for $\Gamma$ linear in $L$, the ratio $\Gamma/M$ is not a constant: as $M$ increases, this ratio increases and approaches a universal value for large $M$. So, equation (35) leads to a better result for the decay width, compatible with the experimental data. We also use this equation together with equation (22) to find the decay widths for the $a$ and $f$ mesons. We discuss our results in the last section. 3 Decay widths in the near extremal $KM$ model In this section we use another non-critical background to compute the decay widths of large-spin mesons: the near extremal version of the Klebanov-Maldacena model, $AdS_{5}\times S^{1}$. The system is composed of $D3$ color branes and uncharged $D5$ flavor branes in six dimensional non-critical string theory. Here one of the flat gauge theory directions is compactified on a thermal circle. 
This background is dual to a four dimensional field theory at finite temperature with fundamental flavors. The $AdS_{5}\times S^{1}$ metric is given by [43] $$ds^{2}=\left(\frac{U}{R^{\prime}_{AdS}}\right)^{2}\left(dx^{2}_{1,2}+\left(1-\left(\frac{U_{\Lambda}}{U}\right)^{4}\right)d\theta^{2}\right)+\left(\frac{R^{\prime}_{AdS}}{U}\right)^{2}\frac{dU^{2}}{1-\left(\frac{U_{\Lambda}}{U}\right)^{4}}+R_{S^{1}}^{2}d\varphi^{2}\,,$$ (36) with $R^{\prime}_{AdS}=\sqrt{6}$, $R_{S^{1}}^{2}=\frac{4Q_{c}^{2}}{3Q_{f}^{2}}$, $U_{\Lambda}^{4}=2b_{1}R_{AdS}^{{}^{\prime}4}$ and $e^{\phi_{0}}=\frac{4}{3Q_{f}}$. The RR 5-form field strength is $$F_{5}=Q_{c}\left(\frac{U}{R^{\prime}_{AdS}}\right)^{3}dx^{0}\wedge dx^{1}\wedge dx^{2}\wedge d\theta\wedge dU\,,$$ (37) and the period of the compact direction $\theta$ is $$\theta\sim\theta+\frac{\pi R_{AdS}^{{}^{\prime}2}}{U_{\Lambda}}\,.$$ (38) Again, we use the coordinate of equation (13) to expand the metric (36) around $\eta=0$, just as in the previous section. We find $${\rm d}s^{2}\sim\left(\frac{U_{\Lambda}}{R^{\prime}_{AdS}}\right)^{2}(1+2\eta^{2})(\eta_{\mu\nu}{\rm d}X^{\mu}{\rm d}X^{\nu})+R_{AdS}^{{}^{\prime}2}{\rm d}\eta^{2}+4\left(\frac{U_{\Lambda}}{R^{\prime}_{AdS}}\right)^{2}\eta^{2}{\rm d}\theta^{2}+R_{S^{1}}^{2}d\varphi^{2}.$$ (39) We then consider fluctuations in the directions transverse to the wall and expand the Polyakov action (16) around the solution (15). 
Keeping terms quadratic in $\eta$, we obtain the action $$S=\frac{1}{2\pi\alpha^{\prime}}\left(\frac{U_{\Lambda}}{R^{\prime}_{AdS}}\right)^{2}\int\!{\rm d}\tau{\rm d}\sigma\,\frac{R_{AdS}^{{}^{\prime}4}}{U_{\Lambda}^{2}}\left[\left(\dot{\eta}^{2}-{\eta^{\prime}}^{2}\right)-b^{\prime}\,\cos^{2}(\sigma)\left(1+2\eta^{2}\right)\,\right]+\left[\left(1+2\eta^{2}\right)\left(\delta\dot{X}^{\mu}\delta\dot{X}^{\nu}\eta_{\mu\nu}-\delta X^{\mu^{\prime}}\delta X^{\nu^{\prime}}\eta_{\mu\nu}\right)\right]\,,$$ (40) where the dimensionless parameter $b^{\prime}$ is $$b^{\prime}\equiv\frac{2L^{2}U_{\Lambda}^{2}}{R_{AdS}^{{}^{\prime}4}}\,.$$ (41) For $b^{\prime}\ll 1$ the string fluctuations are small, and we can use the flat space approximation. We then use the coordinate $$z=\frac{R_{AdS}^{{}^{\prime}2}}{U_{\Lambda}}\,\eta\,,$$ (42) to write the metric (39) in the conformally flat form $${\rm d}s^{2}\sim\left(\frac{U_{\Lambda}}{R^{\prime}_{AdS}}\right)^{2}\Big(\eta_{\mu\nu}{\rm d}X^{\mu}{\rm d}X^{\nu}+{\rm d}z^{2}\Big)+\left(\frac{U_{\Lambda}}{R^{\prime}_{AdS}}\right)^{2}\eta^{2}{\rm d}\theta^{2}+R_{S^{1}}^{2}d\varphi^{2}.$$ (43) Again expanding the Polyakov action for the fluctuations in the transverse directions and using $T=L\tau$, we find $$S_{\text{fluct}}=\frac{L}{2\pi\alpha^{\prime}_{\text{eff(2)}}}\int\!{\rm d}T{\rm d}\sigma\,\left[-(\partial_{T}z)^{2}+\frac{1}{L^{2}}(\partial_{\sigma}z)^{2}\right]\,,$$ (44) where $$\alpha^{\prime}_{\text{eff(2)}}=\alpha^{\prime}\left(\frac{R^{\prime}_{AdS}}{U_{\Lambda}}\right)^{2}$$ (45) is the effective string coupling for the near extremal $KM$ model. Because the $AdS$ radii of the two non-critical backgrounds differ, this equation is not the same as equation (22). 
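This difference can be made explicit with a small numerical comparison of the resulting fluctuation probabilities (an illustration of ours that assumes, purely for the sake of the comparison, a common $\alpha^{\prime}$, $U_{\Lambda}$ and brane position $z_{B}$ in both backgrounds; the actual calculation fixes the scales in each background separately):

```python
import math

R2_ADS6 = 15.0 / 2.0    # R_AdS^2 of the near extremal AdS_6 model, Eq. (9)
R2_KM = 6.0             # R'_AdS^2 of the near extremal KM model

def p_fluct(R2, x):
    """exp(-1.3 z_B^2 / alpha'_eff) with alpha'_eff = alpha' R^2 / U_Lambda^2.

    x bundles the background-independent combination z_B^2 U_Lambda^2 / alpha'.
    """
    return math.exp(-1.3 * x / R2)
```

Under this assumption the larger $AdS$ radius of the $AdS_{6}$ model gives a larger effective coupling, hence a weaker exponential suppression of the fluctuation probability than in the KM model.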
Putting equation (24) for the fluctuations $z(\sigma,\tau)$ into the action (44) and integrating over the $\sigma$ coordinate, we find $$S_{\text{fluct}}=\frac{L}{2\alpha^{\prime}_{\text{eff(2)}}}\int\!{\rm d}T\,\left[\sum_{n>0}\left(-(\partial_{T}z_{n})^{2}+\frac{n^{2}}{L^{2}}z_{n}^{2}\right)\right]\,.$$ (46) This action again describes an infinite number of linear harmonic oscillators, with frequencies $n/L$ and masses $L/\alpha^{\prime}_{\text{eff(2)}}$. Following the procedure of the previous section, we find the string fluctuation probability $${\mathcal{P}}^{\text{min}}_{\text{fluct}}\approx\exp\left(-1.3\frac{z_{B}^{2}}{\alpha^{\prime}_{\text{eff(2)}}}\right)\,.$$ (47) Inserting equation (47) into equation (6), we obtain the decay width of large-spin mesons in the near extremal $KM$ model, $$\Gamma_{\text{flat}}=\Big(\text{const}.\times T_{\text{eff}}\,{\mathcal{P}}_{\text{split}}\,\times\,L\Big)\,\times\,\exp\left(-1.3\frac{z_{B}^{2}}{\alpha^{\prime}_{\text{eff(2)}}}\right)\,,$$ (48) where $z_{B}$ is the position of the flavor brane. This equation holds in the flat space approximation and is similar to the near extremal $AdS_{6}$ result and to that of ref. [37]; the results differ only in the precise form of the exponent, which depends on the effective string coupling. This is the decay width obtained using the near extremal $KM$ model with the $AdS_{5}\times S^{1}$ black hole background. We present our numerical results in the last section. 4 Results and discussion In this section we present the numerical results for the decay widths and compare them with the models of [37] and the experimental data of ref. [50]. As mentioned before, the difference between equations (34) and (48) and the critical model [37] lies in the different effective string couplings. We use these equations to obtain the decay widths numerically. 
For this purpose, we put the values $R_{AdS}=\sqrt{\frac{15}{2}}$, $R^{\prime}_{AdS}=\sqrt{6}$ and $T_{\text{eff}}\approx 0.177~{}\text{GeV}^{2}$ into equations (34) and (48) and use equations (22) and (45) to plot the decay width (fig. 1). From this diagram, we can see that the decay widths of mesons in the non-critical backgrounds of sections 2 and 3 are larger than in the critical model. Then we plot $\Gamma/M$ in terms of $M$ for two large-spin mesons, $a$ and $f$, by using a sigmoidal fitting of the experimental data of ref. [50] (see fig. 2). From this diagram, we can see that the decay widths in nature are not constant on the same Regge trajectory. The decay widths behave like equation (35), in which the effect of the two massive endpoints of the string is considered. Also, we can use equation (35) for the two non-critical models of sections 2 and 3 and the critical model [37] to plot $\Gamma/M$ in terms of $M$ (fig. 3). In this figure we can see that for the $a$ trajectory there is good agreement between the $AdS_{6}$ diagram and the experimental data for all spin values (the left diagram). For the $f$ trajectory our results deviate from the experimental ones at low spins, where the fitting of the experimental data agrees better with the critical model of ref. [37]; at high spins on the $f$ trajectory, however, the $AdS_{5}\times S^{1}$ diagram is more compatible with the experimental data (the right diagram).
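Assuming that the effective coupling of equation (22) has the form analogous to (45), $\alpha^{\prime}_{\text{eff(1)}}=\alpha^{\prime}(R_{AdS}/U_{\Lambda})^{2}$ (equation (22) itself is not reproduced in this excerpt, so this is an assumption), the ratio of the two effective couplings depends only on the $AdS$ radii quoted above. A short check:

```python
import math

R_AdS = math.sqrt(15 / 2)   # near extremal AdS_6 radius (section 2)
R_AdS_p = math.sqrt(6)      # near extremal KM (AdS_5 x S^1) radius (section 3)

# Assumed form of Eq. (22): alpha'_eff(1) = alpha' * (R_AdS/U_Lambda)^2.
# The ratio alpha'_eff(2)/alpha'_eff(1) is then independent of both
# alpha' and U_Lambda and is fixed by the radii alone:
ratio = R_AdS_p**2 / R_AdS**2   # = 6 / 7.5

# A smaller effective coupling gives a stronger exponential suppression
# exp(-1.3 z_B^2 / alpha'_eff) at the same flavor-brane position z_B.
```

So the two non-critical exponents differ only by this fixed radius ratio, which is what separates the curves in fig. 1.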
From this diagram it is easy to see that the non-critical models lead to larger values of the decay width. Then we used another equation from ref. [39] and plotted the decay width in terms of the masses of the meson states. We compared these results with the fitting of the experimental data of ref. [50] for the mesons $a$ and $f$ (fig. 3). From these diagrams one finds that for the $a$ meson the $AdS_{6}$ background leads to a better result. For the $f$ meson, the near extremal KM model agrees better with the data only at large spins; for lower spins, the critical model of ref. [37] is closer to the experimental data. References [1] Maldacena J M 1998 The large N limit of superconformal field theories and supergravity Adv. Theor. Math. Phys. 2 231. [2] Witten E 1998 Anti-de Sitter space and holography Adv. Theor. Math. Phys. 2 253. [3] Schwarz J H 1999 Introduction to M Theory and AdS/CFT Duality (Lecture Notes in Physics vol 525) (Berlin: Springer) pp 1–21 (arXiv:hep-th/9812037). [4] Douglas M R and Randjbar-Daemi S 1999 Two lectures on AdS/CFT correspondence arXiv:hep-th/9902022. [5] Petersen J L 1999 Introduction to the Maldacena conjecture on AdS/CFT Int. J. Mod. Phys. A 14 3597. [6] Klebanov I R 2000 TASI lectures: introduction to the AdS/CFT correspondence arXiv:hep-th/0009139. [7] Herzog C P, Karch A, Kovtun P, Kozcaz C and Yaffe L G 2006 Energy loss of a heavy quark moving through N = 4 supersymmetric Yang–Mills plasma J. High Energy Phys. JHEP07(2006)013 (arXiv:hep-th/0605158). [8] Herzog C P 2006 Energy loss of heavy quarks from asymptotically AdS geometries J. High Energy Phys. JHEP09(2006)032 (arXiv:hep-th/0605191). [9] Gubser S S 2006 Drag force in AdS/CFT Phys. Rev. D 74 126005. [10] Vazquez-Poritz J F 2008 Drag force at finite ’t Hooft coupling from AdS/CFT arXiv:0803.2890 [hep-th]. [11] Caceres E and Guijosa A 2006 Drag force in charged N = 4 SYM plasma J. High Energy Phys. JHEP11(2006)077.
[12] Matsuo T, Tomino D and Wen W Y 2006 Drag force in SYM plasma with B field from AdS/CFT J. High Energy Phys. JHEP10(2006)055. [13] Sadeghi J, Setare M R and Pourhassan B 2009 Drag force with different charges in STU background and AdS/CFT J. Phys. G: Nucl. Part. Phys. 36 115005. [14] Sadeghi J, Setare M R, Pourhassan B and Hashmatian S 2009 Drag force of moving quark in STU background Eur. Phys. J. C 61 527 (arXiv:0901.0217 [hep-th]). [15] Sadeghi J and Pourhassan B 2008 Drag force of moving quark at the N = 2 supergravity J. High Energy Phys. JHEP12(2008)026 (arXiv:0809.2668 [hep-th]). [16] Nakano E, Teraguchi S and Wen W Y 2007 Drag force, jet quenching and AdS/QCD Phys. Rev. D 75 085016. [17] Liu H, Rajagopal K and Wiedemann U A 2006 Calculating the jet quenching parameter from AdS/CFT Phys. Rev. Lett. 97 182301. [18] Vazquez-Poritz J F 2006 Enhancing the jet quenching parameter from marginal deformations arXiv:hep-th/0605296. [19] Caceres E and Guijosa A 2006 On drag forces and jet quenching in strongly coupled plasmas J. High Energy Phys. JHEP12(2006)068. [20] Lin F L and Matsuo T 2006 Jet quenching parameter in medium with chemical potential from AdS/CFT Phys. Lett. B 641 45. [21] Avramis S D and Sfetsos K 2007 Supergravity and the jet quenching parameter in the presence of R-charge densities J. High Energy Phys. JHEP01(2007)065. [22] Armesto N, Edelstein J D and Mas J 2006 Jet quenching at finite ’t Hooft coupling and chemical potential from AdS/CFT J. High Energy Phys. JHEP09(2006)039. [23] Edelstein J D and Salgado C A 2008 Jet quenching in heavy ion collisions from AdS/CFT AIP Conf. Proc. 1031 207 (arXiv:0805.4515). [24] Fadafan K B 2008 Medium effect and finite ’t Hooft coupling correction on drag force and jet quenching parameter arXiv:0809.1336. [25] Peeters K, Sonnenschein J and Zamaklar M 2006 Holographic melting and related properties of mesons in a quark gluon plasma Phys. Rev. D 74 106008.
[26] Liu H, Rajagopal K and Wiedemann U A 2007 An AdS/CFT calculation of screening in a hot wind Phys. Rev. Lett. 98 182301 (arXiv:hep-ph/0607062). [27] Chernicoff M, Garcia J A and Guijosa A 2006 The energy of a moving quark-antiquark pair in an N = 4 SYM plasma J. High Energy Phys. JHEP09(2006)068. [28] Erdmenger J, Evans N, Kirsch I and Threlfall E J 2008 Mesons in gauge/gravity duals Eur. Phys. J. A 35 81. [29] Sadeghi J and Heshmatian S 2010 Screening length of rotating heavy meson from AdS/CFT Int. J. Theor. Phys. 49 1811 (arXiv:0812.4816 [hep-th]). [30] Ali-Akbari M and Fadafan K B 2009 Rotating mesons in the presence of higher derivative corrections from gauge-string duality arXiv:0908.3921 [hep-th]. [31] Chernicoff M, Garcia J A and Guijosa A 2006 The energy of a moving quark-antiquark pair in an N = 4 SYM plasma J. High Energy Phys. JHEP09(2006)068. [32] Witten E 1998 Anti-de Sitter space, thermal phase transition, and confinement in gauge theories Adv. Theor. Math. Phys. 2 505 (arXiv:hep-th/9803131). [33] Sakai T and Sugimoto S 2004 Low energy hadron physics in holographic QCD (arXiv:hep-th/0412141); Sakai T and Sugimoto S 2005 More on a holographic dual of QCD (arXiv:hep-th/0507073); Hashimoto K, Sakai T and Sugimoto S 2008 Holographic baryons (arXiv:0806.3122 [hep-th]); Hata H, Sakai T, Sugimoto S and Yamato S 2007 Baryons from instantons in holographic QCD (arXiv:hep-th/0701280). [34] Kim K Y, Sin S J and Zahed I 2007 The chiral model of Sakai-Sugimoto at finite baryon density arXiv:0708.1469 [hep-th]; Bergman O, Lifschytz G and Lippert M 2007 Holographic nuclear physics JHEP 0711:056. [35] Kruczenski M, Zayas L A P, Sonnenschein J and Vaman D 2005 Regge trajectories for mesons in the holographic dual of large $N_{c}$ QCD JHEP 06 046 (arXiv:hep-th/0410035). [36] Rozali M et al. 2008 Cold nuclear matter in holographic QCD J. High Energy Phys.
JHEP 0801 053; Sin S J, Yang S and Zhou Y 2009 Comments on baryon melting in quark gluon plasma with gluon condensation (arXiv:0907.1732 [hep-th]); Callan C G et al. 1999 Baryons and string creation from the 5-brane world-volume action Nucl. Phys. B 547 127–142. [37] Peeters K, Sonnenschein J and Zamaklar M 2006 Holographic decay of large-spin mesons (arXiv:hep-th/0511044). [38] Sjöstrand T 1982 The Lund Monte Carlo for jet fragmentation Comput. Phys. Commun. 27 243; Andersson B 1998 The Lund Model Cambridge University Press; Andersson B et al. 1983 Parton fragmentation and string dynamics Phys. Rep. 97 31. [39] Ida M 1978 Relativistic motion of massive quarks joined by a massless string Prog. Theor. Phys. 59 1661. [40] Bigazzi F and Cotrone A L 2006 New predictions on meson decays from string splitting JHEP 0611:066 (arXiv:hep-th/0606059); Cotrone A L, Martucci L and Troost W 2005 String splitting and strong coupling meson decay (arXiv:hep-th/0511045); Dai J and Polchinski J 1989 The decay of macroscopic fundamental strings Phys. Lett. B 220 387; Gupta K S and Rosenzweig C 1994 Semiclassical decay of excited string states on leading Regge trajectories Phys. Rev. D 50 3368 (arXiv:hep-ph/9402263); Peeters K, Plefka J and Zamaklar M 2004 Splitting spinning strings in AdS/CFT JHEP 11 054 (arXiv:hep-th/0410275). [41] Wilkinson R B, Turok N and Mitchell D 1990 The decay of highly excited closed strings Nucl. Phys. B 332 131. [42] Casher A, Neuberger H and Nussinov S 1979 Chromoelectric flux tube model of particle production Phys. Rev. D 20 179–188. [43] Casero R, Paredes A and Sonnenschein J 2005 Fundamental matter, meson spectroscopy and non-critical string/gauge duality (arXiv:hep-th/0510110). [44] Kuperstein S and Sonnenschein J 2004 Non-critical supergravity ($d>1$) and holography JHEP 0407 049 (arXiv:hep-th/0403254). [45] Bigazzi F et al.
2005 Non-critical holography and four-dimensional CFT’s with fundamentals JHEP 0510 012 (arXiv:hep-th/0505140). [46] Giveon A, Kutasov D and Pelc O 1999 Holography for non-critical superstrings JHEP 10 035 (arXiv:hep-th/9907178). [47] Klebanov I R and Maldacena J M 2004 Superconformal gauge theories and non-critical superstrings Int. J. Mod. Phys. A 19 5003 (arXiv:hep-th/0409133). [48] Kuperstein S and Sonnenschein J 2004 Non-critical near extremal $AdS_{6}$ background as a holographic laboratory of four dimensional YM theory JHEP 0411 026. [49] Ferretti G, Kalkkinen J and Martelli D 1999 Non-critical type 0 string theories and their field theory duals Nucl. Phys. B 555 135–156 (arXiv:hep-th/9904013); Armoni A, Fuchs E and Sonnenschein J 1999 Confinement in 4d Yang-Mills theories from non-critical type 0 string theory JHEP 06 027 (arXiv:hep-th/9903090); Ghoroku K 2000 Yang-Mills theory from non-critical string J. Phys. G 26 233–244 (arXiv:hep-th/9907143); Alvarez E, Gomez C and Hernandez L 2001 Non-critical Poincaré invariant bosonic string backgrounds and closed string tachyons Nucl. Phys. B 600 185–196 (arXiv:hep-th/0011105); Murthy S 2003 Notes on non-critical superstrings in various dimensions JHEP 11 056 (arXiv:hep-th/0305197); Fotopoulos A, Niarchos V and Prezas N 2005 D-branes and SQCD in non-critical superstring theory (arXiv:hep-th/0504010); Ashok S K, Murthy S and Troost J 2005 D-branes in non-critical superstrings and minimal super Yang-Mills in various dimensions (arXiv:hep-th/0504079); Grassi P A and Oz Y 2005 Non-critical covariant superstrings (arXiv:hep-th/0507168). [50] Nakamura K et al. (Particle Data Group) 2010 J. Phys. G: Nucl. Part. Phys. 37 075021.
Conditional beam splitting attack on quantum key distribution John Calsamiglia${}^{1}$, Stephen M. Barnett${}^{2}$ and Norbert Lütkenhaus${}^{3}$ ${}^{1}$ Helsinki Institute of Physics, PL 64, FIN-00014 Helsingin yliopisto, Finland ${}^{2}$ Department of Physics and Applied Physics, University of Strathclyde, John Anderson Building, 107 Rottenrow, Glasgow G4 0NG, Scotland. ${}^{3}$ MagiQ Technologies Inc., 275 Seventh Avenue, 26th floor, NY 10001-6708, USA. (Received November 19, 2020) Abstract We present a novel attack on quantum key distribution based on the idea of adaptive absorption [1]. The conditional beam splitting attack is shown to be much more efficient than the conventional beam splitting attack, achieving a performance similar to that of the powerful, but currently unfeasible, photon number splitting attack. The implementation of the conditional beam splitting attack, based solely on linear optical elements, is well within reach of current technology. I Introduction The use of quantum effects extends our communication capabilities beyond the solutions offered by classical communication theory. A prominent example is quantum key distribution (qkd), which allows two parties to expand a small initial shared secret key into a larger one. This task, which cannot be accomplished within classical communication theory, enables the two parties to exchange secret messages via the encryption technique of the one-time pad [2]. The idea was introduced by Wiesner [3] and the first complete protocol for qkd was given by Bennett and Brassard [4]. In a quantum optical implementation, the sender (Alice) encodes a random bit value “0” or “1” in the orthogonal polarization states of a single photon. She chooses at random either a linear or a circular polarization basis. The receiver (Bob) measures in a polarization basis chosen at random from the same two bases.
In the following classical communication, Alice and Bob identify those signals for which they used the same basis, and the corresponding bit values form the sifted key. Either due to noise or due to eavesdropping, Alice’s and Bob’s versions of the sifted key differ. As long as the error rate is below some threshold, they can correct these errors and perform privacy amplification [5] to obtain a secure key. The theoretical security analysis of this scheme has been a subject of intense research, and only recently has a full proof of security for the whole protocol been given [6, 7, 8, 9]. A first implementation of this protocol [10] demonstrated the feasibility of this scheme. Since then, many groups have improved the implementations. State of the art schemes can maintain the coherence of the system over distances as large as 50 km. However, the signals used in these implementations are not single photon signals. Instead, weak coherent pulses (wcp) are used, with typical average photon numbers of $0.1$ or higher. The signals are described by coherent states in the chosen polarization mode. This modification of the signals, together with the large loss of the fiber optical implementations over long distances, opens a security loophole [11]. The restrictions imposed on practical implementations by the use of wcp signals have been demonstrated [12], giving a limit on the distance over which qkd can be performed as a function of Bob’s detection efficiency and dark count rate. Nevertheless, it has been shown that, despite these restrictions, it is still possible to obtain a secure key [13, 14]. The key process which makes Eve very powerful if Alice and Bob use wcp signals is the photon number splitting attack (pns) [12, 13]. In this attack Eve performs a quantum non-demolition measurement (qnd) of the total number of photons of the signal. Whenever she finds that a signal contains two or more photons, she deterministically takes one photon out of the signal.
The remaining photons of the signal are then forwarded over a lossless channel to Bob. For a wcp with mean photon number $\mu$ sent by Alice, Eve obtains a single photon in the same polarization state as those in the signal reaching Bob with probability $$p_{{\sc pns}}^{\rm{succ}}=1-{\rm e}^{-\mu}-\mu{\rm e}^{-\mu}.$$ (1) Eve can now delay the measurement on that photon until she learns the polarization basis of each signal, thereby learning the bit value for each signal. In order to ensure that Bob does not get too many signals compared to the installed lossy quantum channel, Eve can actually block some signals completely, starting with the initial one-photon signals which cannot be split. We find that this strategy gives the complete key to Eve once the losses of the installed quantum channel are so high that she can block all single photon signals. This splitting process used in the pns attack is allowed by quantum mechanics, but its implementation is out of reach of current technology. Therefore earlier analyses of this situation made use of the beam splitting attack (bs), which has the appeal of simplicity and feasibility. The basic concept uses the idea that a lossy quantum channel acts like a combination of a lossless channel and a beam splitter which accounts for the losses. Eve monitors the second output arm of the beam splitter and will gain complete knowledge of a bit of the sifted key (via a delayed measurement) if a multi-photon signal is split such that Bob and Eve both get at least one photon of the signal. The central quantities are the probability that Bob receives a non-vacuum signal $$P_{{\sc bs}}^{\rm{B}}[\neg 0]=1-\exp\left(-\mu\eta\right)$$ (2) where $\eta$ is the single photon transmission efficiency of the quantum channel.
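Eqs. (1) and (2) are plain Poisson statistics for a coherent pulse and can be checked numerically; a minimal sketch (the value $\mu=0.1$ is the typical experimental mean photon number quoted earlier):

```python
import math

def p_pns_succ(mu):
    """Eq. (1): probability that the pulse carries at least two photons,
    so the pns attack can split one off and still leave one for Bob."""
    return 1 - math.exp(-mu) - mu * math.exp(-mu)

def P_B_bs_nonvac(mu, eta):
    """Eq. (2): probability that Bob receives a non-vacuum signal
    behind a beam splitter of transmissivity eta."""
    return 1 - math.exp(-mu * eta)

mu = 0.1   # typical experimental mean photon number
# For mu = 0.1, only about 0.5% of the pulses are splittable by the pns.
```
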
The probability that Bob and Eve both receive a signal is $$p_{{\sc bs}}^{\rm{succ}}=\left[1-\exp\left(-\mu\eta\right)\right]\left[1-\exp\left(-\mu(1-\eta)\right)\right].$$ (3) Despite its simplicity and its perfect simulation of the lossy channel, the beam splitting attack is very ineffective when replacing channels with large losses, i.e. long transmission distances. In that case, for example, two photon signals are more likely to have both photons directed to Eve (and therefore becoming useless) than to be split. In [1] the authors present the idea of adaptive absorption. This consists of sending a photonic signal through a linear absorber in which absorption events can be continuously monitored, in such a way that as soon as a single photon is absorbed a feed-forward mechanism decouples the signal from the absorbing medium. With this simple procedure it is possible to extract precisely one photon from a field mode prepared in any state (other than the vacuum, of course). In this paper we show how the idea of adaptive absorption leads to the conditional beam splitting attack (cbs) on weak coherent pulse qkd. By using only linear optical elements, the cbs reduces the number of events where more than one photon is split off. This allows an eavesdropping efficiency that far exceeds that of the conventional beam splitting attack and can be as large as that of the pns. The paper is organized as follows. In Sec. II we describe the cbs attack and introduce the quantum jump method, which in turn will be used to calculate the state of the signal during the various stages of the attack. The results will allow us to compare the performances of the cbs and the conventional bs attack. For the sake of simplicity this will be done in the scenario in which the eavesdropper is able to delay her measurement until the encoding basis is announced by Alice [15]. In Sec.
III we consider the more realistic situation in which Eve does not have the technological skills to store photons, and introduce a variation of the cbs where Eve tries to split two single photons from the signal before forwarding it to Bob. In Sec. IV we study the photon statistics in Bob’s detectors and see that in principle Alice and Bob could use this information to disclose Eve’s attack. The possibility of improving the cbs attack by using mixed strategies will be investigated in Sec. V. In Sec. VI we rederive some basic results for finite beam splitters and compare them with the ones obtained using the quantum jump method. Sec. VII concludes the paper with a brief summary. II Conditional beam splitting attack As mentioned in the introduction, the hope is to take advantage of the fact that the signal bits are implemented through polarized coherent states with very low mean photon number (instead of single photons). To do that Eve will weakly couple her modes to the signal modes and try to extract one excitation from the signal sent by Alice. As soon as she gets one photon into her modes, Eve will allow any remaining signal photons to reach Bob through an ideal channel. Otherwise she will keep on trying for a longer time. If she does not succeed after a maximum coupling time $\tau$, Eve will directly send the signal to Bob through the ideal channel. After Alice announces publicly the encoding basis used to send each of the bits, Eve can measure her photon to learn the bit value of the transmitted signal [15]. Only in the cases where the multiphoton signal is split in such a way that both Eve and Bob receive a non-vacuum signal will the bit value learned by Eve form part of the sifted key shared between Alice and Bob. We will therefore refer to the probability of this event as the probability of success of the cbs ($p^{\rm{succ}}_{{\sc cbs}}$).
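The protocol just described can be mimicked by a toy Monte Carlo: draw the photon number of the pulse from a Poisson distribution, let each photon jump independently into Eve's modes, and stop the coupling at the first jump. The per-photon survival probability over the coupling time $\tau$ is $\gamma_\tau^2$, as derived below; the discretization into independent per-photon jumps is a modeling assumption of this sketch, and the closed form it is checked against is Eq. (21) of the analysis that follows.

```python
import math
import random

def simulate_cbs(mu, gamma2, trials, rng):
    """Toy Monte Carlo of the cbs success probability.  Each signal
    photon of a Poissonian pulse independently jumps into Eve's mode;
    per-photon survival over the coupling time tau is gamma2.  After
    the first jump the remaining photons reach Bob unharmed, so the
    attack succeeds iff n >= 2 and at least one photon jumped."""
    succ = 0
    for _ in range(trials):
        # Poisson sample via inversion (stdlib only, on purpose)
        n, p, u = 0, math.exp(-mu), rng.random()
        c = p
        while u > c:
            n += 1
            p *= mu / n
            c += p
        if n >= 2 and rng.random() < 1 - gamma2**n:
            succ += 1   # Eve holds 1 photon, Bob keeps n-1 >= 1
    return succ / trials

rng = random.Random(1)
mu, gamma2 = 0.5, 0.5
est = simulate_cbs(mu, gamma2, 200_000, rng)
# Closed form, Eq. (21) below:
exact = 1 - mu * math.exp(-mu) * (1 - gamma2) - math.exp(-(1 - gamma2) * mu)
```

With 200 000 trials the statistical error is a few parts in $10^4$, so the estimate lands on the analytic value.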
Since this attack does not produce any qubit errors, we will take the probability of success as a figure of merit for the attack. On the other hand, in order to remain unnoticed, Eve’s attack has to be such that the number of non-vacuum signals that arrive at Bob agrees with what he expects from the lossy channel. Hence, the probability $P^{\rm{B}}_{{\sc cbs}}[\neg 0]$ that Bob receives a non-vacuum signal fixes a bound on the eavesdropping attack. The probabilities $p^{\rm{succ}}_{{\sc cbs}}$ and $P^{\rm{B}}_{{\sc cbs}}[\neg 0]$ will be the central quantities when evaluating the attack. In Fig. 1 a possible implementation of the cbs is shown. The initial state sent by Alice occupies only two photonic modes ($a$ and $b$), corresponding for example to the two polarization degrees of freedom of a traveling mode. Conditional beam splitting consists of sending the input state to a polarization independent weak beam splitter. A measurement to determine if there are any photons is then done on the weakly coupled output arm (modes $a_{e}$ and $b_{e}$). If no photon is detected the signal is sent through an identical beam splitter again. Otherwise the signal is transmitted through a perfect channel without any further processing. To investigate this procedure we will take the limit of infinitesimally weak beam splitting, which will allow us to take results from quantum jump methods [16, 17, 18] as we did for the study of adaptive absorption [1]. In Sec. VI we will give some numerical results for finitely weak beam splitters. Quantum jump methods are wave function (as opposed to density matrix) approaches to study the evolution of small systems coupled to a large reservoir. In addition to being a very powerful method, this approach has a nice physical interpretation: the stochastic evolution of the wave function corresponds to the stochastic readouts of the continuously monitored reservoir. We preface our analysis with a brief review of the quantum jump approach to open systems.
Suppose that initially the system is in the state $\left|\phi(0)\right>$. If no jump occurs, the evolution of the system is described by the effective Hamiltonian $H_{\rm{eff}}=H_{S}-\frac{i}{2}\sum_{m}J^{\dagger}_{m}J_{m}$, $$\left|\phi^{0}(t)\right>=\mbox{e}^{-iH_{\rm{eff}}t}\left|\phi(0)\right>\mbox{.}$$ (4) Here $H_{S}$ is the Hamiltonian of the isolated system and the $J_{m}$’s are the jump operators which account for the coupling to the reservoir. Since the effective Hamiltonian is not Hermitian, the state $\left|\phi^{0}(t)\right>$ is not normalized. The square of its norm gives the probability of having no jump after a time $t$, $$p_{0}(t)=\left<\phi(0)\right|\mbox{e}^{iH_{\rm{eff}}^{\dagger}t}\mbox{e}^{-iH_{\rm{eff}}t}\left|\phi(0)\right>\mbox{.}$$ (5) If a jump to a mode $m$ occurs in a time between $t$ and $t+\delta t$, then the system will be in the state $$\left|\phi^{m}(t+\delta t)\right>=\sqrt{\delta t}\,J_{m}\left|\phi(t)\right>$$ (6) immediately after this period. Again, the probability of this event is given by the square of the norm of the conditional state, $$\delta p_{m}(t)=\left<\phi(t)\right|J_{m}^{\dagger}J_{m}\left|\phi(t)\right>\delta t\mbox{.}$$ (7) Using these probabilities one can follow the history of the system’s wave function when the reservoir is continuously monitored to detect what kind of jump occurred, or if any jump occurred at all. It can be shown that by averaging the state of the system at a given time $t$ over different histories, one recovers the master equation $$\displaystyle\frac{d\rho}{dt}$$ $$\displaystyle=$$ $$\displaystyle-i[H_{S},\rho]-\frac{1}{2}\sum_{m}(J^{\dagger}_{m}J_{m}\rho+\rho J^{\dagger}_{m}J_{m})$$ (8) $$\displaystyle+\sum_{m}J_{m}\rho J_{m}^{\dagger}$$ for the system density operator $\rho$. The first term in this equation describes the standard unitary evolution of the system. The last two terms describe the relaxation process due to the coupling of the system to the reservoir.
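Eq. (5) can be checked deterministically for the simplest relevant case: a single bosonic mode with $H_S=0$ and one jump operator $J=\sqrt{\kappa}\,a$ (a choice made here for illustration). Since $J^\dagger J=\kappa\,\hat n$ is diagonal in the Fock basis, the no-jump evolution merely damps each Fock amplitude $c_n$ by $\mathrm{e}^{-\kappa n t/2}$, and for an initial coherent state with mean photon number $\mu$ the norm squared reproduces $\exp[\mu(\mathrm{e}^{-\kappa t}-1)]$:

```python
import math

def no_jump_prob(mu, kappa, t, nmax=60):
    """p0(t) of Eq. (5) for a single mode with J = sqrt(kappa) a and
    H_S = 0, starting from a coherent state with mean photon number mu.
    The Fock amplitudes pick up exp(-kappa n t / 2), so
    p0 = sum_n e^{-mu} mu^n / n! * exp(-kappa n t)."""
    p0 = 0.0
    log_pn = -mu          # log of the n = 0 Poisson weight |c_0|^2
    for n in range(nmax):
        p0 += math.exp(log_pn - kappa * n * t)
        log_pn += math.log(mu) - math.log(n + 1)
    return p0

mu, kappa, t = 0.8, 0.3, 2.0
numeric = no_jump_prob(mu, kappa, t)
analytic = math.exp(mu * (math.exp(-kappa * t) - 1))
```

The same diagonal structure is what makes the conditional states of the cbs analysis below exactly computable.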
In order for this description to hold, the coupling and the reservoir have to be such that the jump probability $\delta p_{m}(t)$ is very small and does not depend on the previous history of the system. In this paper we use the quantum jump method to study the evolution of the system formed by the signal modes ($a$ and $b$) and Eve’s modes ($a_{e}$ and $b_{e}$). The action of the infinitesimally weak beam splitting, together with a measuring device that checks whether the number of photons in Eve’s modes has increased, will play the role of the continuously monitored reservoir. An increase in the photon number in Eve’s modes is then represented by the jump operator $$J=\epsilon(a_{e}^{\dagger}a+b_{e}^{\dagger}b)\mbox{,}$$ (9) where $\epsilon\ll 1$ is the reflection coefficient of the weak beam splitting. After this jump the photon in Eve’s modes and the remaining signal photons will share an entangled wave function. If no jump occurs Eve’s modes will remain in the vacuum state. The initial state is given by $$\left|\phi(0)\right>=\left|\alpha;\beta\right>\left|0;0\right>\mbox{,}$$ (10) where the first two modes are the signal modes, initially in a coherent state with a definite polarization, and the last two modes are Eve’s modes. The mean photon number of the signal sent by Alice is $\mu=|\alpha|^{2}+|\beta|^{2}$. If Eve does not intervene, then the signal that Bob receives after going through the lossy channel is $$\left|\phi^{B}_{\eta}\right>=\left|\sqrt{\eta}\alpha;\sqrt{\eta}\beta\right>\left|0;0\right>,$$ (11) where $\eta$ is the transmissivity of the channel. This means that the probability that Bob gets a non-vacuum signal is $$P^{\rm{B}}_{\eta}[\neg 0]=1-P^{B}_{\eta}[0]=1-\exp(-\mu\eta),$$ (12) where $P^{\rm{B}}_{\eta}[0]$ is the probability of Bob getting the vacuum signal. We now proceed to calculate what happens when Eve tries to eavesdrop on the signal using the conditional beam splitting attack (cbs).
At time $t_{0}=0$ Eve starts a conditional beam splitting attack on the signal sent by Alice. The conditional state of the system after a time $t$ when no photon has been detected in Eve’s modes is $$\displaystyle\left|\phi^{0}(t)\right>$$ $$\displaystyle=$$ $$\displaystyle\mbox{e}^{-t\frac{1}{2}J^{\dagger}J}\left|\phi(0)\right>$$ (13) $$\displaystyle=$$ $$\displaystyle\mbox{e}^{-\frac{t\epsilon^{2}}{2}(n_{a}+n_{b})}\left|\alpha;\beta\right>\left|0;0\right>$$ $$\displaystyle=$$ $$\displaystyle\mbox{e}^{\frac{1}{2}(\gamma_{t}^{2}-1)\mu}\left|\gamma_{t}\alpha;\gamma_{t}\beta\right>\left|0;0\right>\mbox{,}$$ where we have defined $\gamma_{t}=\exp(-\frac{t\epsilon^{2}}{2})$ and used the normal ordered form of the exponential of the number operator [19]. The squared norm of this state is the probability of detecting no photon in Eve’s modes after a time $t$, $$p_{0}(t)=\left<\phi^{0}(t)\right.\left|\phi^{0}(t)\right>=\mbox{e}^{(\gamma_{t}^{2}-1)\mu}\mbox{.}$$ (14) If the first jump occurs in the time interval $[t,t+dt]$ then the conditional state of the system is $$\displaystyle\left|\phi^{1}(t+dt)\right>=J{\rm e}^{-t\frac{1}{2}J^{\dagger}J}\left|\phi(0)\right>$$ $$\displaystyle={\rm e}^{\frac{1}{2}(\gamma_{t}^{2}-1)\mu}J\left|\gamma_{t}\alpha;\gamma_{t}\beta\right>\left|0;0\right>$$ $$\displaystyle={\rm e}^{\frac{1}{2}(\gamma_{t}^{2}-1)\mu}\epsilon\gamma_{t}\left|\gamma_{t}\alpha;\gamma_{t}\beta\right>(\alpha\left|1;0\right>+\beta\left|0;1\right>)\mbox{.}$$ (15) The probability density of detecting a photon in Eve’s modes in the time interval $[t,t+\delta t]$ is therefore given by $$p_{1}(t)=\left<\phi^{1}(t)\right.\left|\phi^{1}(t)\right>=\epsilon^{2}\gamma_{t}^{2}\mu\mbox{e}^{(\gamma_{t}^{2}-1)\mu}\mbox{.}$$ (16) As already mentioned, the first thing that Alice and Bob will check to detect the presence of Eve is that Bob receives a fraction of non-vacuum signals consistent with the lossy channel, given in (12).
That is, $$P^{{\rm B}}_{{\sc cbs}}[\neg 0]=P^{{\rm B}}_{\eta}[\neg 0]\;\mbox{ or }\;P^{{\rm B}}_{{\sc cbs}}[0]=P^{{\rm B}}_{\eta}[0]\mbox{.}$$ (17) The total probability that after the cbs attack Bob gets a vacuum signal is $$\displaystyle P^{{\rm B}}_{{\sc cbs}}[0]$$ $$\displaystyle=$$ $$\displaystyle p_{0}(\tau)|\!\left<0;0\right.\left|\gamma_{\tau}\alpha;\gamma_{\tau}\beta\right>|^{2}+$$ (18) $$\displaystyle+\int_{0}^{\tau}p_{1}(t)|\!\left<0;0\right.\left|\gamma_{t}\alpha;\gamma_{t}\beta\right>|^{2}dt$$ $$\displaystyle=$$ $$\displaystyle{\rm e}^{-\mu}(1+\mu(1-\gamma_{\tau}^{2})),$$ where we have made use of Eqs. (14) and (16). With the results of Eqs. (12) and (18) we find the value of the coupling time $\tau$ required so that the condition expressed in Eq. (17) is fulfilled, $$\gamma_{\tau}^{2}={\rm e}^{-\epsilon^{2}\tau}=\frac{1}{\mu}(1+\mu-{\rm e}^{\mu(1-\eta)})\mbox{.}$$ (19) Notice that $\gamma_{\tau}^{2}<\eta$. In order to quantify the performance of the cbs we will now calculate the probability of a successful splitting $p^{{\rm succ}}_{{\sc cbs}}$. This is the probability that Eve manages to extract one photon from the signal and still leaves at least one photon for Bob. Splittings that leave the transmitted signal in the vacuum state are not useful to Eve, since these bits will not contribute to the sifted key. The success probability for the cbs attack is given by $$\displaystyle p^{{\rm succ}}_{{\sc cbs}}$$ $$\displaystyle=$$ $$\displaystyle 1-(P^{{\rm B}}_{{\sc cbs}}[0]+P^{E}_{{\sc cbs}}[0]-P^{{\rm EB}}[0,0])$$ $$\displaystyle=$$ $$\displaystyle 1-{\rm e}^{-\mu}(1+\mu(1-\gamma_{\tau}^{2}))-{\rm e}^{(\gamma_{\tau}^{2}-1)\mu}+{\rm e}^{-\mu}$$ $$\displaystyle=$$ $$\displaystyle 1-\mu{\rm e}^{-\mu}(1-\gamma_{\tau}^{2})-{\rm e}^{-(1-\gamma_{\tau}^{2})\mu},$$ (21) where $P^{E}_{{\sc cbs}}[0]=p_{0}(\tau)$ is the probability of having no photon in Eve’s modes (i.e.
no jump) after the attack and $P^{{\rm EB}}[0,0]={\rm e}^{-\mu}$ is the probability that there are photons neither in Eve’s nor in Bob’s modes. By inverting Eq. (19) we find the transmissivity $\eta$ ‘mimicked’ (in the sense of Eq. (17)) by the cbs attack, $$\eta_{{\sc cbs}}=1-\frac{1}{\mu}\ln(1+\mu(1-\gamma_{\tau}^{2})).$$ (22) Since $\eta_{{\sc cbs}}$ is an increasing function of $\gamma_{\tau}^{2}$, and we know that this achieves its minimum for $\tau\rightarrow\infty$ ($\gamma_{\tau}^{2}\rightarrow 0$), we find that, given a mean photon number $\mu$, the attack just described cannot mimic arbitrarily large channel losses. The minimum transmissivity is given by $$\eta_{{\sc cbs}}^{{\rm min}}=1-\frac{1}{\mu}\ln(1+\mu)\approx\frac{1}{2}\mu-\frac{1}{3}\mu^{2}+O(\mu^{3})\mbox{.}$$ (23) This makes sense since for $\tau\rightarrow\infty$ all non-vacuum signals will leak one photon to Eve while the remaining photons always reach Bob. It is clear, therefore, that the removal of only one photon cannot account for arbitrarily high channel losses. In order to meet Bob’s expectations even when $\eta\leq\eta_{{\sc cbs}}^{{\rm min}}$, Eve can apply the protocol corresponding to $\eta_{{\sc cbs}}^{{\rm min}}$ and then block the outgoing signals with a probability $$p^{{\rm block}}_{{\sc cbs}}=\frac{1-{\rm e}^{-\mu\eta}}{1-{\rm e}^{-\mu\eta_{{\sc cbs}}^{{\rm min}}}}\mbox{.}$$ (24) Note that if the channel loss is equal to or larger than $1-\eta_{{\sc cbs}}^{{\rm min}}$, then the probability of success is equal to the probability of having more than one photon in a signal pulse $$p^{{\rm succ}}_{{\sc cbs}}(\eta_{{\sc cbs}}^{{\rm min}})=1-{\rm e}^{-\mu}-\mu{\rm e}^{-\mu}\mbox{.}$$ (25) This means that for these high channel losses (i.e. $\eta\leq\eta_{{\sc cbs}}^{{\rm min}}$) Eve can extract one excitation from the signal without modifying Bob’s expected number of non-vacuum signals.
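The closed forms of Eqs. (14)–(25) admit simple numerical consistency checks: the first-jump density integrates with the no-jump probability to unity for any $\tau$; the coupling time of Eq. (19) makes Bob's vacuum probability of Eq. (18) exactly equal that of the lossy channel; and below $\eta_{{\sc cbs}}^{{\rm min}}$ the blocked success probability collapses to $1-{\rm e}^{-\mu\eta}$ as in Eq. (26). A minimal sketch (the parameter values are arbitrary test points):

```python
import math

def p0(t, mu, eps):                        # Eq. (14)
    g2 = math.exp(-eps**2 * t)             # gamma_t^2
    return math.exp((g2 - 1) * mu)

def p1(t, mu, eps):                        # Eq. (16), first-jump density
    g2 = math.exp(-eps**2 * t)
    return eps**2 * g2 * mu * math.exp((g2 - 1) * mu)

mu, eps, tau = 0.3, 0.4, 50.0

# (i) Normalization: p0(tau) + integral_0^tau p1 dt = 1 for any tau
# (midpoint rule; the integrand is smooth, so this is very accurate).
steps = 100_000
h = tau / steps
norm = p0(tau, mu, eps) + h * sum(p1((k + 0.5) * h, mu, eps) for k in range(steps))

# (ii) With gamma_tau^2 chosen as in Eq. (19) (eta must exceed eta_min),
# Bob's vacuum probability of Eq. (18) equals exp(-mu*eta).
eta = 0.2
g2_tau = (1 + mu - math.exp(mu * (1 - eta))) / mu            # Eq. (19)
P_B_vac = math.exp(-mu) * (1 + mu * (1 - g2_tau))            # Eq. (18)

# (iii) Blocking: below eta_min the product p_block * p_succ(tau -> inf)
# reduces exactly to 1 - exp(-mu*eta), confirming the signs in Eq. (24).
eta_min = 1 - math.log(1 + mu) / mu                          # Eq. (23)
eta_low = 0.02
p_block = (1 - math.exp(-mu * eta_low)) / (1 - math.exp(-mu * eta_min))
p_succ_lim = 1 - math.exp(-mu) - mu * math.exp(-mu)          # Eq. (25)
```
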
All Bob’s non-vacuum contributions effectively come from the multiphoton part of the signal pulses, and Eve will possess one photon from each of those signals. Therefore she will obtain the full sifted key shared by Alice and Bob after the public announcement of the bases. This could never have happened if Eve had chosen the bs attack. This is a very important feature since Eve’s knowledge of the full key does not leave any room for Alice and Bob to perform privacy amplification to obtain a secure key. Taking into account the blocking, in the regime of high losses, the success probability calculated in (21) takes the following form $$\displaystyle p^{{\rm succ}}_{{\sc cbs}}=\left\{\begin{array}[]{cc}1-\mu{\rm e}^{-\mu}(1-\gamma_{\tau}^{2})-{\rm e}^{-(1-\gamma_{\tau}^{2})\mu}&\mbox{: }\eta>\eta_{{\sc cbs}}^{{\rm min}}\\ \\ p^{{\rm block}}_{{\sc cbs}}\lim\limits_{\gamma_{\tau}\to 0}p^{{\rm succ}}_{{\sc cbs}}=1-{\rm e}^{-\mu\eta}&\mbox{: }\eta\leq\eta_{{\sc cbs}}^{{\rm min}}\end{array}\right.\mbox{.}$$ (26) In order to evaluate the performance of an attack it is more convenient to normalize the probability of success by the probability that Bob gets a non-vacuum signal (a potential sifted key bit) to obtain the key fraction known by Eve, $$f_{{\sc cbs}}=\frac{p^{{\rm succ}}_{{\sc cbs}}}{1-P^{{\rm B}}_{{\sc cbs}}[0]}\mbox{.}$$ (27) As discussed previously, for $\eta\leq\eta_{{\sc cbs}}^{{\rm min}}$ Eve can acquire the whole key, so in this regime $f_{{\sc cbs}}=1$. For other channel loss values the key fraction never reaches unity. Similarly one can define the same quantity for the bs attack, obtaining $$f_{{\sc bs}}=\frac{p^{{\rm succ}}_{{\sc bs}}}{1-P^{{\rm B}}_{{\sc bs}}[0]}=1-{\rm e}^{-(1-\eta)\mu},$$ (28) where we have made use of Eqs. (2) and (3). It is easy to prove that the fraction of the key known by Eve is always bigger for the cbs than for the bs attack. In Fig.
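The key fractions of Eqs. (27) and (28) can be compared numerically; for $\mu=0.1$, $\eta=0.1$ the ratio reproduces the performance quotient $q_{{\sc cbs}}\approx 5.4$ quoted in the text:

```python
import math

def f_bs(mu, eta):
    """Eq. (28): fraction of the sifted key known to Eve after bs."""
    return 1.0 - math.exp(-(1.0 - eta) * mu)

def f_cbs(mu, eta):
    """Eq. (27) for eta > eta_min: p_succ_cbs / (1 - P_B_cbs[0])."""
    g2 = (1.0 + mu - math.exp(mu * (1.0 - eta))) / mu    # Eq. (19)
    p_succ = (1.0 - mu * math.exp(-mu) * (1.0 - g2)
              - math.exp(-(1.0 - g2) * mu))              # Eq. (21)
    p_bob_vac = math.exp(-mu) * (1.0 + mu * (1.0 - g2))  # Eq. (18)
    return p_succ / (1.0 - p_bob_vac)
```

The first assertion below checks that the cbs always beats the bs for a representative parameter choice, the second reproduces the quoted quotient.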
2 we can see these fractions as a function of the channel transmissivity for a typical value of the mean photon number used in current experiments (see also Fig. 8). In order to compare quantitatively the cbs with the bs  we define the performance quotient $q_{{\sc cbs}}=\frac{f_{{\sc cbs}}}{f_{{\sc bs}}}$ which is plotted in Fig. 3 as a function of the mean photon number of the signal pulses and the channel transmissivity. We see that for small $\mu$ the cbs  can be substantially better than the bs. For example, for the experimentally reasonable values $\mu=0.1$ and $\eta=0.1$ the cbs provides Eve with a fraction of the key $q_{{\sc cbs}}=5.4$ times bigger than the bs attack. For a fixed mean photon number $\mu$, the optimum advantage $q^{{\rm max}}_{{\sc cbs}}=1+\mu^{-1}$ is achieved when the channel losses are $\eta=\eta_{{\sc cbs}}^{{\rm min}}$. III CBS without storage There is an important fact that makes the cbs, as described above, rather more technically demanding than the bs. In order to follow the cbs protocol described in the previous section Eve has to be able to perform two experimentally non-trivial tasks [20]. a) Firstly she has to detect the presence of photons in her modes $a_{e}$ and $b_{e}$ without altering their polarization. This is necessary because Eve has to be able to carry out the conditional dynamics, i.e. stop the splitting as soon as she gets the desired photon. This operation is not as technologically demanding as the pns since it only needs to discriminate the non-vacuum states from the vacuum. However, it is out of reach of the immediately available technology. b) The second task Eve must be able to realize is to store the extracted photon until the encoding basis is publicly announced by Alice. In the conventional beam splitting attack Eve is not required to carry out task a). 
On the other hand, we notice that Alice can always delay the public announcement, making it harder for Eve to store the signal coherently and thereby effectively constraining Eve to attacks that do not rely on the storage of the extracted signal. This of course applies to all the error-free attacks that we have seen in this paper, that is the pns, bs and cbs. Inasmuch as Eve is forced to carry out the signal measurement before knowing the encoding basis, in the cbs attack she no longer has to rely on her ability to detect photons without disturbing their polarization (task a)). Eve can realize the conditional dynamics by directly measuring the extracted photons with photodetectors. Acknowledging the impracticability of indefinite qubit storage thus puts the bs and cbs on an equal footing as far as technological difficulty is concerned, and still leaves the pns as unfeasible. As in the previous section, we can now calculate the performance of the cbs in this new no-storage or direct measurement scenario. In this situation the cbs is bound to fail in half of the cases. Eve only succeeds when she measures the extracted photon in the right basis. The probability of success and key fraction in the directly measured conditional beam splitting (dcbs) will accordingly be reduced by a factor $\frac{1}{2}$, i.e. $p^{{\rm succ}}_{{\sc dcbs}}=\frac{1}{2}p^{{\rm succ}}_{{\sc cbs}}$ and $f_{{\sc dcbs}}=\frac{1}{2}f_{{\sc cbs}}$. Clearly, since the number of extracted photons is the same as in the scenario with the possibility of storage, the number of non-vacuum signals that arrive at Bob’s site remains the same.
The success probability for the directly measured beam splitting attack (dbs) can be calculated taking into account that Eve’s attack is unsuccessful only in the case where all split photons from a signal are measured [21] in the wrong basis, $$\displaystyle p^{{\rm succ}}_{{\sc dbs}}$$ $$\displaystyle=$$ $$\displaystyle(1-e^{-\eta\mu})\left(1-\sum_{n=0}^{\infty}\frac{1}{2^{n}}\frac{\mu^{n}(1-\eta)^{n}}{n!}e^{-\mu(1-\eta)}\right)$$ (29) $$\displaystyle=$$ $$\displaystyle(1-e^{-\eta\mu})(1-e^{-\frac{\mu}{2}(1-\eta)})\mbox{.}$$ The fraction of the key known by Eve in this attack is therefore $$f_{{\sc dbs}}=1-e^{-\frac{\mu}{2}(1-\eta)}\mbox{.}$$ (30) Since for small $\mu$ the most important contribution in the bs comes from the two-photon signals, as in the cbs, the success probability of both attacks is reduced approximately by the same factor $\frac{1}{2}$. But this factor is always a bit larger for the dbs since, in the cases where more than one photon per signal is split, Eve has a bigger chance to measure in the right basis. The performance quotient is now defined relative to the dbs, $q_{{\sc dcbs}}=\frac{f_{{\sc dcbs}}}{f_{{\sc dbs}}}$, and it is plotted in Fig. 4 for relevant values of $\mu$ and $\eta$. We see that for large mean photon numbers ($\mu>\ln 4\approx 1.4$) the dbs can actually perform slightly better than the ${\sc dcbs}$ for some range of channel losses (see also Figs. 8 and 7). Despite this last remark it looks like the cbs maintains its efficiency over the bs under the no-storage constraint (compare Fig. 4 with Fig. 3), but in fact under this scenario the ${\sc dcbs}$ has lost the threatening feature of being able to extract the full key ($f_{{\sc cbs}}=1$) for high channel losses (see Fig. 8). In the remainder of this section we will propose a variation of the cbs which allows Eve to extract the full key even in the no-storage scenario, and to perform better than the dbs for any mean photon number.
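The Poisson sum in Eq. (29) collapses to the quoted closed form; a direct numerical check with a truncated sum:

```python
import math

def p_succ_dbs_sum(mu, eta, nmax=60):
    """Eq. (29) with the Poisson sum kept explicit: the attack fails
    only when every split photon is measured in the wrong basis."""
    lam = mu * (1.0 - eta)  # mean photon number split off to Eve
    fail = sum((0.5 ** n) * lam ** n / math.factorial(n) * math.exp(-lam)
               for n in range(nmax))
    return (1.0 - math.exp(-eta * mu)) * (1.0 - fail)

def p_succ_dbs(mu, eta):
    """Closed form of Eq. (29)."""
    return ((1.0 - math.exp(-eta * mu))
            * (1.0 - math.exp(-0.5 * mu * (1.0 - eta))))
```

The truncation at `nmax=60` is far beyond the point where the Poisson terms become negligible for the weak pulses considered here.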
The idea of the adapted conditional beam splitting attack (acbs) is to extract two photons, one by one, from the signal and measure each of them in a different polarization basis. In order to do that, Eve just has to follow the same protocol as in the single photon cbs but, instead of stopping the beam splitting as soon as she detects one photon, she has to continue the splitting until a second photon is extracted. As previously, in order to match the expected number of photons on Bob’s side, the splitting procedure can run for a maximum coupling time $\tilde{\tau}$ after which the signal must be transmitted to Bob through a lossless channel. Obviously, this attack will only be advantageous in the no-storage scenario since otherwise, when the key can be extracted from the first photon, it is of no use to extract a second photon. The total probability that Bob gets a vacuum signal after the acbs is $$\displaystyle P^{{\rm B}}_{{\sc acbs}}[0]=p_{0}(\tilde{\tau})p_{c}^{0}(\mu)+\int_{0}^{\tilde{\tau}}p_{1}(t_{1})$$ $$\displaystyle\left[\int_{t_{1}}^{\tilde{\tau}}p_{11}(t_{2}|t_{1})p_{c}^{0}(\gamma_{t_{2}}^{2}\mu)dt_{2}+p_{10}(\tilde{\tau}|t_{1})p_{c}^{0}(\gamma_{\tilde{\tau}}^{2}\mu)\right]dt_{1}$$ $$\displaystyle={\rm e}^{-\mu}(1+\mu(1-\gamma_{\tilde{\tau}}^{2})+\frac{\mu^{2}}{2}(1-\gamma_{\tilde{\tau}}^{2})^{2})\mbox{,}$$ (31) where $p_{c}^{n}(\mu)$ is the probability of having $n$ photons in a coherent state with mean photon number $\mu$, $p_{11}(t_{2}|t_{1})=\epsilon^{2}\gamma_{t_{2}}^{2}\mu{\rm e}^{\mu(\gamma_{t_{2}}^{2}-\gamma_{t_{1}}^{2})}$ is the probability of having a jump at $t_{2}$ conditional on a previous jump at $t_{1}$, and $p_{10}(\tilde{\tau}|t_{1})={\rm e}^{\mu(\gamma_{\tilde{\tau}}^{2}-\gamma_{t_{1}}^{2})}$ is the probability of having no jump in the time interval $[t_{1},\tilde{\tau}]$ conditional on a jump at $t_{1}$. These probabilities can be calculated following the general results in the beginning of Sec. II.
The previous result is equal to the corresponding probability in the cbs (Eq. (18)) plus a term of second order in $\mu$ which represents the removal of two photons. The coupling time $\tilde{\tau}$ can now be fixed so that Bob’s probability of detecting at least one photon agrees with the result he would expect from the lossy channel, $$\displaystyle P^{{\rm B}}_{{\sc acbs}}[\neg 0]$$ $$\displaystyle=$$ $$\displaystyle P^{{\rm B}}_{\eta}[\neg 0]\longrightarrow P^{{\rm B}}_{{\sc acbs}}[0]=P^{{\rm B}}_{\eta}[0]$$ (32) $$\displaystyle\longrightarrow\gamma_{\tilde{\tau}}^{2}$$ $$\displaystyle=$$ $$\displaystyle{\rm e}^{-\epsilon^{2}\tilde{\tau}}=1+\frac{1}{\mu}(1-\sqrt{2{\rm e}^{\mu(1-\eta)}-1})\mbox{.}$$ (33) As expected, we see that the maximum coupling time will be smaller for the acbs than for the cbs, $\gamma_{\tau}^{2}<\gamma_{\tilde{\tau}}^{2}<\eta$. To calculate the probability of success we have to count the events in which Eve extracts a signal while leaving some non-vacuum contribution to Bob. We also have to take into account that Eve only gets the bit value with certainty when she manages to extract two single photons; otherwise she will only get it in half of the cases. The success probability for the acbs attack is then given by $$\displaystyle p^{{\rm succ}}_{{\sc acbs}}=\int_{0}^{\tilde{\tau}}p_{1}(t_{1})\left[\int_{t_{1}}^{\tilde{\tau}}p_{11}(t_{2}|t_{1})p_{c}^{\neg 0}(\gamma_{t_{2}}^{2}\mu)dt_{2}+\right.$$ $$\displaystyle\left.+\frac{1}{2}p_{10}(\tilde{\tau}|t_{1})p_{c}^{\neg 0}(\gamma_{\tilde{\tau}}^{2}\mu)\right]dt_{1}=1-{\rm e}^{-\mu}\left[{\rm e}^{\gamma_{\tilde{\tau}}^{2}\mu}+\right.$$ $$\displaystyle\left.+\frac{\mu}{2}(1-\gamma_{\tilde{\tau}}^{2})(1+{\rm e}^{\gamma_{\tilde{\tau}}^{2}\mu})+\frac{\mu^{2}}{2}(1-\gamma_{\tilde{\tau}}^{2})^{2}\right]\mbox{,}$$ (34) where $p_{c}^{\neg 0}(\mu)$ is the probability of having at least one photon in a coherent state with mean photon number $\mu$. By inverting Eq.
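Eqs. (31) and (33) can be cross-checked numerically: the coupling of Eq. (33) must reproduce Bob's lossy-channel vacuum probability ${\rm e}^{-\mu\eta}$ and satisfy the ordering $\gamma_{\tau}^{2}<\gamma_{\tilde{\tau}}^{2}<\eta$ stated above:

```python
import math

def gamma2_cbs(mu, eta):
    """Eq. (19): coupling for the single-photon cbs."""
    return (1.0 + mu - math.exp(mu * (1.0 - eta))) / mu

def gamma2_acbs(mu, eta):
    """Eq. (33): coupling fixed so that removing up to two photons
    still reproduces Bob's vacuum probability."""
    return 1.0 + (1.0 - math.sqrt(2.0 * math.exp(mu * (1.0 - eta)) - 1.0)) / mu

def p_bob_vacuum_acbs(mu, g2):
    """Eq. (31) as a function of g2 = gamma_tilde_tau^2."""
    return math.exp(-mu) * (1.0 + mu * (1.0 - g2)
                            + 0.5 * (mu * (1.0 - g2)) ** 2)

mu, eta = 0.2, 0.4  # example values with eta above both thresholds
```

Since the acbs removes a second photon, it reaches the same Bob-side statistics with a shorter coupling time, hence the larger $\gamma_{\tilde{\tau}}^{2}$.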
(33) we find that the transmissivity ‘mimicked’ (in the sense of Eq. (32)) by the acbs attack is $$\eta_{{\sc acbs}}=1-\frac{1}{\mu}\ln(1+\mu(1-\gamma_{\tilde{\tau}}^{2})+\frac{\mu^{2}}{2}(1-\gamma_{\tilde{\tau}}^{2})^{2}).$$ (35) Since this is an increasing function of $\gamma_{\tilde{\tau}}^{2}$, we find that the minimum transmissivity that can be mimicked by the acbs without extra blocking (for $\tilde{\tau}\rightarrow\infty$) is $$\eta_{{\sc acbs}}^{{\rm min}}=1-\frac{1}{\mu}\ln(1+\mu+\frac{\mu^{2}}{2})\approx\frac{1}{6}\mu^{2}+O(\mu^{3}).$$ (36) In this limit of very high losses Eve can extract two excitations from all signals and still meet Bob’s expectations. Therefore, by measuring each photon in a different basis, she will be able to acquire the full key even in the no-storage scenario ($f_{{\sc acbs}}=\frac{p^{{\rm succ}}_{{\sc acbs}}}{P^{{\rm B}}_{{\sc acbs}}[\neg 0]}=1$). If the losses are still higher ($\eta\leq\eta_{{\sc acbs}}^{{\rm min}}$) Eve has to block some signals with probability $$p^{{\rm block}}_{{\sc acbs}}=\frac{1-{\rm e}^{-\mu\eta}}{1-{\rm e}^{-\mu\eta_{{\sc acbs}}^{{\rm min}}}}.$$ (37) The performance quotient is now $q_{{\sc acbs}}=\frac{f_{{\sc acbs}}}{f_{{\sc dbs}}}$. In Fig. 5 we can see the values of this ratio as a function of the transmissivity of the channel and the mean photon number. Notice that in this case the performance quotient is larger than unity for all values of the mean photon number, which means that the acbs is more efficient than the dbs. On the other hand, for low mean photon numbers ($\mu<1$), higher losses are required to achieve the same performance as the dcbs (see also Figs. 7 and 8). We see that the acbs can still be substantially better than the dbs. For the values $\mu=0.1$ and $\eta=0.1$ the acbs attack provides Eve with a fraction of the key $q_{{\sc acbs}}=1.3$ times bigger than the dbs attack. In Fig.
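The minimum transmissivity of Eq. (36) and its small-$\mu$ behavior can be checked numerically; expanding $\ln(1+\mu+\mu^{2}/2)=\mu-\mu^{3}/6+O(\mu^{4})$ gives the leading term $\mu^{2}/6$, much smaller than the single-photon threshold of Eq. (23):

```python
import math

def eta_min_acbs(mu):
    """Eq. (36): minimum transmissivity the acbs can mimic without
    extra blocking (tilde_tau -> infinity)."""
    return 1.0 - math.log(1.0 + mu + 0.5 * mu * mu) / mu

def eta_min_cbs(mu):
    """Eq. (23), for comparison."""
    return 1.0 - math.log(1.0 + mu) / mu

mu = 0.01
```

The assertions confirm both the $\mu^{2}/6$ leading term and that the acbs can mimic much higher losses than the cbs before any blocking is needed.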
6 we can see the behavior of the performance quotient as a function of the mean photon number for a fixed value of the losses. We observe that, contrary to the other attacks, for a fixed transmissivity of the channel $\eta$, the efficiency of the acbs over the dbs can increase with $\mu$. For example, if the mean photon number of the previous example is increased to $\mu=1.1$, keeping the transmissivity at $\eta=0.1$, the efficiency factor grows to $q_{{\sc acbs}}=2.48$. In Fig. 7 we represent which of the studied attacks is most effective in the no-storage scenario for different values of the parameters $\mu$ and $\eta$. As a summary, in Fig. 8 we show the key fraction as a function of the losses for four different values of the mean photon number and for the different attacks studied in this paper. For comparison, the results for the pns in the storage and no-storage scenarios are also plotted. Once again, we notice that the cbs and acbs provide real alternatives to the, at present, unfeasible pns and the ineffective bs. IV Photon statistics The task of an eavesdropper is to acquire knowledge about the secret key that Alice and Bob want to share. But the important point is that she must do this without leaving any trace in the signal that could indicate her presence to Alice and Bob. Eve’s eavesdropping capabilities are therefore strongly dependent on the capabilities of Alice and Bob to prepare and analyze their signals. In this work we have assumed that Alice is not able to prepare single photon signals, using instead very weak coherent states. Moreover, we have assumed that Bob’s capabilities are limited as well by a detection setup that gives the same outcome for single photons as for multiphoton signals. In this situation Eve has only to forward signals to Bob in such a way that the expected number of non-vacuum signals is the same as for the lossy channel, as expressed in Eq. (17).
In this section we will study what happens in situations where Bob’s capabilities are not so poor. In particular we consider the realistic situation in which Bob’s analyzer is a polarizing beam splitter (in two possible orientations according to the basis measured) with a photon detector in each of its arms. We will assume that these detectors do not have photon number resolution, so that they only have two possible outcomes corresponding to non-vacuum (‘click’) and vacuum (no ‘click’) impinging signals. When Bob’s and Alice’s bases coincide, only one of Bob’s detectors can click and Eve remains safe. But in the case that Bob measures in a basis different from the encoding basis, the multiphoton part of the signal can lead to simultaneous clicks in both detectors. Here is where Eve can reveal her presence depending on what attack she chooses. Bob is expecting to receive a weak coherent state of a given amplitude (determined by the amplitude of the selected coherent state and by the channel losses) and therefore an expected number of these double clicks. The probability of these double clicks without Eve’s intervention (or under the bs attack) is given by $$p^{dc}_{{\sc bs}}=\frac{1}{2}(1-{\rm e}^{-\frac{\mu\eta}{2}})^{2}\mbox{,}$$ (38) where the factor $\frac{1}{2}$ accounts for the probability that Alice and Bob use different bases.
When Eve tries to eavesdrop using the cbs attack without extra blocking (i.e. $\eta>\eta_{{\sc cbs}}^{{\rm min}}$) the probability of double counts is $$\displaystyle p^{dc}_{{\sc cbs}}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{2}p_{0}(\tau)(1-{\rm e}^{-\frac{\gamma_{\tau}^{2}\mu}{2}})^{2}+\frac{1}{2}\int_{0}^{\tau}p_{1}(t)(1-{\rm e}^{-\frac{\gamma_{t}^{2}\mu}{2}})^{2}dt$$ (39) $$\displaystyle=$$ $$\displaystyle\frac{1}{2}-\frac{{\rm e}^{-\mu}}{2}(4{\rm e}^{\frac{\mu}{2}}-1-\mu(1-\gamma_{\tau}^{2})-2{\rm e}^{\frac{\gamma_{\tau}^{2}\mu}{2}})\mbox{.}$$ By using (19) to express $\gamma_{\tau}$ in terms of the channel losses, and taking into account the blocking probability $p^{{\rm block}}_{{\sc cbs}}$ (24) for $\eta<\eta_{{\sc cbs}}^{{\rm min}}$, we plot in Fig. 9 the ratio of both probabilities $q^{dc}_{{\sc cbs}}=\frac{p^{dc}_{{\sc cbs}}}{p^{dc}_{{\sc bs}}}$ as a function of the mean photon number and channel transmissivity. We see that the cbs increases the probability of double-clicks relative to the lossy channel. The reason for this increase is that the cbs only takes one photon out while several signal photons might be lost in the lossy channel. For increasing transmissivities the differences tend to vanish. So, for example $\mu=\{0.1,0.01\}$ and $\eta=0.3$ lead to $q_{dc}=\{1.1,1.02\}$. But for higher losses the ratio can be quite high, e.g. for $\mu=\{0.1,0.01\}$ and $\eta=0.1$, $q_{dc}=\{3.45,1.2\}$. Inclusion of the effect of dark counts in Bob’s detectors in $p^{dc}_{{\sc cbs}}$ shows only a slight reduction of the discrepancies for reasonable dark count probabilities ($p_{\mbox{D}}\sim 10^{-5}$). Even though this is a definite handicap of the cbs, it might be of only relative importance in practice. The probability of double counts is very small ($p^{dc}\sim\frac{1}{8}(\eta\mu)^{2}$) while the statistical fluctuations are large; therefore the number of transmitted signals needed to appreciate Eve’s intervention might be exorbitant. As shown in Fig.
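The closed form of Eq. (39) can be cross-checked against its defining integral by substituting $s=\gamma_{t}^{2}$, so that $p_{1}(t)\,dt=\mu\,{\rm e}^{-\mu(1-s)}\,ds$; note that each wrong-basis detector sees a mean photon number $\gamma_{t}^{2}\mu/2$, and that the $\tau\to 0$ (no-attack) limit must reduce to $\frac{1}{2}(1-{\rm e}^{-\mu/2})^{2}$, which fixes the exponent of the last term of the closed form to $+\gamma_{\tau}^{2}\mu/2$:

```python
import math

def p_dc_cbs_closed(mu, g2):
    """Closed form of the cbs double-click probability, Eq. (39)."""
    return 0.5 - 0.5 * math.exp(-mu) * (
        4.0 * math.exp(0.5 * mu) - 1.0
        - mu * (1.0 - g2) - 2.0 * math.exp(0.5 * g2 * mu))

def p_dc_cbs_quad(mu, g2, n=20000):
    """Same quantity from its definition, integrating over the
    first-jump time via the substitution s = gamma_t^2."""
    def wrong_basis_dc(s):  # both detectors click, mean s*mu/2 each
        return (1.0 - math.exp(-0.5 * s * mu)) ** 2
    no_jump = math.exp(-mu * (1.0 - g2)) * wrong_basis_dc(g2)
    h = (1.0 - g2) / n
    acc = 0.0
    for i in range(n + 1):  # trapezoid rule on s in [g2, 1]
        s = g2 + i * h
        w = 0.5 if i in (0, n) else 1.0
        acc += w * mu * math.exp(-mu * (1.0 - s)) * wrong_basis_dc(s)
    return 0.5 * (no_jump + acc * h)

mu, g2 = 0.2, 0.3  # example values
```

The quadrature agreement and the $\gamma_{\tau}^{2}\to 1$ limit are asserted below.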
11, for some parameter regimes one can find a good compromise between the probability of success and the double-count rate. Moreover, as we shall see next, the disparity in the number of double-clicks can be further decreased if Eve uses the two-photon splitting adapted to the no-storage conditions (acbs). The double-click probability in this case is $$\displaystyle p^{dc}_{{\sc acbs}}$$ $$\displaystyle=$$ $$\displaystyle 1-{\rm e}^{-\mu}\left(8{\rm e}^{\frac{\mu}{2}}-1-\mu(1-\gamma_{\tau}^{2})-\frac{1}{2}\mu^{2}(1-\gamma_{\tau}^{2})^{2}+\right.$$ (40) $$\displaystyle\left.-2{\rm e}^{-\frac{\gamma_{\tau}^{2}\mu}{2}}(3+\mu(1-\gamma_{\tau}^{2}))\right)\mbox{.}$$ In Fig. 10 we can see the ratio between this probability and the corresponding probability for the lossy channel, $q^{dc}_{{\sc acbs}}=\frac{p^{dc}_{{\sc acbs}}}{p^{dc}_{{\sc bs}}}$, taking into account the blocking for $\eta<\eta_{{\sc acbs}}^{{\rm min}}$. In Fig. 11 the same quantity is compared to the performance quotient. We see that for the ${\sc acbs}$ attack the double click probability takes nearly the same values as for the lossy channel ($\mu=0.1$ and $\eta=0.1$ give $q_{dc}=1.05$) while still providing larger key fractions than the bs attack. V Mixed strategies In the previous sections Eve’s attacks were characterized by only one parameter, $\gamma_{\tau}$. Once the type of attack was chosen (cbs, acbs) she could only vary the coupling time $\tau$ to tune her attack. In this section we will study the situation in which Eve can use mixed strategies, i.e. for each signal she can choose a different coupling time. In this way, Eve has the freedom to choose a probability distribution $\{p_{i}\}$ according to which she will apply a coupling time $\tau_{i}$ (i.e. $\gamma_{\tau_{i}}$). A mixed strategy could in principle lead to a lowering of the double-click probability without sacrificing too much probability of success. Unfortunately we will see that this is not the case for the cbs and acbs.
We start by considering the cbs attack. In this case the number of non-vacuum signals received by Bob (18) is linearly dependent on $\gamma_{\tau_{i}}^{2}$; therefore the mixed strategy has to be such that the average value of $\gamma_{\tau_{i}}^{2}$ is equal to the pure strategy value fixed in Eq. (19), i.e. $$\displaystyle\bar{P}^{{\rm B}}_{{\sc cbs}}[0]=\sum_{i}p_{i}P^{{\rm B}}_{{\sc cbs}}[0]_{i}$$ (41) $$\displaystyle=\sum_{i}p_{i}{\rm e}^{-\mu}(1+\mu(1-\gamma_{\tau_{i}}^{2}))=P^{{\rm B}}_{{\sc cbs}}[0]$$ $$\displaystyle\longrightarrow$$ $$\displaystyle\sum_{i}p_{i}\gamma_{\tau_{i}}^{2}=\gamma_{\tau}^{2}.$$ (42) Now we want to know what is the behavior of the probability of success and of the double-clicks when using mixed strategies. From Jensen’s inequality [22] we know that $$f^{\prime\prime}(x)\gtrless 0\Longleftrightarrow\sum_{i}p_{i}f(x_{i})\gtrless f(\sum_{i}p_{i}x_{i}).$$ (43) Since Eq. (42) assures that Bob gets the expected number of clicks, we find, by using Eq. (21), that the mixed strategy always leads to a smaller probability of success than a pure one $$\displaystyle\bar{p}^{{\rm succ}}_{{\sc cbs}}$$ $$\displaystyle=$$ $$\displaystyle\sum_{i}p_{i}p^{{\rm succ}}_{{\sc cbs}}(\gamma_{\tau_{i}}^{2})\mbox{ and }\frac{d^{2}p^{{\rm succ}}_{{\sc cbs}}}{d(\gamma_{\tau}^{2})^{2}}<0$$ (44) $$\displaystyle\Longleftrightarrow\bar{p}^{{\rm succ}}_{{\sc cbs}}<p^{{\rm succ}}_{{\sc cbs}}(\gamma_{\tau}^{2})\mbox{.}$$ Similarly one can easily see that the probability of double-clicks in the cbs, $p^{dc}_{{\sc cbs}}$ (39), increases when mixing strategies. Since we already saw that the double-click probability is higher than for the lossy channel, we arrive at the conclusion that using mixed strategies does not offer any advantages to the cbs attack. The same happens in the two-photon conditional beam splitting (acbs).
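The concavity argument of Eqs. (43)-(44) can be illustrated with a two-point mixture that has the same mean coupling $\gamma_{\tau}^{2}$, so that Bob's expectations (Eq. (42)) are untouched while Eve's success probability drops:

```python
import math

def p_succ_cbs(mu, g2):
    """Eq. (21) as a function of g2 = gamma_tau^2 (concave in g2)."""
    return 1.0 - mu * math.exp(-mu) * (1.0 - g2) - math.exp(-(1.0 - g2) * mu)

def p_bob_vacuum(mu, g2):
    """Eq. (18); linear in g2, so any mixture with the same mean
    leaves Bob's statistics unchanged."""
    return math.exp(-mu) * (1.0 + mu * (1.0 - g2))

def mixed_p_succ(mu, pairs):
    """Mixed strategy: coupling g2_i applied with probability p_i."""
    return sum(p * p_succ_cbs(mu, g2) for p, g2 in pairs)

mu = 0.2
pairs = [(0.5, 0.3), (0.5, 0.7)]   # mean g2 = 0.5
pure = p_succ_cbs(mu, 0.5)
mixed = mixed_p_succ(mu, pairs)
```

Because $p^{\rm succ}_{\sc cbs}$ is concave in $\gamma_{\tau}^{2}$ (its second derivative is $-\mu^{2}{\rm e}^{-(1-\gamma_{\tau}^{2})\mu}<0$), Jensen's inequality forces the mixed value below the pure one.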
To prove this, one has to write the probability of success and of double-clicks as a function of Bob’s probability of receiving a non-vacuum signal, and check for its concavity/convexity. VI CBS with finite beam splitters In this section we will see how to implement the cbs attack with finite reflectivity beam splitters. The beam splitters now have a finite reflectivity $|r|^{2}$ (and transmissivity $|t|^{2}=1-|r|^{2}$). Without any loss of generality we will assume that $r$ and $t$ for the signal are real. We also assume that they are independent of polarization. The initial state can be written as $\left|\phi_{0}\right>=\left|\alpha;\beta\right>\left|0;0\right>$ where the first ket is Alice’s coherent state and the second is Eve’s mode. After the first beam splitter the state of the system is $$\left|\phi_{1}\right>=\left|t\alpha;t\beta\right>\left|r\alpha;r\beta\right>\mbox{.}$$ (45) Notice that Eve’s and the signal modes are still in a separable state. This means that Bob’s state will be the same independently of what Eve does to her modes. For coherent input states, Bob’s state will depend only on the number of beam splitters put in by Eve during the attack. Eqs. (13) and (15) reflect this situation in the infinitesimal beam splitting case. This simplified the calculations leading to the results in this paper, but the quantum jump method described in Sec. II can be used to describe the conditional beam splitting for any kind of input as long as Eve’s modes are initially in the vacuum state [1]. After the first beam splitter Eve will try to detect the presence of photons in her modes. With probability $p_{1}^{0}=p_{c}^{0}(r^{2}(|\alpha|^{2}+|\beta|^{2}))={\rm e}^{-\mu r^{2}}$ she will detect no photon. If this happens she will split the signal with a second beam splitter. She will repeat this process until she detects some photons in her modes.
After the $m$-th beam splitting the state of the system is $$\left|\phi_{m}\right>=\left|t^{m}\alpha;t^{m}\beta\right>\left|rt^{m-1}\alpha;rt^{m-1}\beta\right>\mbox{.}$$ (46) To keep the notation simple, we are omitting here the tested vacuum modes of earlier beam splitters. The total probability of reaching this state is $$\displaystyle p_{m}$$ $$\displaystyle=$$ $$\displaystyle\prod_{i=1}^{m-1}p_{c}^{0}(\mu r^{2}t^{2(i-1)})=\prod_{i=1}^{m-1}{\rm e}^{-\mu r^{2}t^{2(i-1)}}=$$ (47) $$\displaystyle=$$ $$\displaystyle{\rm e}^{-\mu r^{2}\sum_{i=1}^{m-1}t^{2(i-1)}}={\rm e}^{-\mu(1-t^{2(m-1)})}.$$ Therefore the total probability that Eve detects some signal after the $m$-th beam splitter is $$\displaystyle p_{m}^{\neg 0}$$ $$\displaystyle=$$ $$\displaystyle p_{m}p_{c}^{\neg 0}(\mu r^{2}t^{2(m-1)})$$ (48) $$\displaystyle=$$ $$\displaystyle{\rm e}^{-\mu(1-t^{2(m-1)})}(1-{\rm e}^{-\mu r^{2}t^{2(m-1)}})\mbox{.}$$ If this event occurs, the splitting stops and the signal is sent to Bob through the lossless channel. Otherwise Eve keeps on adding beam splitters until she reaches a maximum number $N$ of attempts. After the conditional beam splitting attack with finite beam splitters (cbsf), Bob will therefore receive a coherent state $\left|\phi^{bob}_{m}\right>=\left|t^{m}\alpha\right>$ with probability $p_{m}^{\neg 0}$ for $m=1\ldots N-1$ or $p_{N}$ for $m=N$.
From here one can calculate the probability of vacuum signals arriving at Bob’s site in the cbsf, $$\displaystyle P^{{\rm B}}_{{\sc cbsf}}[0]$$ $$\displaystyle=$$ $$\displaystyle p_{N}p_{c}^{0}(\mu t^{2N})+\sum_{m=1}^{N-1}p_{m}^{\neg 0}p_{c}^{0}(\mu t^{2m})$$ (49) $$\displaystyle=$$ $$\displaystyle{\rm e}^{-\mu}\left(1-N+\sum_{n=0}^{N-1}{\rm e}^{\mu r^{2}t^{2n}}\right)\mbox{.}$$ The probability of success of the cbsf attack is $$\displaystyle p_{{\sc cbsf}}^{{\rm succ}}$$ $$\displaystyle=$$ $$\displaystyle\sum_{m=1}^{N}p_{m}^{\neg 0}p_{c}^{\neg 0}(\mu t^{2m})=\sum_{m=1}^{N}p_{m}^{\neg 0}(1-{\rm e}^{-\mu t^{2m}})$$ (50) $$\displaystyle=$$ $$\displaystyle 1+{\rm e}^{-\mu}\left(N-{\rm e}^{\mu t^{2N}}-\sum_{n=0}^{N-1}{\rm e}^{\mu r^{2}t^{2n}}\right)$$ Notice that Eqs. (47) and (48) and the derived quantities are in agreement up to first order in $r^{2}$ with the corresponding probabilities for the infinitesimal cbs derived with the quantum jump method, i.e. Eqs. (14) and (16) (with $r^{2}=\frac{\tau}{N}\epsilon^{2}$). In order to match Bob’s probability of detecting a non-vacuum signal with the result he would expect from the lossy channel, Eve now has two free parameters: the maximum number $N$ of beam splitters used and their transmission amplitude $t$. From the infinitesimal cbs results we know that for small reflection coefficients (i.e. $t\sim 1$), $t^{2N}\approx\gamma_{\tau}^{2}$, taking $\gamma_{\tau}^{2}$ from Eq. (19). Of course, for a finite $N$ this approximation will not hold when we are in or near the region $\eta<\eta_{{\sc cbs}}^{{\rm min}}$, which corresponds to $\tau\rightarrow\infty$. The reason for this is that now we are dealing with finite beam splitters and accordingly, for any given $N$, we can ‘mimic’ a lossy channel with arbitrarily high losses. For the same reason we are now not forced to block any signals as done for the cbs in the high losses regime.
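The closed forms of Eqs. (49) and (50) can be verified against the direct sum over the round $m$ at which Eve's detectors first fire:

```python
import math

def cbsf_direct(mu, t2, N):
    """Direct bookkeeping of the cbsf attack, summing over the stopping
    round m with the probabilities of Eqs. (47)-(48); t2 = t^2.
    Returns (P_B[0], p_succ)."""
    r2 = 1.0 - t2
    p_bob_vac = p_succ = 0.0
    for m in range(1, N + 1):
        p_m = math.exp(-mu * (1.0 - t2 ** (m - 1)))               # Eq. (47)
        p_det = p_m * (1.0 - math.exp(-mu * r2 * t2 ** (m - 1)))  # Eq. (48)
        p_succ += p_det * (1.0 - math.exp(-mu * t2 ** m))
        if m < N:
            p_bob_vac += p_det * math.exp(-mu * t2 ** m)
    # after round N the signal is forwarded to Bob regardless
    p_bob_vac += math.exp(-mu * (1.0 - t2 ** (N - 1))) * math.exp(-mu * t2 ** N)
    return p_bob_vac, p_succ

def cbsf_closed(mu, t2, N):
    """Closed forms, Eqs. (49)-(50)."""
    r2 = 1.0 - t2
    s = sum(math.exp(mu * r2 * t2 ** n) for n in range(N))
    return (math.exp(-mu) * (1.0 - N + s),
            1.0 + math.exp(-mu) * (N - math.exp(mu * t2 ** N) - s))

direct = cbsf_direct(0.3, 0.8, 5)
closed = cbsf_closed(0.3, 0.8, 5)
```

The agreement is exact (up to floating point), since the sum over detection rounds telescopes into the quoted expressions.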
To get the quantitative results presented in the rest of this section we have to find numerically the condition on Eve’s free parameters ($t$ and $N$) such that Bob’s expectations on the number of non-vacuum signals are fulfilled. For a given $N$ we use Newton’s method to find the value of $t$ for which $P^{{\rm B}}_{{\sc cbsf}}[0]$ (49) equals $P^{{\rm B}}_{\eta}[0]$ (12), taking a starting value of $t_{o}=\eta^{\frac{1}{2N}}$. In Figs. 12 and 13 we have plotted the performance quotient $q_{{\sc cbsf}}=\frac{f_{{\sc cbsf}}}{f_{{\sc bs}}}$ as a function of $\mu$ and $\eta$ for an attack using a maximum of two and ten beam splitters, respectively. We notice that doing ${\sc cbsf}$ with only two beam splitters, Eve already obtains a much bigger fraction of the key than with the standard beam splitting attack, e.g. for $\mu=0.1$ and $\eta=0.1$ the key she obtains is $q_{{\sc cbsf}}^{N=2}=2.5$ times longer. With ten beam splitters she almost reaches the infinitesimal result (compare with Fig. 3). For $\mu=0.1$ and $\eta=0.1$ the key she obtains is $q_{{\sc cbsf}}^{N=10}=4.7$ times longer. In Fig. 14 we can compare the key fraction obtained with the bs, cbs and cbsf with different numbers of beam splitters. The number of double-clicks in Bob’s detectors when he measures in the wrong basis can be calculated from $$p^{dc}_{{\sc cbsf}}=p_{N}p_{c}^{\neg 0}(\frac{1}{2}\mu t^{2N})^{2}+\sum_{m=1}^{N-1}p_{m}^{\neg 0}p_{c}^{\neg 0}(\frac{1}{2}\mu t^{2m})^{2}.$$ (51) The ratio between the double-click probabilities of the bs and cbsf for various values of $N$ is plotted in Fig. 11 together with the corresponding performance quotients. This figure shows that, depending on the parameter regime, an attack with a small number of finite beam splitters might be more suitable than the infinitesimal cbs in that the number of double clicks is much closer to its lossy channel values.
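The Newton search described above can be sketched as follows, working in the variable $t^{2}$ and assuming the lossy-channel target $P^{{\rm B}}_{\eta}[0]={\rm e}^{-\mu\eta}$ from Eq. (12):

```python
import math

def p_bob_vac_cbsf(mu, t2, N):
    """Eq. (49) as a function of t2 = t^2 (note r^2 = 1 - t2)."""
    return math.exp(-mu) * (1.0 - N + sum(
        math.exp(mu * (1.0 - t2) * t2 ** n) for n in range(N)))

def solve_t2(mu, eta, N, tol=1e-12):
    """Newton iteration for the transmissivity t^2 such that Bob's
    vacuum probability matches the lossy channel, starting from
    t_0 = eta^{1/(2N)} as in the text."""
    t2 = eta ** (1.0 / N)          # (eta^{1/(2N)})^2
    target = math.exp(-mu * eta)
    for _ in range(100):
        f = p_bob_vac_cbsf(mu, t2, N) - target
        # analytic derivative of Eq. (49) with respect to t2
        df = math.exp(-mu) * sum(
            math.exp(mu * (1.0 - t2) * t2 ** n)
            * mu * (n * t2 ** (n - 1) - (n + 1) * t2 ** n)
            for n in range(N))
        step = f / df
        t2 -= step
        if abs(step) < tol:
            break
    return t2

t2 = solve_t2(0.1, 0.1, 2)  # example: mu = 0.1, eta = 0.1, N = 2
```

The iteration converges in a few steps for the parameter ranges used in this section, since $P^{{\rm B}}_{{\sc cbsf}}[0]$ is smooth and monotone in $t^{2}$ on $(0,1)$.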
VII Conclusion In this paper we have introduced a novel attack on weak coherent pulse quantum key distribution in lossy channels. The conditional beam splitting takes advantage of the multiphoton component of the transmitted signals to extract information on the encoded bit. This task is accomplished optimally using the photon number splitting attack. However, the implementation of the pns attack entails a quantum non-demolition measurement, which is unattainable, at least at the single photon level, with current technology. Until now the technologically possible alternative to the pns has been the simple beam splitting attack [23]. For high losses (i.e. long transmission distances) the bs attack turns out to be very ineffective. Here we have presented the conditional beam splitting attack, which also requires only linear optical elements and is therefore feasible with present technology, but is much more efficient than the conventional beam splitting attack. The cbs attack thus substantially shortens the gap in performance between the ideal and practical eavesdropping attacks. This is of great importance if one considers that the eavesdropper’s success in a quantum key distribution attack is dictated by the technology at the moment of the signal transmission. In contrast with the case of classical cryptographic protocols, no future technologies can help unveil present key exchanges. Starting from the simplest scenario, in which Eve is capable of storing her signals until the encoding basis is announced, we have moved to the more realistic situation in which Eve has no storing capabilities. For this situation an adapted cbs attack, based on the extraction of two single photons, has been proven to be advantageous for some relevant parameter regimes.
Numerical results for the implementation of the cbs with finite reflectivity beam splitters show that using only two beam splitters one can easily double the efficiency of the conventional beam splitting, and that the infinitesimal cbs results are reached with the use of a few beam splitters. The photon statistics at Bob’s detectors have been studied and shown to be a matter of concern in this type of attack. However, we argue that, if handled with care, this drawback does not disqualify the cbs attacks. We believe that further elaborations of this basic idea can lead to attacks which are specialized for certain parameter regimes and other protocols. Acknowledgements S. M. B. thanks the Royal Society of Edinburgh and the Scottish Executive Education and Lifelong Learning Department for financial support. J.C. acknowledges the Academy of Finland (project 4336) and the European Union IST EQUIP Programme for financial support. References [1] J. Calsamiglia, S. M. Barnett, N. Lütkenhaus and K.-A. Suominen, Phys. Rev. A (in press), and quant-ph/0106086. [2] G. S. Vernam, Journal of the American Institute of Electrical Engineers 45, 109 (1926). [3] S. Wiesner, Sigact News 15, 78 (1983). [4] C. H. Bennett and G. Brassard, in Proceedings of IEEE International Conference on Computers, Systems, and Signal Processing, Bangalore, India (IEEE, New York, 1984), pp. 175–179. [5] C. H. Bennett, G. Brassard, C. Crépeau, and U. M. Maurer, IEEE Trans. Inf. Theory 41, 1915 (1995). [6] D. Mayers, Unconditional security in quantum cryptography, Journal of the ACM, 2001 (to appear); also available as quant-ph/9802025. [7] P. W. Shor and J. Preskill, Phys. Rev. Lett. 85, 441 (2000). [8] E. Biham et al., in Proceedings of the Thirty Second Annual ACM Symposium on Theory of Computing, New York, USA (ACM, New York, 2000), pp. 715–724, and quant-ph/9912053. [9] H.-K. Lo and H. F. Chau, Science 283, 2050 (1999). [10] C. H. Bennett et al., J. Cryptology 5, 3 (1992). [11] B. Huttner, N. Imoto, N.
Gisin, and T. Mor, Phys. Rev. A 51, 1863 (1995). [12] G. Brassard, N. Lütkenhaus, T. Mor, and B. Sanders, Phys. Rev. Lett. 85, 1330 (2000). [13] N. Lütkenhaus, Phys. Rev. A 61, 052304 (2000). [14] H. Inamori, N. Lütkenhaus, and D. Mayers, quant-ph/0107017. [15] Naturally, this requires Eve to be able to perform a qnd measurement in order to determine the presence of a single photon without absorbing it. This is not realistic with current technology. Allowing Eve to do this, however, serves to demonstrate the full potential of the cbs attack. [16] K. Mølmer, Y. Castin and J. Dalibard, J. Opt. Soc. Am. B 10, 524 (1993). [17] M. B. Plenio and P. L. Knight, Rev. Mod. Phys. 70, 101 (1998). [18] H. Carmichael, An open systems approach to quantum optics (Springer-Verlag, Berlin, 1993). [19] S. M. Barnett and P. M. Radmore, Methods in theoretical quantum optics (Oxford University Press, Oxford, England, 1997), p. 236. [20] For simplicity we will not consider in this paper the effects of any imperfections in the channel used by Eve to send the photons to Bob. In any case it is reasonable to assume that Eve can provide a channel with lower losses than the one used by Alice and Bob. The effect of considering Eve’s losses is equivalent to the effect of Bob’s detector efficiency. [21] Eve can use a cascade of beam splitters in order to distribute the photons she receives so that the probability of having two photon detections is infinitesimally small. [22] See e.g. W. P. Ziemer, Weakly differentiable functions: Sobolev spaces and functions of bounded variation (Springer, New York, 1989). [23] Other technologically possible alternatives become available if quantum bit errors are present. See e.g. S. Félix, N. Gisin, A. Stefanov and H. Zbinden, quant-ph/0106086.
Origins of Face-to-face Interaction with Kin in US Cities. Jericho McLeod ([email protected]), Unchitta Kan ([email protected]), and Eduardo López ([email protected]). Department of Computational and Data Sciences, George Mason University, 4400 University Dr, Fairfax, Virginia 22030, United States. These authors contributed equally to this work. Abstract People interact face-to-face on a frequent basis if (i) they live nearby and (ii) make the choice to meet. The first constitutes an availability of social ties; the second a propensity to interact with those ties. Despite being distinct social processes, most large-scale human interaction studies overlook these separate influences. Here, we study trends of interaction, availability, and propensity across US cities for a critical, abundant, and understudied type of social tie: extended family that live locally in separate households. We observe a systematic decline in interactions as a function of city population, which we attribute to decreased non-coresident local family availability. In contrast, interaction propensity and duration are either independent of or increase with city population. The large-scale patterns of availability and interaction propensity we discover, derived from analyzing the American Time Use Survey and Pew Social Trends Survey data, unveil previously unknown effects on several social processes such as the effectiveness of pandemic-related social interventions, drivers affecting residential choice, and the ability of kin to provide care to family. keywords: Social mixing, kinship, non-household family, family proximity, epidemic modeling 1 Introduction A defining feature of people’s lives is the pattern of how they socialize face-to-face. 
These patterns are pivotal to people’s economic productivity Bettencourt , the social support they receive for companionship or needs-based care DunbarSpoor ; Wellman ; RobertsPsyc , a sense of belonging to their local community SCANNELL20101 ; LEWICKA , and even the epidemiology of infectious diseases in their cities of residence anderson1991infectious ; VespignaniReview . Such patterns of interaction arise from two factors: the mere presence of specific social ties living near a person, regardless of interaction, and the temporal dynamics of actually engaging with those present ties in one or more activities. Here, we refer to the first of these factors as the availability of ties, and the second as the propensity to interact with available ties. Once availability and propensity come together, they generate observed interaction, ultimately the variable that matters for many purposes. For a given geographic location (e.g., a city), availability and propensity can be thought of as average per-capita quantities characterizing the location. Furthermore, it is in principle possible for friends, family, co-workers, or other types of ties to be more available on average in one location than another, as well as for people to have different average local propensities to engage with each type of tie. If indeed such differences exist, the patterns of interaction across locations will change accordingly. These effects, as we explain in this paper, can have consequential implications. Here, to both show the detectable impacts of availability and propensity, and to argue in favor of the essential value of tracking them separately, we study the city-specific patterns of people’s face-to-face interaction with extended family ties in the US. We call these ties non-coresident local family (nclf), which are family members by blood or marriage that live in the same city but not the same household. 
The relevance of cities in the study of interaction is clear: cities constitute the settings within which people conduct the overwhelming majority of their work and social lives, which also means that the consequences of face-to-face interaction are most palpable within those cities. These consequences, mentioned at the outset, encompass the economic Bettencourt ; Altonji , social DunbarSpoor ; Wellman ; RobertsPsyc ; COMPTON2014 ; taylor1986patterns ; SCANNELL20101 , and epidemiological VespignaniReview . The focus on extended family, on the other hand, can be explained by their preferential status among social ties DunbarSpoor ; Wellman ; personal-networks and overwhelming abundance (at least $75\%$ of people in the US report having nclf Choi ; Spring ). Compared to other types of social ties typically considered in surveys, nclf ties are a particularly interesting and impactful subject of study for their surprising neglect in the literature in the face of solid evidence of their critical role Furstenberg ; Bengtson and, most urgently, the important revelation that they remained both active Feehan and very risky Cheng ; Koh during the COVID-19 pandemic. Therefore, in order to test our theory, in this article we empirically and theoretically study the large-scale patterns of face-to-face interaction with nclf of people living in US cities. With the aid of the American Time Use Survey (ATUS) and Pew Research Center’s Social Trends Survey (PSTS) data, we develop a thorough characterization of nclf interaction, availability, and interaction propensity. We begin by showing the significant amount of nclf interaction as compared to non-family interaction, providing justification for our close attention to nclf ties. We find that nclf interaction systematically decays with city population size, virtually independently of the activity (tracked by the ATUS) driving the interaction. 
Similarly, we observe this decaying behavior in the availability of nclf ties as a function of city size. To discern whether the decay in nclf interaction is driven by availability or propensity, we construct a probabilistic model based on the propensity to interact given availability, allowing us to calculate how population size may affect propensity. Strikingly, we find that the propensity to interact with nclf for many types of activities is roughly constant or slightly increasing with city population size, implying that what dictates whether or not people see non-coresident family is simply that they are locally available. Consistent with this observation, we also find that the interaction duration of activities with nclf (also captured by the ATUS) is not typically affected by population size. Finally, we provide summary estimates of propensities that show which activities and times of the week (weekdays or weekends) contribute most to nclf interaction. The interest we place in relating nclf availability and propensity to city population size and specific activities requires some elaboration. The first, city population, is a variable known to affect many aspects of cities including people’s average productivity Bettencourt , the average number of social ties people have Kasarda , residential mobility decisions Reia2022 , and the basic reproduction number (denoted $R_{o}$) of contagious diseases Dobson , to name a few. These effects are also known to relate to social interaction (with nclf and other ties), which means that to properly understand interaction one must relate it to population size. With respect to the activities people engage in with nclf, these are at the heart of the benefits provided by socializing with family, as stated above. 
Thus, for example, knowing the propensity to interact with nclf for care-related activities provides, when combined with availability, a clear picture of how much family assumes supporting roles in the lives of their kin, and how the social and economic consequences of those decisions may affect outcomes. In other words, in order to properly assess the social benefits of nclf, an empirical understanding of the patterns of interaction per activity is critical. At first glance, it may appear that a decomposition of interaction into a set of existing available ties and a separate propensity of encounters with those ties is an unnecessary complication, especially if what matters is their combined product, interaction. However, each of these variables functions differently. Available ties depend on people’s residential location, whereas the propensity of encounters is driven by needs that are dealt with over short time scales. This means that if a need to change interaction rapidly were to arise, it would be propensity that should be adjusted. For example, the COVID-19 pandemic was marked by numerous restrictions to mobility Hale2021 , which modified propensities but not availability. Another illustration of the importance of this decomposition is highlighted by the availability of family links: if we compare two cities with different availabilities of family ties, needs for certain types of assistance such as care for young children ogawa1996family ; COMPTON2014 or the elderly taylor1986patterns ; MCCULLOCH1995 can be met by family members more often where availability is larger. Our work offers several important contributions. Perhaps the most critical is conceptual: introducing and supporting with evidence the idea that interaction at the level of a city can be decomposed into the more primitive components, availability and propensity. 
In particular, we show for nclf that indeed these two quantities can behave independently of each other and thus generate non-trivial patterns of interaction. In doing this, we offer the first systematic attempt to empirically characterize interaction and its propensity with nclf over a large collection of activities across a substantial sample of US cities, preserving both city and activity individuality. It is worth noting that, since propensity is not directly tracked in survey data, our estimation of this novel quantity generates a new angle by which to think about the practical temporal aspects of face-to-face interaction. From the perspective of each activity, we provide an overall picture of what brings nclf together with greater frequency, and we do this in a way that distinguishes such contact city by city. The implications of our findings take on several forms, inherently connected to the differences in the way that availability and propensity operate. For example, in the COVID-19 pandemic, the larger availability of nclf at smaller populations suggests that smaller cities have an intrinsic disadvantage when trying to reduce interaction, given that the critical and generally favored family tie is more abundant there. Another suggestion from our work is that the ability of kin to support each other varies city by city, along with the benefits of this support. Finally, two other contributions that we expand upon in the Discussion pertain to (i) the indication that our work makes in terms of how surveys of face-to-face interaction would improve greatly if they captured availability and propensity separately and (ii) the way in which availability and propensity relate to and possibly advance other research fields, such as network theory, in their approach to interaction. 
2 Results 2.1 Interaction We define nclf interaction, denoted $f$, as the proportion of people in the target population of the ATUS in a city $g$ that engage in an activity with nclf on a given day. Here, we consider nclf to be any family member identified by the ATUS as not residing in the same household as the respondent (see Supplementary Table 2). Because the choice to interact generally depends on the nature of the activity (e.g., care versus social) and the type of day of the week in which it occurs (weekday or weekend), we define a 2-dimensional vector $\alpha=(\text{activity, day-type})$, which we call activity-day, and assume that $f$ is a function of both $g$ and $\alpha$. We use data from the ATUS, which surveys Americans about how they spend their time and with whom during the day prior to their survey interview, to estimate nclf interaction (see Data). Explicitly, $$f(g,\alpha)=\frac{\sum_{i\in g}w_{i}a_{i}(\alpha)}{\sum_{i\in g}w_{i}},$$ (1) where the sum is over respondents $i$ who reside in $g$, $a_{i}(\alpha)=1$ if the respondent reports an $\alpha$ with nclf in the ATUS and is 0 otherwise, and $w_{i}$ denotes the respondent sampling weights, whose unit is persons and whose sum is an estimate of the target population in $g$ of the ATUS (non-institutionalized civilians aged 15 and above). The sampling weights $w_{i}$ can be thought of as the number of people $i$ represents in the population, given our inability to survey the entire population. The weights are recalibrated so that the demographic distribution of the sample matches closely that of the target population in each city (see Weight re-calibration). Out of $384$ US metropolitan core-based statistical areas (CBSAs, more commonly referred to as metro areas), our ATUS sample makes it possible to analyze $258$, which account for close to $260$ million people, or approximately $80\%$ of the US population, over the years of the ATUS we focus on ($2016$ to $2019$, inclusive). 
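Eq. (1) is simply a weighted proportion. A minimal sketch, using hypothetical weights and reports rather than real ATUS records, might read:

```python
import numpy as np

def interaction_rate(weights, reports):
    """Weighted proportion of respondents reporting an activity-day
    with nclf, as in Eq. (1): f = sum(w_i * a_i) / sum(w_i)."""
    w = np.asarray(weights, dtype=float)
    a = np.asarray(reports, dtype=float)  # a_i in {0, 1}
    return float(np.sum(w * a) / np.sum(w))

# Hypothetical mini-city with three respondents; weights are the number
# of people each respondent represents (made-up values, not ATUS data).
f = interaction_rate(weights=[1000.0, 2500.0, 1500.0], reports=[1, 0, 1])
# (1000 + 1500) / 5000 = 0.5
```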
In the ATUS, certain CBSAs are unidentified due to privacy standards. Table 1 lists the types of activities captured by the ATUS. It is informative to begin our results with an overall estimate of the amount of interaction with nclf in the ATUS cities captured here. For this purpose, we apply Eq. 1 to those cities and find that, on average, $23.73\%$ of people in the target population interact with nclf on any given day. This can be compared to the corresponding $45.84\%$ for non-family interaction (see Supplementary Fig. S5). While nclf interaction is estimated to be about half that with non-family contacts, one should bear in mind that non-family includes ties such as co-workers who are encountered with high frequency over the work week. Assuming that the non-family interaction estimate indicates contact on approximately a weekly basis, we see that over the cities represented here, nclf interaction occurs at a rate of about $23.73/45.84\approx 0.518$ compared to non-family. In other words, these numbers suggest that on average people see nclf every other week. In order to develop a more nuanced picture of nclf interaction, we now apply more detailed analyses. To understand the broad population-size effects on nclf interaction, we analyze $f(g,\alpha)$ against $p(g)=\log_{10}P(g)$, the log of the population of $g$ (we use log-population due to the heavy-tailed distribution of city population in the US Malevergne ; IOANNIDES ; Levy ; Eeckhout2004 ; Eeckhout2009 ). The resulting set of points $\{p(g),f(g,\alpha)\}_{g}$ over all the $g$s in our data can be analyzed in several ways in order to look for possible trends. Here, we apply three approaches that generate complementary information: non-parametric modal regression ChenMode , weighted cubic smoothing splines wang2011smoothing , and weighted least squares. The first and second methods capture the typical and average behavior, respectively, while the last method corroborates our results. 
In all cases, because the variance in the data is sensitive to sampling, we weigh each location $g$ by the sample size of the survey in that location. We first apply non-parametric modal regression to identify the typical behavior of $f$ with respect to $p$. This regression is based on estimating through kernel density estimation (KDE) the conditional probability density $\rho(f\>\lvert\>p,\alpha)$ and its conditional mode $f^{*}(p,\alpha)$, the value of $f$ given $p$ for which $\rho(f\>\lvert\>p,\alpha)$ is largest. We first apply the method to the most general $\alpha=\alpha_{o}=\text{(any activity, any day)}$; this choice of $\alpha$ also provides a useful baseline against which to compare all other results. The light-gray curve in Fig. 1A shows the mode $f^{*}$ as a function of $p$. We can clearly observe the systematically decaying trend of $f^{*}(p,\alpha_{o})$ as a function of $p$. For the smallest log-population $p(g)\approx 4.958$, $f^{*}(g,\alpha_{o})$ is estimated to be $0.267$; for the large limit $p(g)\approx 7.286$, $f^{*}(g,\alpha_{o})\approx 0.211$. This represents an overall drop of $\approx 0.056$, i.e., about $21\%$ of interaction over the log-population range. The concrete form of the trend of $f^{*}$ with $p$ appears to be slightly slower than linear. Figure 1A also displays a heatmap of the conditional probability density $\rho(f\>\lvert\>p,\alpha_{o})$ scaled by $\rho(f^{*}\>\lvert\>p,\alpha_{o})$. The color represents a normalized scale across values of $p$ of the location of the probability mass (note that $\rho(f\>\lvert\>p,\alpha_{o})/\rho(f^{*}\>\lvert\>p,\alpha_{o})=1$ when $f=f^{*}$). The concentration of $\rho(f\>\lvert\>p,\alpha_{o})$ above and below $f^{*}(p,\alpha_{o})$, expressed by the intense red color, crucially suggests that $f(g,\alpha_{o})$ is typically quite similar to $f^{*}(p(g),\alpha_{o})$, the point on the modal regression corresponding to the population $p=p(g)$ of $g$. 
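The modal-regression step just described can be sketched as follows. The product-Gaussian kernel, the fixed bandwidths, and the synthetic city data are all illustrative assumptions (the paper's exact KDE settings and sample-size weighting are not reproduced here):

```python
import numpy as np

def conditional_mode(p, f, p_grid, f_grid, hp=0.15, hf=0.01):
    """Modal regression f*(p): for each p on a grid, the value of f
    maximizing a product-Gaussian kernel estimate of the conditional
    density rho(f | p). Bandwidths hp, hf are hypothetical fixed
    choices; survey weighting is omitted for brevity."""
    p, f = np.asarray(p, float), np.asarray(f, float)
    modes = []
    for pv in p_grid:
        wp = np.exp(-0.5 * ((p - pv) / hp) ** 2)   # weight cities near pv
        dens = (wp[None, :] *
                np.exp(-0.5 * ((f_grid[:, None] - f[None, :]) / hf) ** 2)
                ).sum(axis=1)                      # density over f_grid
        modes.append(f_grid[np.argmax(dens)])
    return np.array(modes)

# Synthetic cities whose interaction decays with log-population, to mimic
# the qualitative trend of Fig. 1A (numbers are made up).
rng = np.random.default_rng(0)
p = rng.uniform(5.0, 7.3, 400)
f = 0.40 - 0.03 * p + rng.normal(0.0, 0.01, 400)
fstar = conditional_mode(p, f, np.array([5.2, 6.2, 7.0]),
                         np.linspace(0.0, 0.4, 401))
# fstar decreases along the p grid, mirroring the decaying mode.
```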
In addition, since $f^{*}(p(g),\alpha_{o})$ has a systematically decaying trend with $p$, it means that in general $f$ also decays with $p$. Aside from the typical behavior of $f$ with $p$, we also analyze its average behavior via the well-known method of cubic smoothing splines wang2011smoothing . In this method, the average behavior is captured by the function $f^{(b)}(p\>\lvert\>\alpha_{o})$ that minimizes the sum of quadratic errors between $f^{(b)}(p\>\lvert\>\alpha_{o})$ and the data, plus a cost for curvature in $f^{(b)}(p\>\lvert\>\alpha_{o})$ which controls for over-fitting. The fitted spline is shown as the black dotted curve in Fig. 1A, and exhibits a similar decay pattern to the modal regression. The decay of $f^{*}$ and $f^{(b)}$ with respect to $p$, observed in Fig. 1 for $\alpha_{o}$, is not an isolated effect. We now expand our analysis to include ‘social’ and ‘care’ activities reported by the ATUS respondents, defined as $\alpha_{\text{social}}=\text{(any social activity, any day)}$ and $\alpha_{\text{care}}=\text{(any care activity, any day)}$, respectively (see Supplementary Table 4 for the ATUS activity types we consider to be social or care-related). Using these new aggregate activity-days ($\alpha_{\text{social}}$ and $\alpha_{\text{care}}$) leads to similarly decaying modal regressions with respect to $p$. These can be seen in Fig. 2A (inset), where we can also notice that, depending on the specific $\alpha$, the range of values of $f^{*}(p,\alpha)$ also changes. For example, $f(g,\alpha_{\text{social}})$ ranges from about $0.226$ to $0.181$ as population increases, a range that is not very different from that of $f(g,\alpha_{o})$, whereas $f(g,\alpha_{\text{care}})$ is constrained to the range from $0.078$ to $0.058$ over the population range. Following the conceptual framework we proposed in the Introduction, we would want to know whether these decaying trends in $f$ with $p$ originate from availability or propensity. 
The fact that these trends persist across three different activity-days is suggestive of the hypothesis that availability is the origin of the consistent decay, because availability is independent of timing and activity by definition. Under this hypothesis, propensity would explain the relative differences in interaction given a specific activity-day, based on people’s likelihood of interacting with nclf given the nature of the engagement. If availability exhibits a decaying trend, the combination of $\alpha$-independent availability and $\alpha$-dependent propensity could generate the observed interaction behavior. To test this possibility with $f^{*}$ and the $\alpha$s already in Fig. 2A (inset), we perform a scaling of $f^{*}(p,\alpha)$ by its average, given by $\langle f^{*}(\alpha)\rangle:=\int_{p_{\min}}^{p_{\max}}f^{*}(p,\alpha)dp/(p_{\max}-p_{\min})$. The rescaling leads to the main part of Fig. 2A, in which we can see that the curves for the three different $\alpha$ overlap (collapse). The collapse suggests that the decaying trends for each of the $\alpha$s are not unrelated but, instead, are driven by a shared function that is independent of $\alpha$ (if it were $\alpha$-dependent, the functional forms of $f^{*}(p,\alpha)/\langle f^{*}(\alpha)\rangle$ with respect to $p$ would be different and, hence, not collapse). Although this result is not in itself proof that the hypothesis just presented is correct, it does suggest that independent analyses of availability and propensity are warranted. The last method of analysis we employ to study the relation between $f(g,\alpha)$ and $p(g)$ is weighted least squares (WLS). First, we corroborate that the set of aggregate activities (Tab. 1, top section), corresponding to the three cases of $\alpha$ discussed above, display significant negative regression coefficients ($\beta_{1}$). Note that, since $f^{*}(p,\alpha)$ in Figs. 
1A and 2A shows a non-linear decaying trend, there is value in going beyond the analysis of averages based on WLS and smoothing splines. Given the consistency uncovered for the aggregate activities, it is pertinent to explore whether decaying trends also hold for other ATUS major category activities. There are three reasons for this: (i) aggregations of several activities could be susceptible to Simpson’s paradox, generating spurious trends by way of aggregation, (ii) if indeed a decaying trend in availability is affecting $f(p,\alpha)$ systematically over $\alpha$, further evidence would be garnered by doing this analysis, and (iii) for different reasons, both modal regression and smoothing spline results are not as reliable for $\alpha$s that do not occur often (see last column of Tab. 1). The $\beta_{1}$ column of Tab. 1 under the area of “Individual activity” shows these results, which further support our thinking. Indeed, the regression coefficients for many individual activities are negative and statistically significant. The preponderance of significant negative slopes indicates that the trends observed using aggregate $\alpha$s (such as for Figs. 1 and 2A) are not affected by aggregation effects, strengthening the case for an $\alpha$-independent availability that decays with $p$. Before proceeding to test availability, we note that the decaying trend in nclf interaction with population is not due to an overall decay of interaction with all non-household contacts (family or not) in big cities. On the contrary, the data show that interaction with non-household contacts in fact increases as cities get larger (see Supplementary Section 3). 2.2 Availability Although the ATUS does not provide data to directly test any trend of nclf availability with $p$, this can be done by leveraging a survey from the Pew Research Center called the Pew Social Trends Survey (PSTS) PEWsocialtrends . 
As part of the PSTS, respondents are asked how many family members live within an hour’s drive of where they live, providing a way to measure nclf availability. The PSTS reports this number for each respondent in discrete categories ($0$, 1 to 5, 6 to 10, 11 to 15, 16 to 20, and 21 or more). In establishing a link between interaction and availability, a pertinent concept that should not be overlooked is that individuals do not engage with their family members on equal footing, nor do they interact with all of them DunbarSpoor ; MOK2007 —an important finding from the literature on ego networks concerned with how much interaction people have with their kin and other kinds of contacts. Instead, individuals place some kinship ties at a high level of importance while relegating others to be of low relevance. To illustrate, while a distant relative may be proximal to somebody in a location, this proximity may play no role because the relative is not particularly important in the person’s social network. In contrast, interacting with, say, a parent, an offspring, a sibling, or the progeny of these family members, is likely to be much more important. This sorting, regardless of which relationships end up being important and which do not, leads to the effect that not every single kinship tie is necessarily useful to count. Guided by this theory, we define for each PSTS respondent $i$ a variable $b_{i}(k)$ that takes on the value $1$ when they report having $k$ or more nclf available, and $0$ if they report having fewer than $k$ nclf. Then, we define a measurement of overall availability for each location $g$ given by $$\phi(g,k)=\frac{\sum_{i\in g}w_{i}b_{i}(k)}{\sum_{i\in g}w_{i}}.$$ (2) As in the ATUS, each PSTS respondent is given a weight $w_{i}$ that balances the demographic distribution of the sample such that the sample is representative at the national level. 
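A minimal sketch of Eq. (2) from the categorical PSTS responses might look like the following. The category-to-count mapping and the lower-bound rule for deciding "at least k" are simplifying assumptions made here for illustration (the paper's actual unit-increment algorithm is in its Supplementary Section 4.2):

```python
import numpy as np

# PSTS reports nclf counts in categories; map each category to the range
# of counts it covers (the cap on "21 or more" is arbitrary here).
CATS = {"0": (0, 0), "1-5": (1, 5), "6-10": (6, 10),
        "11-15": (11, 15), "16-20": (16, 20), "21+": (21, 10**9)}

def availability(weights, categories, k):
    """phi(g, k) from Eq. (2): weighted share of respondents with at
    least k nclf. A category counts as >= k when its lower bound is
    >= k (a conservative simplification of the paper's procedure)."""
    b = np.array([1.0 if CATS[c][0] >= k else 0.0 for c in categories])
    w = np.asarray(weights, float)
    return float((w * b).sum() / w.sum())

# Hypothetical respondents with made-up weights:
phi = availability([1.0, 2.0, 1.0], ["0", "6-10", "21+"], k=3)
# only "6-10" and "21+" qualify: (2 + 1) / 4 = 0.75
```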
Given the categorical reporting of the PSTS variable, we develop an algorithm that allows us to reliably change $k$ by increments of one unit (see Supplementary Section 4.2). In order to determine the population trend, if any, of $\phi(g,k)$ as a function of $p(g)$, we carry out the modal regression and smoothing-spline analysis again. The results are shown in Fig. 1B. As conjectured, the trends of availability captured by $\phi^{*}(p,k)$ and $\phi^{(b)}(p,k)$, respectively the modal regression and the spline of $\{p(g),\phi(g,k)\}_{g}$, are decaying with respect to $p$. As an additional consistency check for this decay, we calculate the WLS slope coefficient of $\phi(g,k)$ as a function of $p(g)$ and find that it is both negative and significant (see $\beta_{1}$ in Tab. 1, community section). The results presented here, and for the rest of the paper, are for $k=3$. However, varying $k$ does not change the results qualitatively (see Supplementary Figure 6). The results here support our hypothesis that nclf interaction rates in larger cities are lower than in smaller cities because overall nclf availability is also lower in the larger cities, but they do not yet paint a complete picture. One pending and interesting question is how people choose to interact with family that is local to them (their propensity). One should contemplate the possibility that this propensity may also display negative $p$-dependence (i.e., that people in larger cities have a lower need or desire to see their family). The results of Sec. 2.1, and perhaps our experience, would suggest that such a behavior may be negligible or even unlikely. Furthermore, consideration of the various functions that nclf perform runs counter to a propensity that decays with $p$. Whilst this simple picture is convincing and in agreement with intuition, none of our analysis so far can determine this. 
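The WLS slope check used above (for both $f$ and $\phi$ against $p$) amounts to the standard weighted normal equations; a sketch with synthetic numbers, not survey data, is:

```python
import numpy as np

def wls_slope(x, y, w):
    """Weighted least-squares slope beta_1 of y on x with weights w
    (in the paper's analyses, the weights are survey sample sizes)."""
    x, y, w = (np.asarray(v, float) for v in (x, y, w))
    xbar = (w * x).sum() / w.sum()   # weighted means
    ybar = (w * y).sum() / w.sum()
    return float((w * (x - xbar) * (y - ybar)).sum() /
                 (w * (x - xbar) ** 2).sum())

# Synthetic check: availability declining linearly with log-population.
p   = np.array([5.0, 5.5, 6.0, 6.5, 7.0])
phi = np.array([0.60, 0.55, 0.50, 0.45, 0.40])
beta1 = wls_slope(p, phi, w=np.array([10.0, 20.0, 40.0, 20.0, 10.0]))
# exact linear relation, so beta1 = -0.1 regardless of the weights
```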
To be able to test propensity, we next introduce a probabilistic framework that can help us discern between behavioral and non-behavioral effects. 2.3 Probabilistic framework for interaction, propensity, and availability To understand the interplay between availability and propensity that leads to actual interaction, we introduce a probabilistic model where each of these effects is explicitly separated. The model is structured such that it is easy to relate to typical survey data such as what we use here. An in-depth explanation of the model can be found in Supplementary Section 5. Consider a model with various cities. In a city $g$, any given individual $i$ is assigned two random variables, one that indicates if the individual has nclf available ($b_{i}=1$) or not ($b_{i}=0$), and another that indicates if they report performing $\alpha$ with nclf ($a_{i}(\alpha)=1$) or not ($a_{i}(\alpha)=0$). In addition, individuals in a location $g$ are grouped into population strata within which they all share the same personal characteristics. For example, $i$ may be male, of a certain age range, and a given ethnicity. These characteristics are captured in the vector $\mathbf{c}(i)$. All individuals that share the same vector $\mathbf{c}$ of characteristics represent a segment of the target population, and this induces a set of weights $w(\mathbf{c})$ for each of those individuals such that the sum $\sum_{i:\,\mathbf{c}(i)=\mathbf{c}}w(g,\mathbf{c}(i))$ is the size of the target population $Q(g,\mathbf{c})$ in $g$ with features equal to $\mathbf{c}$. For simplicity, we first work with given values of $g$, $\mathbf{c}$, and $\alpha$. We introduce the probability $\kappa(g,\mathbf{c},\alpha)$ that an individual in stratum $\mathbf{c}$ of location $g$ reports doing $\alpha$ with nclf available to them. We think of this as the pure propensity to interact. 
On the other hand, the probability of available nclf is given by $\phi(g,\mathbf{c})$, which does not depend on $\alpha$. While dealing with given $g$, $\mathbf{c}$, and $\alpha$, we simply use $\kappa$ and $\phi$ and then reintroduce $\mathbf{c}$, $g$, and $\alpha$ when needed. On the basis of these definitions, the joint probability that a given individual has concrete values $a_{i},b_{i}$ is given by $${\rm Pr}(a_{i},b_{i})=\kappa^{a_{i}}(1-\kappa)^{1-a_{i}}\phi\delta_{b_{i},1}+\delta_{a_{i},0}(1-\phi)\delta_{b_{i},0},\quad[a_{i},b_{i}=0,1],$$ (3) where $\delta_{u,v}$ is the Kronecker delta, equal to $1$ when $u=v$, and $0$ otherwise. This expression captures all possible combinations of availability and interaction: people without availability report $a_{i}=0$ with probability $1$, whereas those with availability, which occurs with probability $\phi$, report interacting with probability $\kappa$; overall, an individual therefore reports interacting with probability $\kappa\phi$ and not interacting with probability $1-\kappa\phi$. On the basis of the personal probability captured in Eq. 3, and using $s=s(g,\mathbf{c})$ to represent the number of individuals surveyed in $g$ with $\mathbf{c}$, we can determine the marginal distribution that $y=y(g,\mathbf{c},\alpha)$ individuals report doing $\alpha$ with nclf, equivalent to the probability that there are exactly $y$ individuals in total for whom $a(\alpha)=1$. This marginal is given by the binomial distribution $${\rm Pr}(y)=\binom{s}{y}(\kappa\phi)^{y}(1-\kappa\phi)^{s-y}.$$ (4) It is worth remembering that this applies to a specific subset of people in $g$, i.e. those respondents with personal features $\mathbf{c}$. We now use Eq. 4 and the fact that its expectation is given by $y(g,\mathbf{c},\alpha)=s(g,\mathbf{c})\kappa(g,\mathbf{c},\alpha)\phi(g,\mathbf{c})$ to estimate the model expectation for actual interaction, $f_{m}(g,\alpha)$, which can in turn be related to our data. 
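Eqs. (3) and (4) can be verified numerically for a single stratum; the values of kappa and phi below are arbitrary:

```python
def pr(a, b, kappa, phi):
    """Joint probability of (a_i, b_i) from Eq. (3)."""
    if b == 1:
        return (kappa if a == 1 else 1.0 - kappa) * phi
    return (1.0 - phi) if a == 0 else 0.0  # no interaction without availability

kappa, phi = 0.6, 0.4  # arbitrary propensity and availability
total = sum(pr(a, b, kappa, phi) for a in (0, 1) for b in (0, 1))
p_interact = pr(1, 1, kappa, phi)
# total == 1 (the four cases exhaust the outcomes) and
# p_interact == kappa * phi = 0.24, the binomial success probability
# that enters Eq. (4).
```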
This calculation is straightforward because the statistics for each $\mathbf{c}$ are independent. Specifically, since each individual with $\mathbf{c}$ in $g$ represents $w(g,\mathbf{c})$ others, $f_{m}(g,\alpha)$ is equal to the weighted average of the expected values of $y(g,\mathbf{c},\alpha)$, or $$f_{m}(g,\alpha)=\frac{\sum_{\mathbf{c}}w(g,\mathbf{c})s(g,\mathbf{c})\kappa(g,\mathbf{c},\alpha)\phi(g,\mathbf{c})}{\sum_{\mathbf{c}}w(g,\mathbf{c})s(g,\mathbf{c})}=\frac{\sum_{\mathbf{c}}Q(g,\mathbf{c})\kappa(g,\mathbf{c},\alpha)\phi(g,\mathbf{c})}{Q(g)},$$ (5) where the relation between sample weights, size, and population $Q(g,\mathbf{c})=w(g,\mathbf{c})s(g,\mathbf{c})$ has been used, together with the fact that $Q(g)=\sum_{\mathbf{c}}Q(g,\mathbf{c})$. Equation 1 can be interpreted as a trial of the current model, with expectation given by Eq. 5.

2.4 Effective propensity

Ideally, we would like to determine the propensity $\kappa(g,\mathbf{c},\alpha)$ for all $g$, $\mathbf{c}$, and $\alpha$. If data allowed it, we could achieve this by equating the expectation of $y$, given by $s(g,\mathbf{c})\kappa(g,\mathbf{c},\alpha)\phi(g,\mathbf{c})$, with its sample value $\sum_{i\in g}a_{i}(\alpha)\delta_{\mathbf{c},\mathbf{c}(i)}$ (the number of respondents that report doing $\alpha$ with nclf) and solving for $\kappa(g,\mathbf{c},\alpha)$. However, this strategy is hampered by the fact that we do not have enough information to determine $\phi(g,\mathbf{c})$ with sufficient accuracy. To address this limitation, we employ a different strategy that is able to provide valuable information about propensity at location $g$ for each $\alpha$. To explain it, we begin by noting that availability can be written on the basis of $\phi(g,\mathbf{c})$ as $$\phi(g)=\frac{\sum_{\mathbf{c}}Q(g,\mathbf{c})\phi(g,\mathbf{c})}{Q(g)}.$$ (6) Next, we introduce the quotient between interaction and availability calculated from the model.
We call this quotient $\lambda(g,\alpha)$, the effective propensity of interaction. It is given by $$\lambda(g,\alpha)=\frac{f_{m}(g,\alpha)}{\phi(g)}=\frac{\sum_{\mathbf{c}}\kappa(g,\mathbf{c},\alpha)Q(g,\mathbf{c})\phi(g,\mathbf{c})}{\sum_{\mathbf{c}}Q(g,\mathbf{c})\phi(g,\mathbf{c})}.$$ (7) In this expression, $\lambda(g,\alpha)$ is a weighted average of $\kappa(g,\mathbf{c},\alpha)$ over the part of the population of $g$ that does have nclf available (where the weights are $Q(g,\mathbf{c})\phi(g,\mathbf{c})$). In other words, it constitutes the effective average of propensity. Whilst Eq. 7 allows us to interpret the meaning of $\lambda(g,\alpha)$, its calculation is done simply by using the sample values $f(g,\alpha)$ and $\phi(g)$ given respectively by Eqs. 1 and 2. In Fig. 1C, we show the modal regression $\lambda^{*}(g,\alpha_{o})$ and smoothing-spline $\lambda^{(b)}(g,\alpha_{o})$ for the set of points $\{(p(g),\lambda(g,\alpha_{o}))\}_{g}$. Both curves show that population size has a very small increasing effect on $\lambda(g,\alpha_{o})$, from which we can deduce that the propensity to interact with nclf for those people that have nclf available is not being limited or reduced by $p$. As with $f^{*}(p,\alpha)$, $\lambda^{*}(p,\alpha)$ for the aggregate $\alpha_{o}$, $\alpha_{\text{social}}$, and $\alpha_{\text{care}}$ also collapse when divided by their averages $\langle\lambda^{*}(\alpha)\rangle:=\int_{p_{\min}}^{p_{\max}}\lambda^{*}(p,\alpha)dp/(p_{\max}-p_{\min})$ (Fig. 2B). Although this may superficially appear unsurprising given the relations between $f$, $\phi$, and $\lambda$ (Eq. 7), it is worth keeping in mind that $\lambda^{*}(p,\alpha)$ is the directly estimated conditional mode of the $\lambda(g,\alpha)$ data.
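The chain from strata to effective propensity (Eqs. 5 to 7) can be sketched numerically; the two strata below and their values are hypothetical, purely for illustration:

```python
def f_model(strata):
    """Model interaction f_m(g, alpha) of Eq. 5 for one location g.
    `strata` maps each characteristics vector c to a tuple
    (Q, kappa, phi): target population, propensity, availability."""
    Q_g = sum(Q for Q, _, _ in strata.values())
    return sum(Q * k * p for Q, k, p in strata.values()) / Q_g

def phi_location(strata):
    """Location-level availability phi(g) of Eq. 6."""
    Q_g = sum(Q for Q, _, _ in strata.values())
    return sum(Q * p for Q, _, p in strata.values()) / Q_g

# Two hypothetical strata of one city: (Q, kappa, phi).
strata = {
    ("male", "18-34"): (60_000, 0.30, 0.50),
    ("female", "35-54"): (40_000, 0.20, 0.75),
}
f_m = f_model(strata)          # (9000 + 6000) / 100000 = 0.15
phi_g = phi_location(strata)   # (30000 + 30000) / 100000 = 0.60
lam = f_m / phi_g              # effective propensity, Eq. 7: 0.25
```

Note that $\lambda=0.25$ is exactly the $Q\phi$-weighted average of the two $\kappa$ values ($0.30$ and $0.20$ with equal weights $30{,}000$), as the interpretation after Eq. 7 requires.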
Thus, the collapse suggests that the trends that govern the values of $\lambda(g,\alpha)$ for individual cities are indeed multiplicative: predominantly a product of an availability $\phi$ that depends on $g$ but not on $\alpha$, and a pure propensity $\kappa$ that depends mainly on $\alpha$. A more comprehensive analysis of trends of $\lambda$ with respect to $p$ is performed through WLS, shown in Tab. 1 (fifth column) for the remaining $\alpha$s in the ATUS. There are $8$ significant coefficients, of which only the religious-activities category has a negative slope, albeit with a very small value; the remaining significant coefficients are positive. Among all coefficients (significant and non-significant), only $3$ coefficients altogether have negative slopes. This shows the predominant tendency for the effective propensity to either increase slightly with $p$ or be roughly independent of it. Having studied the propensity $\lambda$ with respect to $p$, we now focus on how various activity-days $\alpha$ affect it. In Fig. 3, we present box plots of the sets of values $\{\lambda(g,\alpha)\}_{g}$ for those activity-days that are sufficiently well-sampled. Here, we make a distinction between weekday and weekend when constructing the $\alpha$s. Median values are represented by the triangle and square shapes. Triangles point up if the $\lambda$ of a given $\alpha$ shows a statistically significant increase with $p$, and squares indicate that $\lambda$ has no significant trend with $p$. No statistically significant decreasing trends of $\lambda$ with $p$ were observed for any $\alpha$. The trends and their significance correspond to those in Tab. 1. Aggregate versions of $\alpha$ are placed at the top of the figure to provide a reference. As expected, people exhibit a larger propensity to interact with nclf on weekends than on weekdays for all activity categories.
Spending social time with family has the largest propensity, with a weekend median value of $0.44$: the chance that an individual with nclf available does a social activity with them on a weekend is approximately $44\%$, in marked contrast to weekdays, when the propensity drops to $22.7\%$. In comparison, care-related activities have weekday propensities of $10.4\%$, increasing slightly to $12.6\%$ on the weekends. Another observation from Fig. 3 is that the ranges of values of $\lambda$ can be large for the most common activities. The values of $\lambda$ can be used to estimate interaction in a variety of cases. For example, assuming independent random draws with $\lambda$ as the success rate for any given $\alpha$, we can quickly estimate such quantities as the proportion of people (in a city or nationally) that meet with nclf in a period of time (say, a month) to do $\alpha$, or the average wait time until the first meeting to do $\alpha$. To complete our analysis of $\lambda$, we present two other summary results in Supplementary Section 7. First, we provide a rank-ordered reference of activities by propensity averaged over the US in Supplementary Table 4. This shows which activities are associated with high or low propensity. Second, to learn how the values of $\lambda(g,\alpha)$ are distributed across concrete US metropolitan areas, and particularly which places exhibit considerably larger or smaller propensities than other cities of similar populations, we also present Supplementary Table 5, which shows the top and bottom $10$ locations by average rank-order based on weighted $z$-scores of $\lambda(g,\alpha)$ with respect to $\lambda^{*}(p(g),\alpha)$, where ranks are averaged for values of $k$ ranging from $1$ to $3$. A similar analysis is conducted for interaction duration $t$ (defined next) and shown in Supplementary Table 6.
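The independent-draws estimates mentioned above follow the geometric law; a back-of-envelope sketch, treating $\lambda$ as a per-day success probability (a simplification, since $\lambda$ is defined per activity-day type and weekday/weekend structure is ignored here):

```python
def prob_meet_within(lam, n_days):
    """Chance of at least one nclf meeting for the activity within
    n_days, treating days as independent Bernoulli(lam) draws."""
    return 1.0 - (1.0 - lam) ** n_days

def mean_wait_days(lam):
    """Expected number of days until the first such meeting (geometric)."""
    return 1.0 / lam

# Illustrative value: the weekday-social median propensity of 0.227.
lam = 0.227
p_month = prob_meet_within(lam, 30)   # very close to 1 within a month
wait = mean_wait_days(lam)            # roughly 4.4 days on average
```

Even the weekday rate makes a meeting within a month near certain, which is why the duration of interaction (studied next) adds independent information beyond $\lambda$ itself.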
2.5 Interaction duration with nclf family

As a final analysis of the interplay between people’s nclf interaction and population size, we study one more quantity captured in the ATUS: the duration of interaction with nclf. The relationship between interaction duration and population size can also be studied with the techniques we have used thus far. Let us denote the duration of $i$’s interaction with nclf for $\alpha$ by $t_{i}(\alpha)$. Thus, for a given location $g$ and $\alpha$, the average interaction duration is given by $$t(g,\alpha)=\frac{\sum_{i\in g}w_{i}t_{i}(\alpha)a_{i}(\alpha)}{\sum_{i\in g}w_{i}a_{i}(\alpha)},$$ (8) where $a_{i}(\alpha)$ is defined the same way as in Eq. 1. Note that interaction duration is averaged only over those that report nclf interaction for the $\alpha$ under consideration. This is because $t$ clearly involves a behavioral component, and we are interested in determining the role that population size may exert in the behavior of people when they interact with nclf. If an increase in $p$ were associated with a decrease in the duration of interaction, it could suggest, for example, that people’s busy lives in larger cities limit how much they are able to interact with family, which could be a signal associated with the decaying trends for $f$. In Figs. 1D and 2C we present examples of modal regressions and smoothing splines of interaction duration $t$ with respect to $p$ for the aggregates $\alpha_{o}$, $\alpha_{\text{social}}$, and $\alpha_{\text{care}}$. In all cases, $t^{*}(g,\alpha)$ and $t^{(b)}(g,\alpha)$ are either approximately steady over $p$ or increasing slightly. As in the case of propensity, interaction duration captures the behavioral dimension of interaction. The lack of a decaying trend provides further support for the notion that family availability may be the main driver of interaction decay and that people’s attitudes towards interacting with family are not diminished by population size.
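Eq. 8 amounts to a weight-and-indicator average over respondents; a minimal sketch with hypothetical respondent records:

```python
def avg_duration(records):
    """Weighted average duration t(g, alpha) of Eq. 8. `records` holds
    (w_i, t_i, a_i) per respondent: sampling weight, reported duration,
    and the 0/1 nclf-interaction indicator."""
    num = sum(w * t * a for w, t, a in records)
    den = sum(w * a for w, t, a in records)
    return num / den

# Three hypothetical respondents; the non-interacting one (a_i = 0)
# drops out of both sums, exactly as the text describes.
records = [(1.0, 120.0, 1), (2.0, 60.0, 1), (5.0, 0.0, 0)]
t_avg = avg_duration(records)   # (1*120 + 2*60) / (1 + 2) = 80.0
```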
3 Discussion

The separation of interaction into two necessary factors, availability and propensity, provides a new lens through which to understand it in a more coherent and principled way. Over relatively short periods of time (say, weeks or months) and at the population level of cities, patterns of availability are rigid, which is to say they are structural and approximately fixed in time and space. On the other hand, patterns of propensity are due to day-to-day decision-making and encompass the bulk of the agency that individuals have in controlling interaction at any given time in the short term; propensity is much more flexible than availability, and generally operates at shorter time scales. If we phrase interaction in the language of social networks, availability can be thought of as a static, well-defined social network based purely on underlying formal social relations (unambiguously defined for nclf, e.g., parent, sibling, in-law), while propensity is a stochastic process occurring on the static network. Perhaps the most salient observation we make is that, at the level of population of cities, availability explains the heterogeneity in nclf interaction, effectively providing clear support for our approach. In addition, for most activities propensity shows either a weak positive population-dependent trend or no trend. This supports another of our main conclusions: when family is locally available, people take ample advantage of their presence to interact with them. Moreover, the time invested in those interactions is not negatively impacted by population size (i.e., the busy lives of residents of big cities do not deter them from spending time with family).
Phrased in social-network terms, our results imply that the static social networks of nclf differ by city, that those differences are the main drivers of the differences in face-to-face interaction we observe, and that nclf propensity is roughly independent of the structure of these networks. This robust family-interaction effect may have important consequences for how people shape the rest of their social networks DAVIDBARRETT201720 ; personal-networks . At a more fundamental level, if availability plays a dominant role in nclf interaction, the patterns that characterize this availability are also likely to characterize other critical aspects of life that strongly depend on nclf interaction (say, care-related outcomes COMPTON2014 ; taylor1986patterns ; MCCULLOCH1995 or certain aspects of people’s well-being DunbarSpoor ; MOK2007 ; roberts2015managing ). As suggested in the Introduction, another aspect that depends on interaction is the propagation of infectious diseases. Current research in epidemiological modeling does not pay attention to the concepts of availability and propensity when dealing with interaction data. Consider, for example, the large-scale European POLYMOD survey concerned with determining population mixing matrices capturing patterns of social contacts Mossong , frequently used in population-level epidemiological modeling in Europe. Availability is not tracked in this survey (or most other population-level surveys of interaction), but such information can prove critical when large disruptions to regular patterns of behavior (i.e. propensity) take place, as observed during the COVID-19 pandemic. For one, availability in most cases would not change in the rapid way propensity can change. Also, the way that propensity can adjust after the start of a pandemic is constrained by availability; i.e., availability is a template inside of which propensity can adapt to pandemic conditions.
While not all study of human interaction is aimed at such extremes, the framework of availability and propensity still applies. Furthermore, because the fundamental concepts of availability and propensity are not restricted merely to family ties, our remarks here are relevant to social interactions generally. All the examples provided above can lead to policy considerations. Many non-pharmaceutical interventions applied during the COVID-19 pandemic have been directed at reducing the propensity for face-to-face interaction or aiding those with economic needs (see PERRA20211 for a review of interventions), but they have hardly considered availability, which may reflect structural needs of the populations of those places. In low-income areas, family support to take care of young children COMPTON2014 is an economic necessity. Asking people in such locations to stop seeing family may prove ineffective. In places with low nclf availability, the closure of daycare during the COVID-19 pandemic coupled with the lack of family support could lead to more mothers dropping out of the workforce, perpetuating gender inequality and slowing down economic recovery. Thus, policy interventions that consider these needs at the local level would likely have a better chance of being effective Hale2021 . From the standpoint of other support roles performed by nclf, a better measurement and understanding of the interplay between availability and interaction may inform how to best address the needs of people at a local level, in ways not unlike those that address housing, education, or other locally-oriented policy making. The origin of the population pattern displayed by availability, although not our focus here, is likely related to domestic migration litwak1960occupational ; litwak1960geographic .
The more pronounced absence of family in larger populations may be driven by the fact that larger cities tend to offer a variety of opportunities for work, education, and access to particular services and amenities not always available in cities with smaller populations, thus generating an in-migration that is, by definition, a source of decreased family availability. In this scenario, one of the main drivers of the population trend in availability would be economic (see similar examples KANCS2011191 ), suggesting a feedback mechanism whereby the in-migration that reduces family availability improves or sustains economic success, in turn perpetuating the incentives for that in-migration over time. However, other mechanisms such as a desire to maintain close ties with family are likely to balance the economic incentives, preventing the feedback from becoming a runaway effect LEWICKA ; Spring ; litwak1960geographic ; litwak1960occupational . Note that this last point does suggest a coupling between availability and propensity that operates at long time scales (say, over one or multiple years), by which the fraction of the population that is not willing to be far from extended family would exercise agency to adjust their living conditions through migration to be closer to family. Our study contains certain limitations. First, while demographic characteristics are incorporated into our model, estimates of propensity and availability for specific demographic strata are not obtained here due to limitations in survey sampling. Such demographic understanding of interaction is important and should be considered in future research if appropriate data can be obtained. Second, although the ATUS tracks a substantial variety of activities, one should not view it as a comprehensive catalog of all possible ways in which people may interact. To name one example, activities that involve more than a single day are not currently collected in the ATUS by design.
Other such limitations exist, and researchers should be mindful of them. Third, it may prove fruitful to consider material availability that could affect interaction (for example, in a city without a cinema, an $\alpha$ for going to the movies would have propensity $0$). We do not expect this factor to play a major role for the $\alpha$s and locations we study, as our smallest cities have populations of $\approx 130,000$ people. However, investigations focused on more detailed activities in smaller places could require taking this effect into account. Fortunately, the structure of our model can be straightforwardly updated to include such material availability given sufficient data. At a methodological level, there are interesting observations our work suggests. For example, some simple modifications to large-scale surveys of people’s interaction and time-use behavior could yield information that would greatly enhance the usefulness of such surveys. The ATUS could add a few simple questions to its questionnaires to determine whether respondents have family, friends, or other contacts locally available to them even if no activities have been performed with them. Such questions could provide baseline information about not just what people do but their preferential choices, helping to distinguish the effects of both propensity and availability on interaction. As a final reflection, we relate the notions of availability and propensity to the very successful discipline of network science and its application to spreading processes on social networks newman2018networks ; VespignaniReview ; Havlin ; HOLME201297 ; porter2016dynamical . In this theoretical context, our concept of availability translates to a static network structure of social connections that exists independent of contact activity. Propensity, on the other hand, would be represented by a temporal process of contact activity occurring on the static network.
In this context, what the results of the present work mean for the study of processes on networks is that realistic models need to take into account the heterogeneity in both the types of ties (network links) and their associated contact processes. In the case of epidemics, the introduction of frameworks such as those in Karrer ; Karsai offers great promise because they are compatible with the non-trivial structure we uncover here. This contrasts with most other literature, which essentially conceives of what we call propensity as a low-probability, permanent contact process occurring with all contacts of the static network (the Markovian assumption VespignaniReview ), a modeling choice that destroys most of the true complexity of the process. Dedicated data that address propensity will go a long way in driving the development of a new generation of more realistic models of propagation processes on networks. In summary, we decompose face-to-face interaction with non-coresident local family at the city level in terms of availability and propensity to interact, and find that while availability decays with the population of those cities, neither the propensity to interact across activities nor the duration of those interactions shows the same decay. The decay in availability is sufficient to lead to an overall decay of interaction with nclf across US cities as their population increases. We arrive at these results by introducing a stochastic model that allows us to combine existing survey data, the American Time Use Survey and Pew Research Center’s Social Trends Survey, to estimate availability and propensity at the US metro level. Analysis of the resulting propensities shows that social activities are the most common, especially on weekends. In social-network terms, availability can be thought of as static social networks of family relations and propensity as a process on these networks.
Our findings indicate that the availability network differs by city and is the main driver of the variance in observed face-to-face interaction, while propensity is roughly independent of the network structure. We also discuss some of the implications of our framework and offer ideas on how it is relevant for survey design, scientific research, and policy considerations, with particular attention to the context of the COVID-19 pandemic.

4 Material and methods

4.1 Weighted least squares

Since location sample sizes diminish with decreasing population, much of our analysis utilizes weightings. One method we employ is weighted least squares (WLS), defined on the basis of the model $$v_{g}=\beta_{1}u_{g}+\beta_{o}+\epsilon_{g},$$ (9) where $g$ indexes the data points, and the error terms $\epsilon_{g}$ do not necessarily have equal variance across $g$ (i.e., they are heteroskedastic). The solution to the model is given by the values of the coefficients $\beta_{1}$ and $\beta_{o}$ that minimize $\sum_{g}w_{g}\epsilon_{g}^{2}$ over the data points. The $w_{g}$ are $g$-specific weights that adjust the relative importance of the data points in the regression. The Gauss-Markov theorem guarantees that the resulting estimator is the best linear unbiased one when $w_{g}=1/\text{var}(\epsilon_{g})$. On the other hand, the variance of a survey estimate is inversely proportional to the sample size. Therefore, in our analysis we use $$w_{g}=\frac{1}{\text{var}(v_{g})}=\frac{s(g)}{\sum_{g}s(g)}\quad(\text{for }v_{g}=f(g,\alpha),\phi(g),\lambda(g,\alpha),t(g,\alpha)),$$ (10) where $s(g)$ is the number of respondents in $g$. These weights apply to $f(g,\alpha)$ from the ATUS as well as $\phi(g)$ from the PSTS, where the same $g$ may have different relative weights based on the samples collected for each survey in the same location $g$.
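The WLS fit of Eqs. 9 and 10 reduces to solving the weighted normal equations; a sketch with synthetic, illustrative data (not the code used in the paper, which relies on statsmodels):

```python
import numpy as np

def wls_fit(u, v, w):
    """Weighted least squares for v_g = beta1*u_g + beta_o + eps_g
    (Eq. 9), minimizing sum_g w_g * eps_g**2."""
    X = np.column_stack([u, np.ones_like(u)])
    A = X.T @ (w[:, None] * X)     # X^T W X without forming diag(W)
    b = X.T @ (w * v)              # X^T W v
    beta1, beta_o = np.linalg.solve(A, b)
    return beta1, beta_o

# Synthetic, noise-free data: the fit recovers the line for any weights.
u = np.array([1.0, 2.0, 3.0, 4.0])
v = 2.0 * u + 1.0
w = np.array([40.0, 10.0, 10.0, 20.0]) / 80.0   # e.g. s(g)/sum s(g), Eq. 10
beta1, beta_o = wls_fit(u, v, w)
```

With heteroskedastic noise added to `v`, unequal weights would down-weight the noisier (smaller-sample) locations, which is the whole point of Eq. 10.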
To compute these weights when the dependent variable is $\lambda(g,\alpha)$, we use the technique of propagation of error. For $\lambda(g,\alpha)=f(g,\alpha)/\phi(g)$, the variance is given as a function of the variances of $f(g,\alpha)$ and $\phi(g)$, calculated through a Taylor expansion up to order $1$. In the limit of weak or no correlation between $f(g,\alpha)$ and $\phi(g)$ (which is reasonable here given that the variables are independently collected), $$\text{var}(\lambda(g,\alpha))\approx\left(\frac{\partial\lambda}{\partial f}\right)^{2}\text{var}(f(g,\alpha))+\left(\frac{\partial\lambda}{\partial\phi}\right)^{2}\text{var}(\phi(g)).$$ (11) With the covariance between $f(g,\alpha)$ and $\phi(g)$ thus neglected, we calculate $$w_{g}\approx\left[\left(\frac{\partial\lambda}{\partial f}\right)^{2}\text{var}(f(g,\alpha))+\left(\frac{\partial\lambda}{\partial\phi}\right)^{2}\text{var}(\phi(g))\right]^{-1}\quad(\text{for }\lambda(g,\alpha)),$$ (12) where both $\text{var}(f(g,\alpha))$ and $\text{var}(\phi(g))$ are given by Eq. 10.

4.2 Weighted cubic smoothing splines

One of the techniques we employ to approximate the functional dependence between $p$ and, separately, $f$, $\phi$, $\lambda$, and $t$ is cubic smoothing splines wang2011smoothing , defined as follows. For two generic variables $u_{g}$ and $v_{g}$, the cubic smoothing spline $v^{(b)}(u_{g})$ is a piecewise smooth function that minimizes $$\sum_{g}w_{g}(v_{g}-v^{(b)}(u_{g}))^{2}+\eta\int_{\min_{g}\{u_{g}\}}^{\max_{g}\{u_{g}\}}\left(\frac{d^{2}v^{(b)}}{d\xi^{2}}\right)^{2}d\xi,$$ (13) where $\eta$ is a penalty parameter that controls how strongly $v^{(b)}$ is discouraged from large amounts of curvature (and thus from overfitting), and $\xi$ is a dummy variable.
In the limit of $\eta=0$, curvature is not penalized and the algorithm overfits the data; when $\eta\to\infty$, the only possible solution requires $d^{2}v^{(b)}/d\xi^{2}\to 0$, leading to a straight line and thus making the algorithm equivalent to weighted least squares (WLS). In the algorithm we employ WOLTRING1986104 , the function $v^{(b)}$ is constructed from cubic polynomials between the data points along $\{u_{g}\}_{g}$, with the condition that they are smooth up to the second derivative along consecutive pieces. The penalty parameter $\eta$ can either be chosen arbitrarily or determined on the basis of some selection criterion. In our case, we use generalized cross-validation Golub , which optimizes prediction performance. The algorithm is implemented as the function make_smoothing_spline of the package scipy.interpolate in python splines .

4.3 Modal regression using kernel density estimation

Another method we employ to estimate how population $p$ affects $f$, $\phi$, $\lambda$, and $t$ is non-parametric modal regression ChenMode . Intuitively, the method looks for the typical behavior of a random variable as a function of some independent variable. This method is defined in the following way. Using a smoothing kernel (in our case Gaussian), we construct the $2$-dimensional kernel density estimator $\rho(u,v)$ for the set of data points $\{(u_{g},v_{g})\}_{g}$ hastie2009elements , where we choose the bandwidth by inspection in the neighborhood of the Silverman method silverman1986density but favor solutions that tend towards the smoothing spline results, as they are a sign of a stable modal regression line (details can be found in Supplementary Section 6). Then, we determine the conditional density $\rho(v\,\lvert\,u)=\rho(u,v)/\rho(u)$ and extract its local mode $v=v^{*}(u)$. Here, we use this method for unimodal $\rho(v\,\lvert\,u)$ (multimodality can be handled by the method, but is not relevant here).
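A self-contained sketch of the conditional-mode extraction follows (synthetic data and hand-picked bandwidths; the paper's actual bandwidth selection is described in Supplementary Section 6):

```python
import numpy as np

def modal_regression(u, v, u_grid, v_grid, h_u, h_v):
    """Conditional-mode curve v*(u) from a 2-D Gaussian KDE of the
    points (u_g, v_g). Dividing rho(u, v) by rho(u) does not move the
    argmax over v, so the joint density suffices for the mode."""
    Ku = np.exp(-0.5 * ((u_grid[:, None] - u[None, :]) / h_u) ** 2)
    Kv = np.exp(-0.5 * ((v_grid[:, None] - v[None, :]) / h_v) ** 2)
    rho = Ku @ Kv.T                 # joint KDE on the grid, up to a constant
    return v_grid[np.argmax(rho, axis=1)]

# Synthetic data with known conditional mode v = 2u.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 400)
v = 2.0 * u + rng.normal(0.0, 0.1, 400)
u_grid = np.linspace(0.2, 0.8, 7)
v_grid = np.linspace(-0.5, 2.5, 301)
mode = modal_regression(u, v, u_grid, v_grid, h_u=0.1, h_v=0.1)
```

On this synthetic unimodal example the recovered curve tracks $v^{*}(u)=2u$ closely at interior grid points.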
4.4 Data

Our interaction data is derived from the American Time Use Survey (ATUS) conducted by the Bureau of Labor Statistics (BLS) ATUS . Each year, the ATUS interviews a US nationally-representative sample regarding their full sequence of activities through the day prior to the interview, termed a diary day. The information collected in this process is recorded in several files, including the “Respondent”, “Activity”, and “Who” data files. We link these files for the period 2016-2019 to get comprehensive information about each respondent, the activities they carried out on the diary day, as well as those who accompanied the respondent during each activity. We consider nclf to be companions of the respondent who are family but do not reside in the same household (see Supplementary Table 2). These may include the respondent’s parents, parents-in-law, own children under 18 years of age, and other family members, as long as they do not reside with the respondent. Activities are encoded in the ATUS with codes of $6$ digits, the first two representing activity categories such as ’eating & drinking’, ’personal care’, or ’work’, to name a few. Additional digits provide more specificity about the activity, such as the context or some other detail. We restrict our analysis to activities at the two-digit level (called major categories in the ATUS lexicon), which encompasses seventeen such codes. The ATUS also captures the day of the week when the respondent was interviewed, which is known to play an important role in the choices of activities people perform. We encode the combination of activity and type of day with the 2-dimensional vector variable $\alpha$. Therefore, as an example, eating and drinking with family done on a weekend day corresponds to a specific value of $\alpha$. Beyond the use of the ATUS 2-digit codes, we also create aggregate activity categories that serve as baselines to our analysis and capture people’s major social functions.
At the most aggregate level, we define $\alpha_{o}$, which combines all activities in the ATUS done on either type of day (weekday or weekend), and which we refer to as ’any activity, any day’. We also define $\alpha_{\text{social}}$, the aggregate set of social activities with the 2-digit codes $11-14$ done on any day of the week. Finally, we define $\alpha_{\text{care}}$, which combines the 2-digit codes $3-4$ on any day of the week. To estimate local family availability, we use the Pew Social Trends Survey (PSTS) from the Pew Research Center, which was conducted in 2008 using a nationally-representative sample of 2,260 adults living in the continental US PEWsocialtrends . The PSTS identifies the geographic location of the respondents at the county level and the binned quantity of family members who live within a one-hour driving distance. To work with the data at the CBSA level, we map county FIPS codes to CBSA codes using the crosswalk downloadable from the National Bureau of Economic Research nbercrosswalk . After filtering by the set of CBSAs common to both surveys, our final sample size is $N=$ 30,061 for the ATUS and $N=$ 1,706 for the PSTS. Metro populations are obtained from the American Community Survey (ACS) data (5-year estimates, 2015-2019) published by the US Census Bureau censusacs . We also extract various demographic variables from the US Census population estimates to use in the recalibration of sampling weights (see Weight re-calibration).

4.5 Weight re-calibration

Our analysis of the time variables of the ATUS is performed taking sampling weights into account. The ATUS provides a sampling weight for each respondent which, in essence, gives an estimated measure of how many people within the US population the respondent represents given the respondent’s demographic and other characteristics.
(The unit of these weights is technically person-days, although for the purpose of our study we normalize by the number of days in the survey period, since this simplifies interpretation of the weights to just population and because our sampling period is fixed.) The use of such weights is meant to reduce bias and improve population-level estimates of quantities captured by the raw survey data. The ATUS respondent weights are calibrated by the BLS at the national level, which is reasonable for large cities but not reliable for smaller CBSAs. For this reason, we perform a re-calibration of these weights at the CBSA level, the main unit of analysis in our study. Our methodology for weight recalibration follows the original BLS procedure, a 3-stage raking procedure (also known as iterative proportional fitting, widely used in population geography and survey statistics Deming1940 ; idel2016review ), but with constraints imposed at the CBSA level and without non-response adjustments. The goal of this type of adjustment is to find a joint distribution of weights given dimensions of characteristics (e.g., race by sex by education by age) such that the sum of respondent weights along a given characteristic axis (i.e., the marginals) matches a known control or target population. For each CBSA, the target in each stage of the 3-stage procedure corresponds to the CBSA population by sex by race by ATUS reference day; population by sex by education level by ATUS reference day; and population by sex by age by ATUS reference day, respectively. For the stratification of these characteristics, see Supplementary Table 3. We use estimates from the US Census Bureau’s Population Estimates Program as well as the ACS.
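A single raking stage amounts to alternately rescaling the weight table until its marginals match the targets; a toy sketch with a hypothetical 2x2 table (not the actual BLS stratification, which uses the dimensions listed above):

```python
import numpy as np

def rake(table, row_targets, col_targets, n_iter=100):
    """One stage of raking (iterative proportional fitting): rescale a
    table of weights until its row and column sums match known
    population targets (e.g. sex by age)."""
    table = np.asarray(table, dtype=float).copy()
    for _ in range(n_iter):
        table *= (row_targets / table.sum(axis=1))[:, None]   # match rows
        table *= (col_targets / table.sum(axis=0))[None, :]   # match columns
        if np.allclose(table.sum(axis=1), row_targets, atol=1e-12):
            break
    return table

# Toy example: initial survey weights vs. census-style targets.
# Both marginals must share the same grand total (here, 100).
weights = [[10.0, 20.0], [30.0, 40.0]]
raked = rake(weights, np.array([50.0, 50.0]), np.array([30.0, 70.0]))
```

Convergence is geometric for strictly positive tables with consistent marginals; the full 3-stage procedure repeats this step for each set of control totals.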
Unavoidably, there are small differences in the target population used by the BLS and the one used by us (for example, population estimates from the PEP include those institutionalized and non-civilian, whereas the universe for the ATUS does not include this sub-population). \bmhead Supplementary information The present article is accompanied by a Supplementary Information document. \bmhead Acknowledgments We acknowledge helpful suggestions from Eben Kenah, Robin Dunbar, and Serguei Saavedra. No external funding source has been used in this research. Funding Not applicable Conflict of interest The authors declare no conflicts of interest. Ethics approval Not applicable Availability of data and materials All data sources are publicly available. Code availability Code used in the project is mostly open source software from the python language projects scipy, statsmodels, and pandas. Authors’ contributions Suppressed for double-blind review. References \bibcommenthead (1) Bettencourt, L.M.A., Lobo, J., Helbing, D., Kühnert, C., West, G.B.: Growth, innovation, scaling, and the pace of life in cities. Proceedings of the National Academy of Sciences 104(17), 7301–7306 (2007) https://www.pnas.org/doi/pdf/10.1073/pnas.0610172104. https://doi.org/10.1073/pnas.0610172104 (2) Dunbar, R.I.M., Spoors, M.: Social networks, support cliques, and kinship. Human Nature 6(3), 273–290 (1995). https://doi.org/10.1007/BF02734142 (3) Wellman, B.: The community question: The intimate networks of East Yorkers. American Journal of Sociology 84(5), 1201–1231 (1979) https://doi.org/10.1086/226906. https://doi.org/10.1086/226906 (4) Roberts, S.G.B., Arrow, H., Gowlett, J.A.J., Lehmann, J., Dunbar, R.I.M.: Close social relationships: An evolutionary perspective. In: Dunbar, R.I.M., Gamble, C., Gowlett, J.A.J. (eds.) Lucy to Language: The Benchmark Papers, pp. 151–180. Oxford University Press, Oxford (2014). Chap.
8 (5) Scannell, L., Gifford, R.: Defining place attachment: A tripartite organizing framework. Journal of Environmental Psychology 30(1), 1–10 (2010). https://doi.org/10.1016/j.jenvp.2009.09.006 (6) Lewicka, M.: Place attachment: How far have we come in the last 40 years? Journal of Environmental Psychology 31(3), 207–230 (2011). https://doi.org/10.1016/j.jenvp.2010.10.001 (7) Anderson, R.M., May, R.M.: Infectious Diseases of Humans: Dynamics and Control. Oxford university press, Oxford, UK (1991) (8) Pastor-Satorras, R., Castellano, C., Van Mieghem, P., Vespignani, A.: Epidemic processes in complex networks. Rev. Mod. Phys. 87, 925–979 (2015). https://doi.org/10.1103/RevModPhys.87.925 (9) Altonji, J.G., Hayashi, F., Kotlikoff, L.J.: Is the extended family altruistically linked? direct tests using micro data. The American Economic Review 82(5), 1177–1198 (1992) (10) Compton, J., Pollak, R.A.: Family proximity, childcare, and women’s labor force attachment. Journal of Urban Economics 79, 72–90 (2014). https://doi.org/10.1016/j.jue.2013.03.007. Spatial Dimensions of Labor Markets (11) Taylor, R.J., Chatters, L.M.: Patterns of informal support to elderly black adults: Family, friends, and church members. Social Work 31(6), 432–438 (1986) (12) Rőzer, J., Mollenhorst, G., AR, P.: Family and friends: Which types of personal relationships go together in a network? Social Indicators Research 127, 809–826 (2016). https://doi.org/10.1007/s11205-015-0987-5 (13) Choi, H., Schoeni, R.F., Wiemers, E.E., Hotz, V.J., Seltzer, J.A.: Spatial distance between parents and adult children in the united states. Journal of Marriage and Family 82(2), 822–840 (2020) https://onlinelibrary.wiley.com/doi/pdf/10.1111/jomf.12606. https://doi.org/10.1111/jomf.12606 (14) Spring, A., Ackert, E., Crowder, K., South, S.J.: Influence of Proximity to Kin on Residential Mobility and Destination Choice: Examining Local Movers in Metropolitan Areas. 
Demography 54(4), 1277–1304 (2017) https://read.dukeupress.edu/demography/article-pdf/54/4/1277/838963/1277spring.pdf. https://doi.org/10.1007/s13524-017-0587-x (15) Furstenberg, F.F.: Kinship reconsidered: Research on a neglected topic. Journal of Marriage and Family 82(1), 364–382 (2020) https://onlinelibrary.wiley.com/doi/pdf/10.1111/jomf.12628. https://doi.org/10.1111/jomf.12628 (16) Bengtson, V.L.: Beyond the nuclear family: The increasing importance of multigenerational bonds. Journal of Marriage and Family 63(1), 1–16 (2001) https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1741-3737.2001.00001.x. https://doi.org/10.1111/j.1741-3737.2001.00001.x (17) Feehan, D.M., Mahmud, A.S.: Quantifying population contact patterns in the united states during the covid-19 pandemic. Nature Communications 12(1), 893 (2021). https://doi.org/10.1038/s41467-021-20990-2 (18) Cheng, H.-Y., Jian, S.-W., Liu, D.-P., Ng, T.-C., Huang, W.-T., Lin, H.-H., for the Taiwan COVID-19 Outbreak Investigation Team: Contact Tracing Assessment of COVID-19 Transmission Dynamics in Taiwan and Risk at Different Exposure Periods Before and After Symptom Onset. JAMA Internal Medicine 180(9), 1156–1163 (2020) https://jamanetwork.com/journals/jamainternalmedicine/articlepdf/2765641/jamainternal_cheng_2020_oi_200031_1599079428.65582.pdf. https://doi.org/10.1001/jamainternmed.2020.2020 (19) Koh, W.C., Naing, L., Chaw, L., Rosledzana, M.A., Alikhan, M.F., Jamaludin, S.A., Amin, F., Omar, A., Shazli, A., Griffith, M., Pastore, R., Wong, J.: What do we know about sars-cov-2 transmission? a systematic review and meta-analysis of the secondary attack rate and associated risk factors. PLOS ONE 15(10), 1–23 (2020). https://doi.org/10.1371/journal.pone.0240205 (20) Kasarda, J.D., Janowitz, M.: Community attachment in mass society. American Sociological Review 39(3), 328–339 (1974). Accessed 2023-03-10 (21) Reia, S.M., Rao, P.S.C., Barthelemy, M., Ukkusuri, S.V.: Spatial structure of city population growth. 
Nature Communications 13(1), 5931 (2022). https://doi.org/10.1038/s41467-022-33527-y (22) Dobson, A.P., Carper, E.R.: Infectious diseases and human population history. BioScience 46(2), 115–126 (1996). Accessed 2023-04-16 (23) Hale, T., Angrist, N., Goldszmidt, R., Kira, B., Petherick, A., Phillips, T., Webster, S., Cameron-Blake, E., Hallas, L., Majumdar, S., Tatlow, H.: A global panel database of pandemic policies (oxford covid-19 government response tracker). Nature Human Behaviour 5(4), 529–538 (2021). https://doi.org/10.1038/s41562-021-01079-8 (24) Ogawa, N., Ermisch, J.F.: Family structure, home time demands, and the employment patterns of japanese married women. Journal of Labor Economics 14(4), 677–702 (1996) (25) McCulloch, B.J.: The relationship of family proximity and social support to the mental health of older rural adults: The appalachian context. Journal of Aging Studies 9(1), 65–81 (1995). https://doi.org/10.1016/0890-4065(95)90026-8 (26) Malevergne, Y., Pisarenko, V., Sornette, D.: Testing the pareto against the lognormal distributions with the uniformly most powerful unbiased test applied to the distribution of cities. Phys. Rev. E 83, 036111 (2011). https://doi.org/10.1103/PhysRevE.83.036111 (27) Ioannides, Y., Skouras, S.: Us city size distribution: Robustly pareto, but only in the tail. Journal of Urban Economics 73(1), 18–29 (2013). https://doi.org/10.1016/j.jue.2012.06.005 (28) Levy, M.: Gibrat’s law for (all) cities: Comment. American Economic Review 99(4), 1672–75 (2009). https://doi.org/10.1257/aer.99.4.1672 (29) Eeckhout, J.: Gibrat’s law for (all) cities. American Economic Review 94(5), 1429–1451 (2004). https://doi.org/10.1257/0002828043052303 (30) Eeckhout, J.: Gibrat’s law for (all) cities: Reply. American Economic Review 99(4), 1676–83 (2009). https://doi.org/10.1257/aer.99.4.1676 (31) Chen, Y.-C., Genovese, C.R., Tibshirani, R.J., Wasserman, L.: Nonparametric modal regression. The Annals of Statistics 44(2), 489–514 (2016). 
https://doi.org/10.1214/15-AOS1373 (32) Wang, Y.: Smoothing Splines: Methods and Applications. CRC press, Boca Raton, Florida (2011) (33) Pew Research Center for the People and the Press: The Early October Social Trends Survey. https://www.pewresearch.org/social-trends/dataset/mobility/ (2009) (34) Mok, D., Wellman, B.: Did distance matter before the internet?: Interpersonal contact and support in the 1970s. Social Networks 29(3), 430–461 (2007). https://doi.org/10.1016/j.socnet.2007.01.009. Special Section: Personal Networks (35) David-Barrett, T., Dunbar, R.I.M.: Fertility, kinship and the evolution of mass ideologies. Journal of Theoretical Biology 417, 20–27 (2017). https://doi.org/10.1016/j.jtbi.2017.01.015 (36) Roberts, S.B., Dunbar, R.I.: Managing relationship decay. Human Nature 26(4), 426–450 (2015) (37) Mossong, J., Hens, N., Jit, M., Beutels, P., Auranen, K., Mikolajczyk, R., Massari, M., Salmaso, S., Tomba, G.S., Wallinga, J., Heijne, J., Sadkowska-Todys, M., Rosinska, M., Edmunds, W.J.: Social contacts and mixing patterns relevant to the spread of infectious diseases. PLOS Medicine 5(3), 1–1 (2008). https://doi.org/10.1371/journal.pmed.0050074 (38) Perra, N.: Non-pharmaceutical interventions during the covid-19 pandemic: A review. Physics Reports 913, 1–52 (2021). https://doi.org/10.1016/j.physrep.2021.02.001 (39) Litwak, E.: Occupational mobility and extended family cohension. American sociological review, 9–21 (1960) (40) Litwak, E.: Geographic mobility and extended family cohesion. American Sociological Review, 385–394 (1960) (41) d’Artis Kancs: The economic geography of labour migration: Competition, competitiveness and development. Applied Geography 31(1), 191–200 (2011). https://doi.org/10.1016/j.apgeog.2010.04.003. Hazards (42) Newman, M.: Networks. OUP Oxford, Oxford, UK (2018). https://books.google.com/books?id=YdZjDwAAQBAJ (43) Dickison, M., Havlin, S., Stanley, H.E.: Epidemics on interconnected networks. Phys. Rev. E 85, 066109 (2012). 
https://doi.org/10.1103/PhysRevE.85.066109 (44) Holme, P., Saramäki, J.: Temporal networks. Physics Reports 519(3), 97–125 (2012). https://doi.org/10.1016/j.physrep.2012.03.001. Temporal Networks (45) Porter, M., Gleeson, J.: Dynamical Systems on Networks: A Tutorial. Frontiers in Applied Dynamical Systems: Reviews and Tutorials. Springer, Darmstadt, Germany (2016). https://books.google.com/books?id=uzDuCwAAQBAJ (46) Karrer, B., Newman, M.E.J.: Message passing approach for general epidemic models. Phys. Rev. E 82, 016101 (2010). https://doi.org/10.1103/PhysRevE.82.016101 (47) Karsai, M., Kivelä, M., Pan, R.K., Kaski, K., Kertész, J., Barabási, A.-L., Saramäki, J.: Small but slow world: How network topology and burstiness slow down spreading. Phys. Rev. E 83, 025102 (2011). https://doi.org/10.1103/PhysRevE.83.025102 (48) Woltring, H.J.: A fortran package for generalized, cross-validatory spline smoothing and differentiation. Advances in Engineering Software (1978) 8(2), 104–113 (1986). https://doi.org/10.1016/0141-1195(86)90098-7 (49) Golub, G.H., Heath, M., Wahba, G.: Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 21(2), 215–223 (1979) https://www.tandfonline.com/doi/pdf/10.1080/00401706.1979.10489751. https://doi.org/10.1080/00401706.1979.10489751 (50) The Scipy Community https://docs.scipy.org/doc/scipy/reference/generated/scipy.interpolate.make_smoothing_spline.html#scipy.interpolate.make_smoothing_spline. Accessed: 2023-03-21 (51) Hastie, T., Tibshirani, R., Friedman, J.H., Friedman, J.H.: The Elements of Statistical Learning: Data Mining, Inference, and Prediction (2nd Ed.). Springer, New York, NY (2009) (52) Silverman, B.W.: Density Estimation for Statistics and Data Analysis vol. 26. CRC press, Boca Raton, FL (1986) (53) Bureau of Labor Statistics: American Time Use Survey. 
https://www.bls.gov/tus (54) National Bureau of Economic Research: Census Core-Based Statistical Area (CBSA) to Federal Information Processing Series (FIPS) County Crosswalk (55) U.S. Census Bureau: Total population, 2015-2019 American Community Survey 5-year estimates. https://data.census.gov (2020) (56) Deming, W.E., Stephan, F.F.: On a least squares adjustment of a sampled frequency table when the expected marginal totals are known. The Annals of Mathematical Statistics 11(4), 427–444 (1940). Full publication date: Dec., 1940 (57) Idel, M.: A review of matrix scaling and Sinkhorn’s normal form for matrices and positive maps (2016)
AutoMSC: Automatic Assignment of Mathematics Subject Classification Labels

Moritz Schubotz, Philipp Scharpf (University of Konstanz, Germany, {first.last}@uni-konstanz.de); Olaf Teschke, Andreas Kühnemund (FIZ-Karlsruhe, Germany, {first.last}@fiz-karlsruhe.de); Corinna Breitinger, Bela Gipp

Abstract

Authors of research papers in mathematics and other math-heavy disciplines commonly employ the Mathematics Subject Classification (MSC) scheme to search for relevant literature. The MSC is a hierarchical alphanumerical classification scheme that allows librarians to specify one or multiple codes for publications. Digital libraries in mathematics, as well as reviewing services such as zbMATH and Mathematical Reviews (MR), rely on these MSC labels in their workflows to organize the abstracting and reviewing process. In particular, the coarse-grained classification determines the subject editor who is responsible for the actual reviewing process. In this paper, we investigate the feasibility of automatically assigning a coarse-grained primary classification using the MSC scheme, regarding the problem as a multi-class classification machine learning task. We find that our method achieves an $F_{1}$-score of over 77%, which is remarkably close to the agreement between zbMATH and MR ($F_{1}$-score of 81%). Moreover, we find that the method's confidence score allows for reducing the manual coarse-grained classification effort by 86% while maintaining a precision of 81% for automatically classified articles.

1 Introduction

zbMATH (https://zbmath.org/) has classified more than 135k articles in 2019 using the Mathematics Subject Classification (MSC) scheme [Khnemund2016].
With more than 6,600 MSC codes, this classification task requires significant in-depth knowledge of various sub-fields of mathematics to determine the fitting MSC codes for each article. In summary, the classification procedure of zbMATH and MR is two-fold. First, all articles are pre-classified into one of 63 primary subjects, spanning from general topics in mathematics (00), to integral equations (45), to mathematics education (97). In a second step, subject editors assign fine-grained MSC codes in their area of expertise, among others with the aim of matching potential reviewers.

The automated assignment of MSC labels was analyzed by \citeauthor{RehurekS08} [RehurekS08] in \citeyear{RehurekS08} on the DML-CZ [SojkaR07] and NUMDAM [BoucheL17] full-text corpora. They report a micro-averaged $F_{1}$ score of 81% for their public corpus. In \citeyear{BarthelTB13}, \citeauthor{BarthelTB13} performed automated subject classification for parts of the zbMATH corpus [BarthelTB13]. They criticized the micro-averaged $F_{1}$ measure, especially if the average is applied only to the best-performing classes. Nevertheless, they report a micro-averaged $F_{1}$ score of $67.1\%$ for the zbMATH corpus. They suggested training classifiers for a precision of $95\%$ and assigning MSC class labels in a semi-automated recommendation setup. Moreover, they suggested measuring the human baseline (inter-annotator agreement) for the classification task, and they found that the combination of mathematical expressions and textual features substantially improves the $F_{1}$ score for certain MSC classes. In \citeyear{SchonebergS14}, \citeauthor{SchonebergS14} [SchonebergS14] implemented a method that combines formulae and text using an adapted part-of-speech tagging approach. Their paper reported a sufficient precision of $>.75$; however, it did not state the recall.
The proposed method was implemented and is currently being used, especially to pre-classify general journals [zbMATH06353861] with additional information such as references. For the majority of journals, coarse- and fine-grained codes can be found by statistically analyzing the MSC codes of the referenced documents matched within the zbMATH corpus. The editor of zbMATH hypothesizes that this reference method outperforms the algorithm developed by \citeauthor{SchonebergS14}. To confirm or reject this hypothesis was one motivation for this project. The positive effect of mathematical features is confirmed by \citeauthor{SuzukiF17} [SuzukiF17], who measured the classification performance on an arXiv and a mathoverflow dataset. In contrast, \citeauthor{Scharpf2020} [Scharpf2020] could not measure a significant improvement in classification accuracy for the arXiv dataset when incorporating mathematical identifiers. In their experiments, \citeauthor{Scharpf2020} evaluated numerous machine learning methods, which extended [Evans17, SojkaNALS19] in terms of accuracy and run-time performance, and found that complex, compute-intensive neural networks do not significantly improve the classification performance.

In this paper, we focus on the coarse-grained classification of the primary MSC subject number (pMSCn) and explore how current machine learning approaches can be employed to automate this process. In particular, we compare the current state-of-the-art technology [Scharpf2020] with a part-of-speech (POS) preprocessing based system from 2014, customized for the application in zbMATH [SchonebergS14]. We define the following research questions:

1. Which evaluation metrics are most useful to assess the classifications?
2. Do mathematical formulae as part of the text improve the classifications?
3. Does POS preprocessing [SchonebergS14] improve the accuracy of classifications?
4. Which features are most important for accurate classification?
5.
How well do automated methods perform in comparison to a human baseline?

2 Method

To investigate the given set of problems, we first created test and training datasets. We then investigated the different pMSCn encodings, trained our models, and evaluated the results, cf. Figure 1.

2.1 Generation of a test and training dataset

Filter current high-quality articles: The zbMATH database has assigned MSC codes to more than 3.6 M articles. However, the way in which mathematical articles are written has changed over the last century, and the classification of historic articles is not something we aim to investigate in this article. The first MSC was created in 1990 and has since been updated every ten years (2000, 2010, and 2020) [MSC2010]. With each update, automated rewrite rules are applied to map the codes from the old MSC to the next MSC version, which is connected with a loss of accuracy of the class labels. To obtain a coherent and high-quality dataset for training and testing, we focused on the more recent articles from 2000 to 2019, which were classified using the MSC version 2010, and we only considered selected journals (the list of selected journals is available from https://zbmath.org/?q=dt%3Aj+st%3Aj+py%3A2000-2019). Additionally, we restricted our selection to English articles and limited ourselves to abstracts rather than reviews of articles. To be able to compare methods that are based on references with methods using text and title, we only selected articles with at least one reference that could be matched to another article. In addition, we excluded articles that were not yet published and processed. The list of articles is available from our website: https://automsceval.formulasearchengine.com

Splitting into test and training sets: After applying the filter criteria mentioned above, we split the resulting list of 442,382 articles into test and training sets. For the test set, we aimed to measure the bias of our zbMATH classification labels.
Therefore, we used the articles for which we knew the classification labels assigned by the MR service, from a previous research project [Bannister2018], as the test set. The resulting test set consisted of $n=32{,}230$ articles, and the training set contained 410,152 articles. To ensure that this selection did not introduce additional bias, we also computed the standard ten-fold cross validation, cf. Section 3.

Definition of the article data format: To allow for reproducibility, we created a dedicated dataset from our article selection, which we aim to share with other researchers. Currently, however, legal restrictions apply and the dataset cannot yet be provided for anonymous download, although we can grant access for research purposes as done in the past [BarthelTB13]. Each of the 442,382 articles in the dataset contained the following fields:

de: an eight-digit ID of the document (the fields de and labels must not be used as input to the classification algorithm).
labels: the actual MSC codes.
title: the English title of the document, with LaTeX macros for mathematical language [Schubotz2019b].
text: the text of the abstract, with LaTeX macros.
mscs: a comma-separated list of MSC codes generated from the references.

These 5 fields were provided as CSV files to the algorithms. The mscs field was generated as follows: for each reference in the document, we looked up the MSC codes of the reference. For example, if a certain document contained the references $A,B,C$ that are also documents in zbMATH, and the MSC codes of $A,B,C$ are $a_{1}$ and $a_{2}$, $b_{1}$, and $c_{1}-c_{3}$, respectively, then the field mscs will read $a_{1}a_{2},b_{1},c_{1}c_{2}c_{3}.$

After training, we required each of our tested algorithms to return the following fields in CSV format for the test sets:

de (integer): eight-digit ID of the document.
method (char(5)): five-letter ID of the run.
pos (integer): position in the result list.
coarse (integer): coarse-grained MSC subject number.
fine (char(5), optional): fine-grained MSC code.
score (numeric, optional): self-confidence of the algorithm about the result.

We ensured that the fields de, method, and pos form a primary key, i.e., no two entries in the result can have the same combination of values. Note that for the current multi-class classification problem, pos is always 1, since only the primary MSC subject number is considered.

2.2 Definition of evaluation metrics

While the assignment of all MSC codes to each article is a multi-label classification task, the assignment of the primary MSC subject, which we investigate in this paper, is only a multi-class classification problem. With $k=63$ classes, the probability of randomly choosing the correct class of size $c_{i}$ is rather low: $P_{i}=\frac{c_{i}}{n}.$ Moreover, the dataset is not balanced. In particular, the entropy $H=-\sum_{i=1}^{k}P_{i}\log P_{i}$ can be used to measure the imbalance $\widehat{H}=\frac{H}{\log k}$ by normalizing it to the maximum entropy $\log{k}.$ To take into account the imbalance of the dataset, we used weighted versions of precision $p$, recall $r$, and the $F_{1}$ measure $f$. In particular, the precision is $p=\frac{\sum_{i=1}^{k}c_{i}p_{i}}{n}$ with the class precision $p_{i}$; $r$ and $f$ are defined analogously. The test set contains no entries for the pMSCn 97 (mathematics education), thus $$\widehat{H}=\frac{H}{\log k}=\frac{3.44}{\log 62}=.83.$$ Moreover, we eliminate the effect of classes with only a few samples by disregarding all classes with fewer than 200 entries. While pMSCn with few samples have little effect on the average metrics, the individual values are distracting in plots and data tables.
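The weighted metrics and the normalized entropy can be sketched as follows (a minimal illustration; function and variable names are ours, not from the evaluation code):

```python
import numpy as np

def weighted_metrics(class_sizes, p_cls, r_cls):
    """Class-size-weighted precision, recall, and F1, plus the
    normalized entropy H_hat that quantifies class imbalance."""
    c = np.asarray(class_sizes, dtype=float)
    n = c.sum()
    P = c / n                          # class probabilities P_i = c_i / n
    H = -np.sum(P * np.log(P))         # entropy of the label distribution
    H_hat = H / np.log(len(c))         # normalize by maximum entropy log k
    p_cls = np.asarray(p_cls, dtype=float)
    r_cls = np.asarray(r_cls, dtype=float)
    f_cls = 2 * p_cls * r_cls / (p_cls + r_cls)  # per-class F1 (assumes p+r > 0)
    p = np.sum(c * p_cls) / n          # weighted precision
    r = np.sum(c * r_cls) / n          # weighted recall
    f1 = np.sum(c * f_cls) / n         # weighted F1
    return p, r, f1, H_hat
```

For a perfectly balanced dataset, H_hat equals 1; the value of .83 quoted above reflects the imbalance of the pMSCn distribution.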
Choosing 200 as the minimum evaluation class size reduces the number of effective classes to $k=37$, which has only a minor effect on the normalized entropy, raising it to $\widehat{H}=.85.$ The chosen value of 200 can be interactively adjusted in the dynamic result figures we made available online (https://autoMSCeval.formulasearchengine.com). Additionally, the individual values for $P_{i}$ that were used to calculate $H$ are given in the column p in the table on that page. As one can verify in the online version of the figures, the impact of the choice of the minimum class size is insignificant.

2.3 Selection of methods to evaluate

In this paper, we compare 12 different methods for (automatically) determining the primary MSC subject in the test dataset:

zb1: Reference MSC subject numbers from zbMATH.

mr1: Reference MSC subject numbers from MR.

titer: According to recent research performed on the arXiv dataset [Scharpf2020], we chose a machine learning method with a good trade-off between speed and performance. We combined the title, abstract text, and reference mscs of the articles via string concatenation. We encoded these string sources using the TfidfVectorizer of the Scikit-learn (https://swmath.org/software/8058) [swSciKit] python package. We did not alter the utf-8 encoding and did not perform accent stripping or other character normalization methods, with the exception of lower-casing. Furthermore, we used the word analyzer without a custom stop word list, selecting tokens of two or more alphanumeric characters, processing unigrams, and ignoring punctuation. The resulting vectors consisted of float64 entries with l2-norm unit output rows. This data was passed to our encoder, which was trained on the training set to subsequently transform, or vectorize, the sources from the test set. We chose a lightweight LogisticRegression classifier from the python package Scikit-learn.
We employed the l2 penalty norm with a $10^{-4}$ tolerance stopping criterion and a regularization strength of 1.0. Furthermore, we allowed intercept constant addition and scaling, but no class weights or a custom random state seed. We fitted the classifier using the lbfgs (limited-memory BFGS) solver with 100 convergence iterations. These choices were made based on a previous study in which we clustered arXiv articles.

refs: Same as titer, but using only the mscs as input (each of these sources was encoded and classified separately).

titls: Same as titer, but using only the title as input.

texts: Same as titer, but using only the text as input.

tite: Same as titer, but without using the mscs as input.

tiref: Same as titer, but without using the abstract text as input.

teref: Same as titer, but without using the title as input.

ref1: We used a simple SQL script to suggest the most frequent primary MSC subject based on the mscs input. This method is currently used in production to estimate the primary MSC subject.

uT1: We adjusted the JAVA program posLingue [SchonebergS14] to read from the new training and test sets. However, we did not perform a new training run and instead reused the model that was trained in 2014. For this run, we removed all mathematical formulae from the title and the abstract text to generate a baseline.

uM1: The same as uT1, but in this instance we included the formulae. We slightly adjusted the formula detection mechanism, since the way in which formulae are written in zbMATH had changed [Schubotz2019b]. This method is currently used in production for articles that do not have references with resolvable mscs.
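Under the settings listed for titer, the pipeline corresponds roughly to the following Scikit-learn sketch. The toy documents and labels are illustrative only; the actual runs use the CSV fields (title, text, mscs) described in Section 2.1:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# TF-IDF encoding with lower-casing, word unigrams of two or more
# alphanumeric characters, and l2-normalized rows, followed by a
# LogisticRegression classifier with the parameters stated above.
pipeline = make_pipeline(
    TfidfVectorizer(lowercase=True, analyzer="word",
                    token_pattern=r"(?u)\b\w\w+\b",
                    ngram_range=(1, 1), norm="l2"),
    LogisticRegression(penalty="l2", tol=1e-4, C=1.0,
                       fit_intercept=True, solver="lbfgs", max_iter=100),
)

# Toy training data (hypothetical): two pMSCn classes, 20 (group theory)
# and 35 (partial differential equations).
train_docs = [
    "finite group theory and group actions",
    "representation of groups and group algebras",
    "boundary value problem for partial differential equations",
    "elliptic partial differential equations and weak solutions",
]
train_labels = [20, 20, 35, 35]
pipeline.fit(train_docs, train_labels)
```

The classifier's `predict_proba` output serves as the self-confidence score reported in the score field of the result CSV.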
Estimating redshift distributions using Hierarchical Logistic Gaussian processes

Markus Michael Rau${}^{1}$, Simon Wilson${}^{2}$, Rachel Mandelbaum${}^{1}$

${}^{1}$McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213
${}^{2}$School of Computer Science and Statistics, Lloyd Institute, Trinity College, Dublin, Ireland

E-mail: [email protected]

Abstract

This work uses hierarchical logistic Gaussian processes to infer the true redshift distributions of samples of galaxies through their cross-correlations with spatially overlapping spectroscopic samples. We demonstrate that this method can accurately estimate these redshift distributions in a fully Bayesian manner, jointly with galaxy-dark matter bias models. We forecast how systematic biases in the redshift-dependent galaxy-dark matter bias model affect redshift inference. Using published galaxy-dark matter bias measurements from the Illustris simulation, we compare these systematic biases with the statistical error budget of a forecasted weak gravitational lensing measurement. If the redshift-dependent galaxy-dark matter bias model is mis-specified, redshift inference can be biased. This can propagate into relative biases in the weak lensing convergence power spectrum at the 10%-30% level. We therefore showcase a methodology to detect these sources of error using Bayesian model selection techniques. Furthermore, we discuss the improvements that can be gained from incorporating prior information from Bayesian template fitting into the model, both in redshift prediction accuracy and in the detection of systematic modeling biases.
keywords: galaxies: distances and redshifts, catalogues, surveys, correlation functions

1 Introduction

In the era of large-area photometric surveys like DES (e.g. Abbott et al., 2018), KiDS (e.g. Hildebrandt et al., 2017), HSC (e.g. Aihara et al., 2018) and future surveys like LSST (e.g. Ivezić et al., 2008), WFIRST (e.g. Spergel et al., 2015) and Euclid (e.g. Laureijs et al., 2011), it becomes increasingly important to accurately model sources of systematic bias in weak gravitational lensing and large-scale structure probes (e.g. Mandelbaum, 2018). One of the most important systematics in the context of photometric surveys is associated with inaccurate measurements of distance, or redshift, using the galaxies' photometry (e.g. Ma et al., 2006; Bernstein & Huterer, 2010). These photometric redshifts can be inferred by a variety of techniques. In 'template fitting', we fit models for the spectral energy distribution (SED) of galaxies to their photometry (e.g. Arnouts et al., 1999; Benítez, 2000; Ilbert et al., 2006; Feldmann et al., 2006; Greisel et al., 2015; Leistedt et al., 2016). Alternative techniques use machine learning to infer a flexible mapping between the galaxies' photometry and redshift, using a spectroscopic training set (Tagliaferri et al., 2003; Collister & Lahav, 2004; Gerdes et al., 2010; Carrasco Kind & Brunner, 2013; Bonnett, 2015; Rau et al., 2015; Hoyle, 2016). More recently, a combination of these approaches has also been investigated (Speagle & Eisenstein, 2015; Hoyle et al., 2015). Traditionally, photometric redshifts are validated using redshifts derived from spectroscopic observations, which require long exposure times.
Since spectroscopic observations are consequently quite costly, the completeness of these surveys depends strongly on galaxy type, and it can be impractical to obtain complete spectroscopic validation samples that extend towards faint magnitudes (see e.g. Huterer et al., 2014; Newman et al., 2015). Cross-correlation techniques are a practical alternative to the aforementioned inference techniques that use information from the galaxies' photometry (e.g. Newman, 2008a; Ménard et al., 2013; McQuinn & White, 2013; Scottez et al., 2016; Raccanelli et al., 2017; Morrison et al., 2016; Davis et al., 2017; Gatti et al., 2018). These methods use galaxy samples for which spectroscopic or highly accurate photometric redshifts are available and cross-correlate them with the sample for which redshifts need to be inferred. Since this sample typically has only photometric information available, we will refer to it as the 'photometric sample', even though we are not using its photometry in the cross-correlation technique. Except for the effect of cosmic magnification (see e.g. Scranton et al., 2005), this cross-correlation signal is only non-zero if both samples are at the same redshift. Therefore this methodology allows one to learn about the ensemble redshift distribution of the photometric sample. In contrast, the aforementioned techniques that use galaxy photometry constrain both the redshifts of the individual galaxies and the ensemble redshift distribution of the sample. In this paper we use the term 'photo-z' distribution to denote ensemble redshift distributions of photometric samples of galaxies, i.e. not individual galaxies, even in the context of cross-correlation techniques that do not use the photometry of the galaxies. Extensions of this method employ different cosmological observables like weak gravitational lensing (e.g.
Benjamin et al., 2013), or jointly marginalize over cosmological parameters while assuming Gaussian photometric selection functions (McLeod et al., 2017). In more recent developments, Hoyle & Rau (2018) demonstrated that redshift information can be extracted, even without the use of a spectroscopic reference sample, solely from the correlation functions of the photometric sample itself, if the photo-z distributions follow a known family of distributions like Gaussians or simple mixtures thereof. The first steps towards combining template fitting with cross-correlations were made by Sánchez & Bernstein (2019), who use a simplified model for the redshift distribution and the cross-correlation measurement that fixes the galaxy-dark matter bias. They nicely demonstrate the improvements that can be gained by combining both complementary approaches in a Bayesian manner. Recently a similar approach has been applied to obtain redshift information for blended sources (Jones & Heavens, 2019). In this paper we extend and complement the aforementioned techniques by introducing the logistic Gaussian process as a flexible prior distribution to model photometric redshift distributions in a cross-correlation setting. This model easily allows the incorporation of prior knowledge of the redshift distribution and can be efficiently sampled with the methodology presented in this paper. As the redshift-dependent galaxy-dark matter bias of the photometric sample is degenerate with a flexible parametrization of the photometric redshift distribution, we marginalize over our uncertainty in this function. Gaussian processes are a popular machine learning method in cosmology, used for example for machine learning-based photometric redshift estimation (e.g. Almosallam et al., 2016) or to interpolate and smooth redshift histograms obtained using cross-correlation measurements (Johnson et al., 2017).
Our method differs from these previous applications of Gaussian processes, as we use the logistic Gaussian process as a prior on the shape of the logarithm of the redshift distribution and not as a regression model. In this way we can jointly marginalize over the parametrization of the photometric redshift distribution and other systematic uncertainties like galaxy-dark matter bias. We specifically discuss how systematic biases in the modelling of the redshift-dependent galaxy-dark matter bias affect redshift inference. We then showcase how Bayesian model comparison can be used to detect these biases and investigate the effect of incorporating additional information from e.g. template fitting on both the predictive accuracy of our method and its robustness with respect to the aforementioned modelling systematics. We demonstrate this framework using a simulated mock data vector that uses published correlation function estimates from the Illustris simulation, which provide galaxy-dark matter bias information. In this work we apply our methodology to two galaxy samples, i.e. the spectroscopic and photometric samples, that have different galaxy-dark matter bias properties. In the future we will apply our methodology to more realistic scenarios that accurately model the galaxy type selection of DESI-like spectroscopic surveys. This paper is structured as follows: in §2 we describe our methodology, including our modelling of the angular correlation functions, covariances and galaxy-dark matter bias. In §3 we introduce our statistical model and sampling methodology. In §4 we showcase the performance and accuracy of our methodology and study how systematic errors in the redshift-dependent galaxy-dark matter bias impact redshift inference. We also propose Bayesian model comparison techniques as a way to detect these biases, which allows us to mitigate them by refining our modelling. We summarize and conclude in §5.
2 Methodology Measurements of statistics of the density field are important for constraining cosmological parameters. Consider the product density $\rho(\mathbf{x_{1}},\mathbf{x_{2}})$ of finding two galaxies with positions $\mathbf{x_{1}}$ and $\mathbf{x_{2}}$ in the infinitesimal volumes $\mathrm{d}V(\mathbf{x_{1}})$ and $\mathrm{d}V(\mathbf{x_{2}})$. If the point process that describes the galaxy density field at redshift $z$ is stationary and isotropic, this quantity depends only on the distance between the two galaxies $r=||\mathbf{x_{1}}-\mathbf{x_{2}}||$ and we can write (see e.g. Kerscher, 1999) $$\rho(\mathbf{x_{1}},\mathbf{x_{2}})=\overline{\rho}^{2}\,(1+\xi(r))\,.$$ (1) Here, $\xi(r)$ denotes the two-point correlation function of galaxies (the function $g(r)=1+\xi(r)$ is referred to as the ‘pair correlation function’ in statistics; Stoyan & Stoyan, 1994) and $\overline{\rho}$ denotes the average density of galaxies (per volume). The two-point correlation function is a statistic of the density field and can be used to constrain cosmological parameters. On the plane of the sky, we can define the angular correlation function $w(\theta)$ in analogy to $\xi(r)$ in Eq. (1) by replacing $\overline{\rho}$ with the average density of galaxies per unit area and $\xi(r)$ with $w(\theta)$, where $\theta$ denotes the angular distance between the galaxy pair. The basis of cross-correlation redshift techniques are angular correlation functions between photometric and spectroscopic samples. The cross-correlation amplitude (here to be understood as the angular correlation function averaged over an angular interval according to Eq. 9)
$\widehat{w}_{\rm phot-spec}(z)$ between the photometric ($\rm phot$) and spectroscopic sample ($\rm spec$) at redshift $z$ can be written as $$\widehat{w}_{\rm phot-spec}(z)\propto p_{\rm phot}(z)\,p_{\rm spec}(z)\,b_{\rm phot}(z)\,b_{\rm spec}(z)\,w_{\rm DM}(z)\,.$$ (2) The probability density functions $p_{\rm phot/spec}$ of the photometric and spectroscopic samples denote the probability of finding a galaxy in the sample at redshift $z$. These functions are therefore normalized to integrate to unity (the unit of a probability density function is the reciprocal unit of its argument; while the redshift has no unit, we will later use the probability density function of the comoving distance of the galaxies, which has units of inverse distance). The terms $b_{\rm phot/spec}$ denote the galaxy-dark matter bias of the respective samples and $w_{\rm DM}(z)$ the amplitude of the dark matter density component. The squared galaxy-dark matter bias denotes the ratio between the two-point correlation function measured on the density field of tracers, i.e. the galaxies in the respective sample, and the two-point correlation function measured on the underlying dark matter density field. From Eq. (2) we expect a significant angular cross-correlation signal only between samples that overlap both spatially and in redshift. We then measure the cross-correlation between the photometric and spectroscopic samples binned in redshift intervals, e.g. tophat bins. In this way we can derive redshift information for the photometric sample from these cross-correlation signals. However, as can be seen in Eq. (2), a redshift-dependent galaxy-dark matter bias evolution of the samples is degenerate with the photometric redshift distribution $p_{\rm phot}(z)$ that we want to infer. We therefore need to model these factors accurately.
The goal of this paper is to test our Bayesian redshift inference methodology specifically with respect to systematic errors in the modelling of a redshift-dependent galaxy-dark matter bias. Furthermore we investigate how these errors can be detected and corrected in a practical analysis. We note that our goal is not to forecast the redshift quality of any particular survey. We will make a number of simplifications in the modelling of the angular correlation function in §2.1, the assumed redshift distribution and scale length model of the photometric and spectroscopic samples in §2.2, and the modelling of the covariance matrix in §2.3. For easy reference we summarize these assumptions in §2.5. The fiducial cosmological model used in this work is shown in Tab. 1. 2.1 Modelling the angular galaxy correlation function We use the simple modelling of the angular correlation function used in previous cross-correlation analyses (Newman, 2008b; Matthews & Newman, 2010) that exploits the Limber approximation (Limber, 1953) for the redshift projection and assumes a power-law shape of the correlation function at small scales. Using both approximations we are able to obtain an analytical model for the clustering signal. The gained computational efficiency allows us to more conveniently experiment with complex photometric redshift parametrizations, while still maintaining a practical methodology within the testable and well-known limits of the imposed approximations. For small scales below $\sim 15~{\rm Mpc/h}$ (Zehavi et al., 2002; Simon, 2007; Springel et al., 2018), but above $\sim 1~{\rm Mpc/h}$ (Springel et al., 2018, Fig. 1), we can approximate the correlation function $\xi(r)$ as a power law $$\xi(r)=\left(\frac{r}{r_{0}}\right)^{-\gamma}\,,$$ (3) where the clustering scale length $r_{0}$ and the exponent $\gamma$ are functions of redshift $z$, or equivalently of comoving distance.
As explained in the next section, we will model the redshift dependence of these values using the functions measured in the Illustris simulation for different galaxy samples by Springel et al. (2018). The Limber approximation assumes that the redshift distribution varies little compared with the coherence length of the considered clustering density field (Bartelmann & Schneider, 2001, p. 43). Following Simon (2007) we write the angular correlation function as $$w(\theta)=\int_{0}^{\infty}d\overline{r}\,p_{1}(\overline{r})\,p_{2}(\overline{r})\int d\Delta r\,\xi(R,\overline{r})\,,$$ (4) where we used the variables $R=\sqrt{\overline{r}^{2}\theta^{2}+\Delta r^{2}}$, $\overline{r}=\frac{r_{1}+r_{2}}{2}$ and $\Delta r=r_{2}-r_{1}$, and denote with $r_{1}$ and $r_{2}$ the radial distances of a pair of galaxies. The functions $p_{1/2}(\overline{r})$ denote the sample comoving distance distributions, which are connected with the redshift distributions $p(z)$ as $p(\overline{r})=p(z)\left|dz/d\overline{r}\right|$. Using the power-law approximation in Eq. (3), the angular correlation function can be expressed as $$w(\theta)=\int_{0}^{\infty}d\overline{r}\,A_{w}(\overline{r})\,\left(\frac{\theta}{1\,{\rm RAD}}\right)^{1-\gamma(\overline{r})}\,,$$ (5) where $$A_{w}(\overline{r})=\sqrt{\pi}\,r_{0}(\overline{r})^{\gamma(\overline{r})}\,\left(\frac{\Gamma(\gamma(\overline{r})/2-1/2)}{\Gamma(\gamma(\overline{r})/2)}\right)\,p_{1}(\overline{r})\,p_{2}(\overline{r})\,d_{A}(\overline{r})^{1-\gamma(\overline{r})}\,.$$ (6) Here, $d_{A}(\overline{r})$ denotes the angular diameter distance and $\Gamma(x)$ denotes the gamma function.
To simplify the calculation, we discretize the comoving distance dependence of the scale length $r_{0}(\overline{r})$, the exponent $\gamma(\overline{r})$ and the photometric redshift distribution into the same $N$ comoving distance bins $$w(\theta)=\sum_{i=1}^{N}\overline{A}_{w}^{i}\left(\frac{\theta}{1\,{\rm RAD}}\right)^{1-\gamma_{i}}\,,$$ (7) where $$\overline{A}_{w}^{i}=\sqrt{\pi}\,r_{0,i}^{\gamma_{i}}\,\left(\frac{\Gamma(\gamma_{i}/2-1/2)}{\Gamma(\gamma_{i}/2)}\right)\,p_{1}^{i}\,p_{2}^{i}\,\int_{r_{\rm L,i}}^{r_{\rm R,i}}d_{A}(\overline{r})^{1-\gamma_{i}}\,d\overline{r}\,.$$ (8) Here $r_{\rm L,i}$ and $r_{\rm R,i}$ denote the left and right edges of comoving distance bin $i$. As discussed in more detail in the next section, we bin the angular correlation function in $\theta$ space (see e.g. Salazar-Albornoz et al., 2017) $$\widehat{w}=\frac{1}{\Delta\Omega}\int d\Omega\,w(\theta)\,,$$ (9) where $\widehat{w}$ denotes the binned angular correlation function and $\Omega$ denotes the solid angle, with $$\Delta\Omega=2\pi\int_{\theta_{L}}^{\theta_{R}}d\theta'\,\sin{\theta'}\approx\pi\,(\theta_{R}^{2}-\theta_{L}^{2})\,.$$ (10) Here we used the small-angle approximation $\sin\theta\approx\theta$, which is valid to good accuracy within small angular intervals $\theta\in[\theta_{L},\theta_{R}]$. We thus obtain $$\widehat{w}=\frac{2}{\theta_{R}^{2}-\theta_{L}^{2}}\sum_{i=1}^{N}\,\overline{A}_{w}^{i}\left(\frac{\theta_{R}^{3-\gamma_{i}}-\theta_{L}^{3-\gamma_{i}}}{3-\gamma_{i}}\right)\,.$$ (11) To model the scale length of the photometric and spectroscopic samples, we assume linear biasing (see e.g.
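The discretized estimator of Eqs. (7)-(11) is straightforward to code up. Below is a minimal Python sketch; the function name is ours, and we assume the per-bin integral of $d_{A}(\overline{r})^{1-\gamma_{i}}$ from Eq. (8) has been precomputed:

```python
import numpy as np
from scipy.special import gamma as Gamma

def binned_w(theta_L, theta_R, r0, gam, p1, p2, dA_int):
    """Binned angular correlation function, Eqs. (7)-(11).

    theta_L, theta_R : edges of the angular bin, in radians
    r0, gam          : per-comoving-bin scale length and power-law slope
    p1, p2           : per-bin comoving distance densities of the two samples
    dA_int           : per-bin integral of d_A(r)^(1 - gamma_i), cf. Eq. (8)
    """
    # Eq. (8): amplitude of each comoving distance bin
    A = (np.sqrt(np.pi) * r0**gam
         * Gamma(gam / 2 - 0.5) / Gamma(gam / 2)
         * p1 * p2 * dA_int)
    # Eq. (11): bin-averaged power law, using the small-angle approximation
    return (2.0 / (theta_R**2 - theta_L**2)
            * np.sum(A * (theta_R**(3 - gam) - theta_L**(3 - gam)) / (3 - gam)))
```

As a sanity check, for a single comoving bin with $\gamma_{i}=2$ the expression collapses to $2\pi\,r_{0}^{2}\,p_{1}\,p_{2}\,\int d_{A}^{-1}d\overline{r}\,/(\theta_{L}+\theta_{R})$.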
Matthews & Newman, 2010): $$r_{\rm 0,sp}(\overline{r})^{\gamma_{\rm sp}(\overline{r})}=\sqrt{r_{\rm 0,ss}(\overline{r})^{\gamma_{\rm ss}(\overline{r})}\,r_{\rm 0,pp}(\overline{r})^{\gamma_{\rm pp}(\overline{r})}}\,,$$ (12) where $r_{\rm 0,sp}(\overline{r})$, $r_{\rm 0,ss}(\overline{r})$, $r_{\rm 0,pp}(\overline{r})$ and $\gamma_{\rm sp}(\overline{r})$, $\gamma_{\rm ss}(\overline{r})$, $\gamma_{\rm pp}(\overline{r})$ denote the comoving distance-dependent scale length and slope of the two-point correlation function (Eq. 3) for the cross-correlation between the spectroscopic and photometric samples, for the spectroscopic auto-correlation, and for the photometric auto-correlation, respectively. Linear biasing is also assumed to obtain the redshift-dependent galaxy bias models for the modelling of the covariance matrix in §2.3, which employs angular correlation power spectra of the cross- and auto-correlations $$b_{\rm sp,ss,pp}(\overline{r})^{2}=\left(\frac{r_{\rm 0,sp,ss,pp}(\overline{r})}{r_{\rm 0,tot}(\overline{r})}\right)^{\gamma(\overline{r})}\,.$$ (13) Here $r_{\rm 0,tot}(\overline{r})$ denotes the scale length of the total matter component measured in Springel et al. (2018) on scales $5\,h^{-1}{\rm Mpc}$ to $20\,h^{-1}{\rm Mpc}$. In this range, the total matter power spectrum is very similar to the dark matter-only power spectrum (Springel et al., 2018, Fig. 9). We therefore use it for the definition of the galaxy-dark matter bias in Eq. (13). Within the considered scales, we expect the influence of scale-dependent biasing to be quite low (Springel et al., 2018, Fig. 19, top right panel). We therefore use the same redshift-dependent power-law slope $\gamma$ for the dark matter and galaxy components. Similarly, since the redshift-dependent power-law slope $\gamma$ does not depend much on stellar mass, as found in Springel et al. (2018, Fig. 14), we will use the same slope for all galaxy samples considered in this forecast.
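Under the linear biasing assumption, Eqs. (12) and (13) are simple algebraic relations; a short sketch with hypothetical function names:

```python
import numpy as np

def cross_amplitude(r0_ss, gam_ss, r0_pp, gam_pp):
    """Eq. (12): the cross-correlation amplitude r_{0,sp}^{gamma_sp} is the
    geometric mean of the spectroscopic and photometric auto amplitudes."""
    return np.sqrt(r0_ss**gam_ss * r0_pp**gam_pp)

def bias_squared(r0, r0_tot, gam):
    """Eq. (13): squared galaxy-dark matter bias from the ratio of the scale
    lengths of the galaxy sample and the total matter component."""
    return (r0 / r0_tot)**gam
```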
The used model corresponds to the stellar mass sample $8.5<\log{(M_{*}[h^{-2}M_{\odot}])}<9.0$ shown in Springel et al. (2018, Fig. 14). Furthermore we will assume that the slope and scale length of the two-point correlation function, $\gamma$ and $r_{\rm 0,ss}$, can be measured sufficiently accurately in the spectroscopic sample, compared with the uncertainty in the redshift-dependent scale length of the photometric sample. We will therefore fix these values in the statistical analysis described in §3, based on the high signal-to-noise ratio expected for next generation spectroscopic surveys (see, e.g., DESI Collaboration et al., 2016). This is similar to the approach used in traditional cross-correlation redshift estimates, where one fits the scale length and power-law slope of the spectroscopic sample and subsequently uses these fitted parameters in the cross-correlation analysis assuming linear biasing (e.g. Matthews & Newman, 2012). 2.2 Modelling the redshift distribution and scale length of the galaxy samples The left panels of Fig. 1 and Fig. 2 show the assumed redshift and comoving distance distributions. The black line shows the redshift distribution of the photometric sample and the red bins those of the spectroscopic sample. The right panels of Fig. 1 and Fig. 2 show the scale length models of our mock galaxies. The redshift distribution covers a relatively small redshift range up to $z<1.5$ that is designed to ensure that the assumptions we impose on the redshift evolution of the galaxy-dark matter bias and the power-law exponent can be expected to be valid to a good degree. We refer to §2.4 and §2.5 for a more detailed discussion of these assumptions. We picked a tailed distribution to ensure that our methodology is robust against cross-correlation measurements with very different signal-to-noise levels.
We note however that the particular shape of the photometric redshift distribution does not influence our results on a fundamental level, as we parametrize the photometric redshift distribution using a logistic Gaussian process on the (log) histogram bins, which is a very flexible non-parametric approach. Note, however, that the redshift binning of the spectroscopic sample can introduce an undesired smoothing effect if these bins are chosen to be too large. For more peaked distributions, we therefore might have to consider a finer binning. This effect has been studied in Rau et al. (2017), where recommendations about bin width selection, as well as methods to detect and correct the oversmoothing effect, are described. Since our distribution is quite smooth, we expect that our binning scheme is fine enough to capture the structure of the distribution well. The redshift evolution of the scale length models has been extracted from Springel et al. (2018, Fig. 13) (we extract these functions directly from the paper plots using the WebPlotDigitizer software, https://automeris.io/WebPlotDigitizer/). The redshift evolution of the exponent $\gamma$ is less dependent on stellar mass, as discussed in Springel et al. (2018). For simplicity we therefore choose a common function for these samples that corresponds to the stellar mass bin of Model 5 in Fig. 1, i.e. $8.5<\log{(M_{*}[h^{-2}M_{\odot}])}<9.0$. The different scale length models correspond to the stellar mass ranges of the different galaxy samples that we want to consider in this work. In the following we will denote these models with the numbers quoted in the plot. Since we will compare different combinations of galaxy-dark matter bias models for the spectroscopic and photometric samples, we will use different scale length models for each sample.
As the redshift, or comoving distance, evolution of the scale length cannot be measured directly for the photometric sample, we need to parametrize our uncertainty in this quantity. Matthews & Newman (2012) use a constant factor to correct the scale length models from the measured spectroscopic sample to match the photometric one, i.e. $r_{\rm 0,spec}(z)\propto r_{\rm 0,phot}(z)$. However, we found that a constant shift $\Delta$, i.e. $r_{\rm 0,spec}(z)=\Delta+r_{\rm 0,phot}(z)$, provides a better fit to offset the spectroscopic scale length model (Model 5) to the assumed photometric scale length model (Model 4). Furthermore we would like to study more complicated cases, where the difference in the photometric/spectroscopic scale length models is not a simple offset, but depends in a more complicated manner on the comoving distance. This motivates us to parametrize the comoving distance dependence of the scale length in Chebychev polynomials $$r_{\rm 0,pp}(\overline{r})=\sum_{i=0}^{M}\Delta_{i}\,T_{i}(\overline{r})\,,$$ (14) where the $T_{i}(\overline{r})$ are given by the recurrence relation $$T_{0}(\overline{r})=1\,,$$ (15) $$T_{1}(\overline{r})=\overline{r}\,,$$ (16) $$T_{n+1}(\overline{r})=2\,\overline{r}\,T_{n}(\overline{r})-T_{n-1}(\overline{r})\,.$$ (17) If the coefficients of the higher order terms $T_{i}$ for $i>0$ are the same for the spectroscopic and photometric samples, this parametrization is equivalent to the previously discussed fit between Model 5 and Model 4. We found that the expansion in Chebychev polynomials converges rapidly to the true function. By the properties of this basis, omitting higher order terms in this expansion leads to an intrinsic sparsity in the approximation, and we obtain good fits to the scale length already for $M=2$. The constant offset $\Delta$ then corresponds to the $\Delta_{0}$ parameter in this expansion.
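The recurrence (15)-(17) and the expansion of Eq. (14) can be sketched as follows. Note that Chebychev polynomials are defined on $[-1,1]$, so in practice the comoving distance would first be rescaled to that interval (an implementation detail we gloss over here):

```python
import numpy as np

def chebyshev_T(n, x):
    """Chebychev polynomial of the first kind T_n(x), via Eqs. (15)-(17)."""
    if n == 0:
        return np.ones_like(x)
    T_prev, T_curr = np.ones_like(x), np.asarray(x, dtype=float)
    for _ in range(n - 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
    return T_curr

def scale_length(rbar, coeffs):
    """Truncated Chebychev series for r_{0,pp}(rbar), Eq. (14);
    coeffs = [Delta_0, Delta_1, ..., Delta_M]."""
    return sum(c * chebyshev_T(i, rbar) for i, c in enumerate(coeffs))
```

With $M=2$ only three coefficients are fit, and $\Delta_{0}$ plays the role of the constant offset discussed above.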
We note that we use the Chebychev expansion only as a way to interpolate the comoving distance dependence of the scale length. It is no substitute for a physically motivated galaxy-dark matter bias model. 2.3 Modelling the Covariance Matrix The following section describes the modelling of the theoretical covariance matrix. The previous section used a simplified modelling of the angular correlation function. Since computational speed and modelling simplicity are not a concern for the covariance matrix, we use a more accurate approach here. We model the covariance matrix of the angular correlation functions, binned in angular bins $i$, $j$ for the redshift bins $a$ and $b$, by transforming the covariance matrix of the corresponding angular correlation power spectra as (see e.g. Crocce et al., 2011; Salazar-Albornoz et al., 2017) $$\Sigma_{(a,b),(i,j)}=\sum_{\ell\geq 2}\left(\frac{2\ell+1}{4\pi}\right)^{2}\,\widehat{L}_{\ell}^{i}\,\widehat{L}_{\ell}^{j}\,\Sigma_{\ell,\ell}^{(a,b)}\,,$$ (18) where $\Sigma_{\ell,\ell'}^{(a,b)}$ denotes the covariance matrix of the angular correlation power spectra for a pair of redshift bins $a$ and $b$, which is diagonal in $\ell$ (cf. Eq. 21), and $$\widehat{L}_{\ell}^{i}=\frac{2\pi}{\Delta\Omega_{i}}\,\frac{1}{2\ell+1}\,\tilde{L}_{\ell}^{i}\,,\qquad\tilde{L}_{\ell}^{i}=L_{\ell-1}\left(\cos{\theta_{R}^{i}}\right)-L_{\ell+1}\left(\cos{\theta_{R}^{i}}\right)-L_{\ell-1}\left(\cos{\theta_{L}^{i}}\right)+L_{\ell+1}\left(\cos{\theta_{L}^{i}}\right)\,,$$ (19) where the $L_{\ell}(x)$ are Legendre polynomials, binned in the respective angular bin $i$ or $j$. Here $\theta_{L}^{i}$ and $\theta_{R}^{i}$ denote the left and right edges of the angular bins and $\Delta\Omega_{i}$ the solid angle integral defined in Eq. (10).
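The bin-averaged Legendre kernel of Eq. (19) can be written down directly. The sketch below (function name is ours) uses the exact solid angle $\Delta\Omega_{i}=2\pi(\cos\theta_{L}-\cos\theta_{R})$ rather than the small-angle form of Eq. (10):

```python
import numpy as np
from scipy.special import eval_legendre

def binned_legendre(ell, theta_L, theta_R):
    """Bin-averaged Legendre kernel of Eq. (19); bin edges in radians."""
    xL, xR = np.cos(theta_L), np.cos(theta_R)
    dOmega = 2.0 * np.pi * (xL - xR)      # exact solid angle of the annulus
    Ltilde = (eval_legendre(ell - 1, xR) - eval_legendre(ell + 1, xR)
              - eval_legendre(ell - 1, xL) + eval_legendre(ell + 1, xL))
    return 2.0 * np.pi / dOmega / (2 * ell + 1) * Ltilde
```

Since $\int P_{\ell}(x)\,dx=[P_{\ell+1}(x)-P_{\ell-1}(x)]/(2\ell+1)$, this kernel is exactly the average of $P_{\ell}(\cos\theta)$ over the annulus; for $\ell=1$ it reduces to $(\cos\theta_{L}+\cos\theta_{R})/2$, a convenient check.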
We model the covariance matrix between two angular correlation power spectra that correspond to a pair of redshift bins $(i,j)$ and $(k,l)$ as $$\Sigma_{(i,j)}^{(k,l)}(\ell)=A(\ell)\,\left(\bar{C}^{(i,k)}(\ell)\,\bar{C}^{(j,l)}(\ell)+\bar{C}^{(i,l)}(\ell)\,\bar{C}^{(j,k)}(\ell)\right)\,,$$ (20) which neglects correlations between neighboring angular modes $\ell$ and non-Gaussian contributions. The cosmic variance factor $$A(\ell)=\frac{\delta_{\ell,\ell'}}{(2\ell+1)f_{\rm sky}}$$ (21) is inversely proportional to the fractional sky coverage $f_{\rm sky}$, and $\bar{C}^{(i,j)}(\ell)$ denotes the angular power spectrum including the shot noise contribution $$\bar{C}^{(i,j)}(\ell)=C^{(i,j)}(\ell)+\frac{\delta_{i,j}}{\bar{n}_{g}^{i}}\,.$$ (22) Here $\bar{n}_{g}^{i}$ denotes the number of galaxies per steradian in the respective sample; its reciprocal gives the shot noise. The correlation power spectrum of a cosmological density field, e.g. the density field of galaxy positions or the corresponding field of galaxy shapes in lensing, can be defined as (e.g. Kirk et al., 2015) $$C_{\ell}^{i,j}=\frac{2}{\pi}\int W^{i}(\ell,k)\,W^{j}(\ell,k)\,k^{2}\,P(k)\,dk\,,$$ (23) where $P(k)$ denotes the matter power spectrum and $W^{i}(\ell,k)$ are weighting functions that ‘project’ the matter power spectrum along the redshift dimension. For galaxy clustering the weighting function is defined as $$W_{\rm clus}(\ell,k)=\int b_{g}(k,z)\,n(z)\,j_{\ell}(kr(z))\,D(z)\,dz\,,$$ (24) where $b_{g}(k,z)$ denotes the (potentially) redshift- and scale-dependent galaxy-dark matter bias, $n(z)$ denotes the redshift distribution, $j_{\ell}(kr(z))$ denote the spherical Bessel functions and $D(z)$ the growth function.
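The Gaussian covariance assembly of Eqs. (20)-(22) can be sketched in a few lines (our own function name; `C` is a matrix of spectra at fixed $\ell$ and `nbar` holds the per-sample galaxy densities in sr$^{-1}$):

```python
import numpy as np

def gaussian_cl_cov(ell, C, nbar, f_sky, i, j, k, l):
    """Gaussian covariance of C_ell^{(i,j)} and C_ell^{(k,l)}, Eqs. (20)-(22)."""
    def Cbar(a, b):
        # Eq. (22): auto-spectra pick up a shot-noise term 1/nbar
        return C[a, b] + (1.0 / nbar[a] if a == b else 0.0)
    A = 1.0 / ((2 * ell + 1) * f_sky)     # Eq. (21), evaluated at ell = ell'
    return A * (Cbar(i, k) * Cbar(j, l) + Cbar(i, l) * Cbar(j, k))
```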
For weak gravitational lensing the weighting function takes the form $$W_{\rm lens}(\ell,k)=\int q(z)\,j_{\ell}(kr(z))\,D(z)\,dz\,,$$ (25) where $q(z)$ denotes the lensing weight $$q(z)=\frac{3H_{0}^{2}\Omega_{m}}{2c^{2}}\,\frac{r(z)}{a(z)}\int_{r(z)}^{r_{\rm hor}}dr'\,n(r(z'))\left(\frac{r(z')-r(z)}{r(z')}\right)\,,$$ (26) where $a(z)$ is the scale factor evaluated at redshift $z$ and $r_{\rm hor}$ is the comoving horizon. The cosmological parameter values and forecast assumptions made in this work are listed in Tab. 1 and the assumed redshift distributions are shown in Fig. 1. The assumptions on number density and sky coverage are selected in analogy to Kirk et al. (2015) and would correspond to a DES Y5-like survey. However, the redshift distribution covers a smaller redshift range, as discussed in the previous section. This implies that our signal-to-noise will likely be overestimated, which will make systematic biases due to a misspecified redshift-dependent galaxy-dark matter model more significant compared with the statistical error budget. We reiterate that we do not intend to forecast the performance of a particular survey, but rather to test the robustness of our redshift inference to a redshift-dependent galaxy-dark matter bias mis-specification. We use the cosmosis software (Zuntz et al., 2015; https://bitbucket.org/joezuntz/cosmosis/wiki/Home) to estimate the non-linear matter power spectrum for these parameters and the ‘LimberJack’ code (https://github.com/damonge/LimberJack) to calculate the galaxy angular power spectra, where we use the Limber approximation for $\ell>60$ and the exact calculation for smaller modes (see e.g. Thomas et al., 2011, §4). We perform the summation in Eq. (18) up to $\ell=5000$ to ensure convergence.
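The clustering weighting function of Eq. (24) is a one-dimensional projection integral; on tabulated grids it can be sketched as follows (the function and argument names, and the trapezoid quadrature, are our choices):

```python
import numpy as np
from scipy.special import spherical_jn

def W_clus(ell, k, z, n_z, b_g, r_of_z, D_of_z):
    """Clustering weight of Eq. (24); n_z, b_g, r_of_z, D_of_z are arrays
    tabulated on the redshift grid z."""
    f = b_g * n_z * spherical_jn(ell, k * r_of_z) * D_of_z
    # trapezoid rule over the redshift grid
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(z))
```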
We note that the covariance matrix of the angular correlation power spectra exhibits correlations between the auto-correlation of the photometric bin and the cross-correlations with the spectroscopic tophat bins. These correlations are largest for low angular modes and decrease for higher modes. As these cross-correlations are typically ignored in similar cross-correlation analyses, where the auto-correlation and the cross-correlations are fit individually (e.g. Newman, 2008b; Matthews & Newman, 2010), we will only consider the diagonal component of the covariance matrix for simplicity. However we do stress that these cross terms have been used in previous applications of the cross-correlation technique (e.g. Matthews & Newman, 2012) and should be included in a practical application to data. The left panel of Fig. 3 shows the correlation matrix of the angular correlation function for the example of the auto-correlation of the photometric sample. Similarly, the right panel of Fig. 3 shows the correlation matrix of the angular correlation function for the cross-correlation between the photometric sample and the last spectroscopic tophat bin. We see in both panels that the correlation of sub-degree angular scales is very high and decreases for correlations on larger angular scales. We note however that this result strongly depends on the impact of shot noise on the covariance, and therefore on the considered galaxy sample. Due to the high correlation on sub-degree scales, we consider a single angular bin of $\theta\in[0.1,1.0]\,{\rm arcmin}$, which is in the regime where the Limber approximation (see §2.1) is quite accurate. These scales are also comparable to the angular bins within $0.06$ to $6\,{\rm arcmin}$ that have been used in Matthews & Newman (2010). We note that this excludes larger scales that could be used to constrain cosmological parameters in a subsequent analysis.
However, the significant covariance between these larger (degree) scales and the sub-degree regime implies that we need to account for this correlation. We will further comment on this point in §2.5. 2.4 Generating the mock data vector As discussed in the previous section, we find a large correlation between angular correlation function bins on small angular scales. Similar to Ménard et al. (2013), we therefore generate our mock data for the photometric galaxy angular auto- and cross-correlations with the spectroscopic tophat bins using a single angular bin within $[0.1,1.0]\,{\rm arcmin}$ following Eq. (11). This correlation function is generated assuming a redshift-dependent scale length and power spectrum exponent $\gamma(r)$ model that we discretize within the comoving distance bins shown in Fig. 1, as described in §2.1. We note that the binning of the redshift distribution and the discretization of $r_{0}(r)$ and $\gamma(r)$ are performed in comoving distance. We reiterate that the creation of the covariance matrix uses a more accurate modelling of the correlation functions. Since the software we use to obtain galaxy angular correlation power spectra in §2.3 assumes a redshift distribution, we transform the histograms defined as comoving distance distributions $p(r)$ into redshift space $p(z)$ according to $p(z)=p(r)\,|dr/dz|$. Since the histogram assumes a uniform distribution in comoving distance, this transformation leads to tilted histogram bins and tophat functions. Since this is an artifact of the histogram discretization, we re-bin at the histogram midpoints for plots like Fig. 1. However, to ensure consistency with the modelling of the angular correlation function data vector, we transform all comoving distance tophat functions and histogram bins into redshift space without re-binning before the angular correlation power spectra are obtained.
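The density transformation $p(z)=p(r)\,|dr/dz|$ only requires the Jacobian $dr/dz=c/H(z)$; a minimal flat-$\Lambda$CDM sketch with illustrative parameter values (not the values of Tab. 1):

```python
import numpy as np

C_KMS, H0, OMEGA_M = 299792.458, 70.0, 0.3   # illustrative flat-LCDM values

def p_z_from_p_r(z, p_r):
    """p(z) = p(r(z)) |dr/dz|, with dr/dz = c / H(z) in flat LCDM;
    p_r is the comoving distance density evaluated at r(z)."""
    H = H0 * np.sqrt(OMEGA_M * (1.0 + z)**3 + (1.0 - OMEGA_M))
    return p_r * C_KMS / H
```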
We reiterate that the scale length model varies strongly as a function of stellar mass, while the model for $\gamma(z)$ is quite similar for different galaxy populations within the redshift range ($z<1.5$) considered in this work. Thus, while we will use different scale length models in our mock data vector, we will choose the same model for $\gamma(r)$, which corresponds to Model 5 in Fig. 1. We note that the long-tailed shape of the photometric redshift distribution shown in the left panel of Fig. 1 ensures that the sampling tested in the following sections has to prove its robustness against significant variations in signal-to-noise in the parameters that describe the photo-z distribution. As this work does not consider inhomogeneous spectroscopic samples, i.e. spectroscopic samples that consist of very different galaxy populations across redshift, we do not extend our redshift range beyond $z>1.5$, where a realistic spectroscopic sample will consist of high-redshift quasar samples with quite different clustering properties compared with the rest of the sample. The modelling of the data vector also uses Eq. (11); however, the scale length offset $\Delta_{0}$ (see §2.2) is treated as a random variable. In §4.2 we study the effect of biased scale length models on the accuracy of the reconstructed photo-z distribution. 2.5 Summary of modelling assumptions In the following we summarize the modelling assumptions made when generating the mock data vector, modelling this data vector and modelling the covariance matrix. These points are also listed in Tab. 2 for easy reference. We list each assumption in the first column and indicate in the subsequent columns whether it affects the generation of the mock data vector, the modelling of the data vector, or the modelling of the covariance matrix.
Limber Approximation and Power Law correlation function We use the Limber approximation to model the angular correlation functions within a single angular bin of $\theta\in[0.1,1.0]\,{\rm arcmin}$. The Limber approximation in combination with a power-law correlation function is the basis of many cross-correlation methods (e.g. Newman, 2008b; Matthews & Newman, 2010; Ménard et al., 2013). We use this assumption in both the generation and the modelling of the data vector. This analytical model allows us to quickly evaluate the correlation functions, which is very beneficial for efficient sampling. The Limber approximation can be expected to be accurate at the percent level within the considered scales (see e.g. Simon, 2007). We expect that deviations from the power-law correlation function can introduce a larger source of modelling bias, especially on smaller scales and at high redshift $z>1.5$, where scale-dependent galaxy-dark matter bias becomes important. For the modelling of the covariance matrix, we do not assume a power-law correlation function and do not impose the Limber approximation. Since the advantages of the Limber approximation, i.e. convenient modelling and computational speed, are not needed for the generation of the covariance matrix, we use the more exact treatment here. Galaxy-Dark Matter bias While we investigate the effect of a redshift-dependent galaxy-dark matter bias, we do not address the issue of a possible scale dependence. We neglect the stellar mass dependence of the correlation exponent $\gamma$ and use the same model for all galaxy samples. Furthermore we assume linear biasing throughout the paper. These assumptions are made for the creation of the mock data vector and its modelling. Since we use the full modelling of the power spectrum for the modelling of the covariance matrix, we need to convert the scale length and power spectrum exponent into a redshift-dependent galaxy-dark matter bias model according to Eq. (13).
We use the aforementioned ‘global’ correlation exponent model, as well as the assumption of linear biasing. Covariance modelling The modelling of our covariance matrix includes both the shot noise contribution and cosmic variance. However, for simplicity we neglect correlations between angular modes and cross terms between the auto- and cross-correlations of the data vector. As shown in Fig. 3, correlations between auto- and cross-correlations become important at small angular modes, i.e. larger scales. A more complete modelling should therefore include these terms. Cosmology dependence We must assume a fiducial cosmological model to convert from line-of-sight comoving distances to redshift. The dependence of the results on this assumption is expected to be mild (see e.g. Newman, 2008b). However, in a more complete treatment we will have to marginalize over cosmological parameters in the cross-correlation analysis. Furthermore, the covariance matrix can, depending on the shot noise of the considered sample, show significant correlation between sub-degree and degree scales. This poses a problem if small scales are used to obtain cross-correlation redshift posteriors that are then used at larger scales in a cosmological analysis. In that case we need to consider a conditional likelihood of the large-scale clustering measurement, given the small-scale data vector. This is left for future work. Galaxy Sample Selection In this work we use a very simplistic selection of galaxy samples based on their stellar mass. Furthermore, we assumed that we will be able to control the error contribution from uncertainties in the galaxy-dark matter bias of the spectroscopic sample to sufficient accuracy to not significantly widen our redshift posteriors. In future applications to data, we will have to include the spectroscopic measurement in the data likelihood and incorporate the complex galaxy selection function of spectroscopic surveys into the modelling.
Especially at higher redshift, we must account for inhomogeneous galaxy populations in the spectroscopic reference sample and a more complex galaxy-dark matter bias model than presented in this work. 3 Statistical Modelling In this section we will introduce our statistical model and discuss our sampling methodology. We use a graphical notation in the form of a directed graphical model shown in Fig. 4. This diagram summarizes the dependency between the different random variables in a concise manner. 3.1 Directed Graphical Models A directed graph, such as Fig. 4, illustrates the dependency between different random variables schematically as a set of boxes or circles that are connected with arrows. Boxes denote variables with fixed values, circles represent random variables and shading represents a known random variable, e.g. a data vector. Arrows represent dependencies between these variables. For example, if two random variables $a$ and $b$ are connected by an arrow $a\rightarrow b$, they are not independent, i.e. $p(a,b)\neq p(a)\,p(b)$, and we need to define the conditional probability $p(b|a)$ of the random variable $b$ given the value of $a$. The likelihood is shown in a box that represents the dimension of the random variable, in our case the data vector of $N_{\rm bins}+1$ angular correlation functions. A filled circle within such a box thus represents the measured data, i.e. the angular correlation function measurement $\widehat{w}$, where the dimension is specified in the larger box. The modelled data vector is again a random variable, as it depends on a set of parameters that are themselves random variables. Accordingly the model for the measured data, i.e. $w$, is shown as an open circle, where the model depends on a set of model parameters. The boxed nodes shown in Fig. 4 therefore represent the likelihood term $\mathcal{L}=p(\widehat{w}|w)$, i.e. the probability of the data given the model. 
In the following we will describe the different components of the particular model shown in Fig. 4. 3.2 Logistic Gaussian processes for Redshift Inference We model the redshift distribution of the photometric sample as a histogram in comoving distance shown in Fig. 1, where the histogram bin midpoints are placed at the position of the $N_{\rm bins}$ spectroscopic tophat bins that are shown in red. We note that the bin sizes are selected such that they span equal-sized bins in redshift of size $\Delta z=0.1$. We discretize the photometric redshift distribution, the distance dependent scale factor $r_{0}(r)$ and the exponent $\gamma(r)$ on this grid to model the angular correlation functions as described in § 2.1. The data vector $\widehat{w}$ therefore consists of the auto-correlation of the photometric redshift bin and the cross-correlations between the photometric redshift bins and the spectroscopic tophat distributions. In this paper we will treat each bin height of the photometric sample, and the first order term in the Chebychev expansion (‘scale length offset $\Delta_{0}$’), as random variables. We fix the scale length model of the spectroscopic sample to its fiducial value and marginalize over the scale length offset $\Delta_{0}$ of the photometric scale length model. We note that the data covariance of the cross-correlation measurements implicitly contains our uncertainty in this parameter via the spectroscopic auto-correlation terms in Eq. (20). The scale length models of the spectroscopic and photometric samples are strongly degenerate via the linear biasing assumption Eq. (12). For the considered data vector it is therefore practical to fix these spectroscopic terms, that will likely be constrained to good precision by future spectroscopic surveys like DESI. A more complete treatment would explicitly break this degeneracy by including the measurement of the spectroscopic correlation functions into the data vector. We leave this for future work. 
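The scale length model and its constant offset $\Delta_{0}$ (the first term of the Chebychev expansion) can be sketched as follows; the coefficient values and the comoving-distance range in the test are illustrative assumptions, not the fiducial models of this work:

```python
import numpy as np
from numpy.polynomial import chebyshev

def scale_length(r, coeffs, delta0, r_range=(0.0, 4000.0)):
    """Evaluate r0(r) [Mpc] as a Chebychev series in comoving distance r,
    mapped to the Chebychev domain [-1, 1], plus the constant offset delta0
    that is marginalized over in the sampling."""
    lo, hi = r_range
    x = 2.0 * (np.asarray(r, dtype=float) - lo) / (hi - lo) - 1.0
    return chebyshev.chebval(x, coeffs) + delta0
```

Fixing the higher-order coefficients and sampling only `delta0` reproduces the degeneracy discussed above: a constant shift in $r_{0}(r)$ trades off against the histogram heights through the linear biasing assumption.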
We assume a Gaussian likelihood for the data vector $\widehat{w}$ $$p(\widehat{w}|w_{\rm model},\mathcal{C})=\prod_{i=1}^{N_{\rm bins}+1}\mathcal{N}(\widehat{w}_{i}|w_{\rm model,i},\sigma_{\rm w,i})\,,$$ (27) where the product runs over the auto-correlation and the $N_{\rm bins}$ cross-correlations with the spectroscopic tophat bins. The modelling of the angular correlation function is then given by Eq. (11). We illustrate the likelihood in Fig. 4 as the boxed node at the lower right corner, where the data/model of the angular correlation function is represented as the filled/empty nodes. The box represents the joint likelihood of the $N_{\rm bins}+1$ dimensional data vector. We set a logistic Gaussian process prior on the $N_{\rm bins}$ photometric redshift histogram heights denoted as the empty circle ‘nz’ in Fig. 4. Accordingly, the logarithm of the $N_{\rm bins}$ dimensional vector of histogram heights $\log{\left(\boldsymbol{nz}\right)}$ is assumed to be drawn from a multivariate normal distribution with mean vector $\boldsymbol{\mu}$ and $N_{\rm bins}\times N_{\rm bins}$ dimensional covariance matrix $\boldsymbol{\Sigma}$ $$\log{\left(\boldsymbol{nz}\right)}\sim\mathcal{N}\left(\boldsymbol{\mu},\boldsymbol{\Sigma}\right)\,.$$ (28) If not mentioned otherwise, we fix the mean ‘$\boldsymbol{\mu}$’ of the logistic Gaussian process to zero and marginalize over the uncertainty in the covariance matrix. For this we choose the following kernel $$\boldsymbol{\Sigma}_{i,j}(\sigma,l)=\sigma^{2}\exp{\left(-\frac{|r_{i}-r_{j}|}{l^{2}}\right)}\,,$$ (29) which is a special case of the Matérn kernel (Rasmussen & Williams, 2006), a common choice for Gaussian processes. We also tried the squared exponential kernel; however, we found that the kernel of Eq. (29) produced numerically more stable covariance matrices. The two parameters $\sigma$ and $l$ govern the magnitude of the diagonal components, as well as the correlation, or smoothness, of the histogram. 
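A draw from the logistic Gaussian process prior of Eqs. (28)–(29) can be sketched as follows; the bin midpoints and the values of $\sigma$ and $l$ are illustrative, not those used in the analysis:

```python
import numpy as np

def exp_kernel(r, sigma, l):
    """Covariance of Eq. (29): Sigma_ij = sigma^2 * exp(-|r_i - r_j| / l^2)."""
    d = np.abs(r[:, None] - r[None, :])
    return sigma**2 * np.exp(-d / l**2)

rng = np.random.default_rng(0)
r = np.linspace(200.0, 3800.0, 19)                # histogram bin midpoints [Mpc]
Sigma = exp_kernel(r, sigma=1.0, l=30.0)
log_nz = rng.multivariate_normal(np.zeros(r.size), Sigma)
nz = np.exp(log_nz)                                # logistic map: heights stay positive
nz /= np.trapz(nz, r)                              # normalize to unit area
```

Exponentiating guarantees positive histogram heights for any Gaussian draw, which is the point of the logistic construction.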
In our model we therefore add a hierarchy that marginalizes over these parameters using broad uniform priors $l\sim\mathcal{U}(0,3000)$ and $\sigma\sim\mathcal{U}(0,2000)$. The units of these parameters are $[l]=\sqrt{{\rm Mpc}}$ and $[\sigma]={\rm Mpc}$ respectively. These random variables, denoted ‘l’ and ‘$\sigma$’, are shown in Fig. 4 in the top hierarchy. As detailed in § 2.1, we fix the redshift dependence of the power law exponent ‘$\gamma$’ and choose an unrestricted flat prior on the first coefficient of the Chebychev expansion, the ‘scale length offset $\Delta_{0}$’. 3.3 Sampling Methodology We sample the model shown in Fig. 4 using a combination of Metropolis-Hastings sampling and Elliptical Slice sampling steps. The following sections will introduce these sampling schemes. We will then detail the concrete sampling implementation in § 3.3.3. 3.3.1 Metropolis-Hastings sampling Assuming a set of random variables $\boldsymbol{\theta}$, Metropolis-Hastings sampling (see e.g. Gelman et al., 2004) iteratively samples from the distribution of each variable $\theta_{i}$ conditional on all other variables $\theta_{-i}$, i.e. $p(\theta_{i}|\theta_{-i}^{t-1},\mathcal{D})$, where $\theta_{-i}^{t-1}=\left(\theta_{1}^{t},\dots,\theta_{i-1}^{t},\theta_{i+1}^{t-1},\dots,\theta_{d}^{t-1}\right)$. Here $\mathcal{D}$ denotes the data vector, $t$ counts the number of iterations and $i$ denotes the index of the variable that is currently updated. If this conditional distribution is not known, we draw samples for a new parameter $\theta^{\star}$ given a previous state of the parameter $\theta^{t-1}$ from a proposal distribution $J_{t}(\theta^{\star}|\theta^{t-1})$. 
These samples are then accepted with probability $$r=\frac{p(\theta_{i}^{\star}|\theta_{-i},\mathcal{D})/J_{t}(\theta_{i}^{\star}|\theta_{i}^{t-1})}{p(\theta_{i}^{t-1}|\theta_{-i},\mathcal{D})/J_{t}(\theta_{i}^{t-1}|\theta_{i}^{\star})}\,.$$ (30) We can connect the posterior probabilities $p(\theta_{i}|\theta_{-i},\mathcal{D})$ via Bayes rule with the likelihood $p(\mathcal{D}|\theta_{i},\theta_{-i})$ and the prior $p(\theta_{i})$ as $$p(\theta_{i}|\theta_{-i},\mathcal{D})\propto p(\mathcal{D}|\theta_{i},\theta_{-i})\,p(\theta_{i}|\theta_{-i})\,.$$ (31) We select a normal distribution as the proposal distribution $J_{t}(\theta_{i}^{\star}|\theta_{i}^{t-1})=\mathcal{N}(\theta_{i}^{\star}|\mu=\theta_{i}^{t-1},\sigma)$, where we center the Gaussian at the previous value $\theta_{i}^{t-1}$ and tune the standard deviation $\sigma$ such that we obtain acceptance rates between 20%-30% for the respective parameters. Since this distribution is symmetric, the proposal terms cancel in Eq. (30). We refer the interested reader to Gelman et al. (2004) for a more detailed description of the Metropolis-Hastings sampling methodology. 3.3.2 Elliptical Slice Sampling Elliptical Slice sampling is a sampling technique for target distributions of the form $$p^{\star}(\theta)=\frac{1}{Z}p(\mathcal{D}|\theta)\,\mathcal{N}(\theta|0,\Sigma)\,,$$ (32) where $\theta$ denotes the free parameters of the model, $\mathcal{N}(\theta|0,\Sigma)$ a Gaussian prior and $p(\mathcal{D}|\theta)$ the likelihood. Here $Z$ denotes a normalization factor that corresponds to the marginal likelihood, or model evidence, in Bayes rule. The method proposes new values $\theta^{\star}$ on an ellipse in parameter space as $$\theta^{\star}=\chi\sin{\left(\tau\right)}+\theta^{t-1}\cos{\left(\tau\right)}\,,$$ (33) where $\chi$ denotes an auxiliary variable drawn at random, e.g. from the prior. The corresponding proposals are then accepted by the Metropolis-Hastings acceptance rule (see Eq. 
30) introduced in the previous section. The angle parameter $\tau$ selects the step size of this proposal, similar to the standard deviation of the Gaussian proposal distribution introduced in the previous section. We refer the interested reader to Murray et al. (2009) for a more detailed description of the method. 3.3.3 Sampling the Logistic Gaussian process Starting from the top end of Fig. 4, we use three main Metropolis-Hastings steps to sample the hyperparameters of the logistic Gaussian process covariance, the (log) histogram heights of the photometric redshift distribution and the scale length offset $\Delta_{0}$ that describes the scale length model. The sampling of the hyperparameters $l$ and $\sigma$ is performed in two separate Metropolis-Hastings steps. Then, we sample the histogram heights using Elliptical Slice sampling. In the third Metropolis-Hastings step we sample the scale length offset $\Delta_{0}$. In each of these steps we hold the parameters that are not currently updated fixed, as described in § 3.3.1. We note that the parameters that describe the histogram heights and the scale length model can be quite correlated, and Elliptical Slice sampling allows us to take larger steps along the degeneracy direction, which leads to better sampling performance for the logistic Gaussian process model. We further note that this methodology does not require gradients, which will not be available once the likelihood models the correlation functions in terms of cosmological parameters. In the following we will discuss how we construct a prior on the redshift distribution from a Bayesian histogram of sampled redshift values. In the experiments performed in the next section we found that Metropolis-Hastings sampling performed better in combination with this redshift prior. The reason is that this prior weakens the correlation between the aforementioned parameters, in which case Metropolis-Hastings sampling becomes more efficient. 
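A minimal random-walk Metropolis-Hastings step of the kind alternated in § 3.3.1–3.3.3 can be sketched as follows; the one-dimensional Gaussian target is purely illustrative:

```python
import numpy as np

def metropolis_hastings(log_post, x0, sigma_prop, n_steps, rng):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal,
    so the proposal density cancels in the acceptance ratio of Eq. (30)."""
    x, lp = x0, log_post(x0)
    chain = np.empty(n_steps)
    for t in range(n_steps):
        x_star = x + sigma_prop * rng.standard_normal()
        lp_star = log_post(x_star)
        # Accept with probability min(1, r) via a log-uniform comparison.
        if np.log(rng.uniform()) < lp_star - lp:
            x, lp = x_star, lp_star
        chain[t] = x
    return chain

rng = np.random.default_rng(1)
# Illustrative target: a Gaussian posterior with mean 2 and unit variance.
chain = metropolis_hastings(lambda x: -0.5 * (x - 2.0) ** 2, 0.0, 1.0, 20000, rng)
```

The per-parameter updates described above apply this step to one variable at a time while the others are held fixed.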
We therefore used Metropolis-Hastings sampling steps for the individual log histogram heights instead of Elliptical Slice sampling. We note that this might not be an optimal choice for different priors and data likelihoods. 3.3.4 Prior on the Redshift distribution To set a prior on the redshift distribution, we require an independent set of data that constrains the histogram bins represented as the logistic Gaussian process. This information will typically come from photometric redshift estimation based on template fitting or machine learning, which constrains the redshifts of individual galaxies. A prior on the population of redshift values can therefore be represented as a Bayesian histogram, and samples can be drawn from a Dirichlet distribution given the counts of galaxy redshifts in the respective histogram bins. The distribution over the set of histogram heights $\widehat{nz}$ is therefore given as $$\widehat{nz}\sim{\rm Dir}(\Phi+\widehat{c})\,,$$ (34) where Dir denotes the Dirichlet distribution and $\Phi$ its parameter, which is set to a vector of ones to represent a flat prior. The vector $\widehat{c}$ denotes the counts of galaxy redshifts in the respective histogram bins. Here $\widehat{c}$ is a vector of length $M$, where $M$ denotes the number of histogram bins. Accordingly, a sample $\widehat{nz}$ from the Dirichlet distribution is an $M$ dimensional vector of histogram heights. If the sample size is large, i.e. if each element of $\widehat{c}$ is high, the scatter of new samples around the mean histogram heights is small. However if there are very few galaxies in the sample, the Dirichlet distribution will sample around this mean value of histogram heights more broadly. 
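Sampling the Bayesian histogram of Eq. (34) and building the Gaussian approximation to the log heights takes only a few lines with `numpy`; the redshift counts below are illustrative, not taken from the mock catalogue:

```python
import numpy as np

rng = np.random.default_rng(2)
counts = np.array([40, 120, 300, 250, 90, 20])     # illustrative counts c-hat per bin
phi = np.ones(counts.size)                          # flat Dirichlet prior, Eq. (34)
samples = rng.dirichlet(phi + counts, size=100_000)

# Gaussian approximation to the log histogram heights, usable as a prior
# for the logistic Gaussian process.
log_samples = np.log(samples)
mu_prior = log_samples.mean(axis=0)
cov_prior = np.cov(log_samples, rowvar=False)
```

Each Dirichlet draw is a normalized histogram, and the spread of the draws shrinks as the counts grow, exactly the behaviour described above.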
Since the treatment of photometric redshift error in the context of machine learning or template based photometric redshift techniques is beyond the scope of this paper, we will use a lower bound on the prior width by assuming true redshift values for all galaxies in the photometric dataset. For our case, we find that the logarithm of these sampled redshift histogram heights can be approximated as a multivariate normal distribution to sufficient accuracy for our purposes. For this we use the sample mean and covariance estimated on a large number ($>10^{8}$) of (log) Dirichlet samples for our approximation. This multivariate normal distribution can then be readily used as a prior distribution for the logistic Gaussian process model. 3.4 Model Evaluation In many practical applications of inference there will be a level of uncertainty in the underlying modelling and, as a result, a level of ambiguity between reasonable model choices. It is therefore of paramount importance to compare these models efficiently and combine them if various models provide equally good descriptions of the data. We evaluate and compare the models using goodness-of-fit statistics and the expected biases in the lensing convergence power spectrum. The latter is especially useful as it allows judgment about the possible impact of residual photometric redshift errors on the weak lensing measurement, which is the primary cosmological probe to test theories of dark energy in the context of large area photometric surveys. 3.4.1 Information Criteria In classical statistics the maximum of the log-likelihood is often penalized by the number of free parameters in the model, to measure the ‘goodness of fit’ and avoid the risk of specifying too many parameters (‘overfitting’). 
This approach, represented by the Akaike information criterion (AIC; Akaike, 1973), has the disadvantage in a Bayesian analysis that it does not change with different prior choices, which clearly affect model complexity and the posterior. In this work we will therefore use the Deviance Information Criterion (DIC; Spiegelhalter et al., 2002) that provides an extension of the AIC towards the Bayesian paradigm. We use the definition of the DIC based on the predictive density (Gelman et al., 2013); the original definition based on the deviance differs by a factor of $-2$, and both definitions are detailed in Gelman et al. (2013). The DIC is then given as $${\rm DIC}=\log{\left(p\left(\mathcal{D}|\widehat{\theta}_{\rm Bayes}\right)\right)}-p_{\rm DIC}\,,$$ (35) where $\mathcal{D}$ denotes the data vector and $p\left(\mathcal{D}|\widehat{\theta}_{\rm Bayes}\right)$ denotes the likelihood evaluated at the posterior expectation $$\widehat{\theta}_{\rm Bayes}=E\left[\theta|\mathcal{D}\right]\,.$$ (36) Model complexity is then penalized as $$p_{\rm DIC}=2\,{\rm var}_{\rm Post}\left[\log{p\left(\mathcal{D}|\theta\right)}\right]\,,$$ (37) where ${\rm var}_{\rm Post}\left[\log{p\left(\mathcal{D}|\theta\right)}\right]$ denotes the posterior variance of the log-likelihood. More complex models will have a larger posterior variance due to their additional degrees of freedom in parameter space. As both terms in the DIC are evaluated with respect to the posterior, its value will depend on the choice of prior. 
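Given posterior samples of the log-likelihood, the predictive-density form of the DIC in Eqs. (35)–(37) reduces to a few lines; the toy numbers in the test are illustrative:

```python
import numpy as np

def dic(log_lik_at_post_mean, log_lik_samples):
    """DIC of Eq. (35): log p(D | theta_Bayes) minus the complexity penalty
    p_DIC = 2 * Var_post[log p(D | theta)] of Eq. (37). In this predictive-
    density convention, higher values indicate a preferred model."""
    p_dic = 2.0 * np.var(np.asarray(log_lik_samples))
    return log_lik_at_post_mean - p_dic
```

Models with more effective degrees of freedom scatter their log-likelihood more across the posterior, so `p_dic` grows and the DIC drops.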
3.4.2 Bias in lensing convergence power spectrum To be able to better evaluate the significance of photometric redshift biases, we evaluate the quality of the reconstructed photometric redshift distribution in terms of the expected relative bias in the lensing convergence power spectrum $$\Delta_{\rm rel}=\frac{\widehat{C}_{\ell}^{\rm fid.}-\widehat{C}_{\ell}^{\rm bias}}{\widehat{C}_{\ell}^{\rm fid.}}\,,$$ (38) where $\widehat{C}$ denotes the binned lensing power spectrum $$\widehat{C}=\frac{1}{N_{\ell}}\sum_{\ell}C_{\ell}\,,$$ (39) and $N_{\ell}$ denotes the number of integer angular modes. Here, $\widehat{C}_{\ell}^{\rm fid.}$ denotes the lensing convergence power spectrum obtained using the unbiased, fiducial, redshift distribution, while $\widehat{C}_{\ell}^{\rm bias}$ denotes the power spectrum obtained using the photometric redshift distribution inferred under the (potentially) biased galaxy-dark matter bias model. We choose to bin the angular power spectrum over $\ell\in[10,\ell_{\rm max}]$, where we select $\ell_{\rm max}=3000$ in analogy to Kirk et al. (2015). We compare the resulting systematic biases with the expected statistical measurement error from a weak gravitational lensing measurement with the specifications given in Tab. 1 and a shape noise of $\sigma_{\epsilon}=0.23$, comparable with Kirk et al. (2015). The error in the binned measurement is then $${\sigma}(\widehat{C}_{\ell})=\frac{1}{N_{\ell}}\sqrt{\sum_{\ell}\frac{2}{(2\ell+1)f_{\rm sky}}\left(C_{\ell}+\frac{\sigma_{\epsilon}^{2}}{2\overline{n}_{g}}\right)^{2}}\,,$$ (40) which implements Eq. (20) with a different, weak gravitational lensing specific, shot noise term. Eq. (40) also includes the cosmic variance contribution to the covariance matrix, which is inversely proportional to the fractional sky coverage $f_{\rm sky}$. We neglect the covariance between the different modes $\ell$, in analogy to other forecasts like Kirk et al. (2015). 
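The binned error of Eq. (40) can be sketched as follows; the function name is ours, and the source density $\overline{n}_{g}$ must be supplied per steradian:

```python
import numpy as np

def binned_cl_error(ell, c_ell, f_sky, sigma_eps, n_gal_sr):
    """Gaussian error on the binned convergence power spectrum (Eq. 40):
    cosmic variance plus lensing shape noise, neglecting covariance
    between modes. n_gal_sr is the source density per steradian."""
    per_mode = 2.0 / ((2.0 * ell + 1.0) * f_sky) \
        * (c_ell + sigma_eps**2 / (2.0 * n_gal_sr)) ** 2
    return np.sqrt(per_mode.sum()) / ell.size
```

The shape-noise term dominates at high $\ell$, while the $1/f_{\rm sky}$ cosmic-variance scaling controls the low-$\ell$ budget.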
The $1\,\sigma$ error in $\Delta_{\rm rel}$ is then given as $$\sigma_{\Delta_{\rm rel}}=\frac{{\sigma}(\widehat{C}_{\ell})}{\widehat{C}_{\ell}^{\rm fid.}}\,.$$ (41) We note that this simple metric does not substitute an accurate forecast that evaluates the impact of photometric redshift bias on cosmological parameter inference. This would require a tomographic analysis that is representative of modern surveys like DES or LSST, which is beyond the scope of this work. However, if the mean systematic bias in the reconstructed lensing convergence power spectrum is significantly below the statistical error budget given in Eq. (40), we can assume that its impact on cosmological parameters will be small. It therefore allows us to approximately judge the practical implications and relevance of the photometric redshift error on cosmological parameter inference, even without a more precise forecasting exercise. 4 Analysis and Results In the following section we investigate the performance of our logistic Gaussian process model in recovering the photometric redshift distribution using the cross-correlation likelihood (Eq. 27) and test how different scale length models affect the inference. Tab. 3 summarizes the nomenclature used in our analysis. The first column quotes the name tag of the particular setup, the second column the scale length model assumed for the spectroscopic sample, the third our ansatz for the scale length model of the photometric sample and the fourth column lists the true scale length model of the photometric sample (which is unknown to us in practice). The final column indicates if a prior on the photometric redshift distribution is used. We refer to Fig. 1 for an overview of the different scale length models. We note that we adapt the covariance matrix of the likelihood to changes in the corresponding galaxy-dark matter bias according to Eq. (13). 
4.1 Fiducial Model In this first section we study a test case where our assumption that the scale length model of the photometric and spectroscopic samples only differs by a constant offset is valid to a good degree. This is the case for example in Model 4 and 5 shown in Fig. 1. As can be seen this implies that both samples have similar stellar mass ranges, which would require a spectroscopic survey that does not exhibit an extreme selection towards highly biased tracers. We therefore study this scenario assuming Model 4 as the scale length model of the photometric sample and Model 5 as the scale length model of the spectroscopic sample. As discussed in § 2.2, we use a constant offset to parametrize our uncertainty in the scale length, which corresponds to the first (constant) term in the Chebychev expansion. In the following, we will refer to this case as ‘S5A5P4’. To test if the small differences between Model 4 and 5 will introduce biases in our inference, we compare with the results in the absence of systematics. For this unbiased, fiducial test case, we fix the higher order terms of the Chebychev expansion to their true, unbiased, values from Model 4. As before, we will marginalize over an offset in the scale length and will refer to this case as ‘S5A4P4’. Note that this scenario is not the same as assuming scale length model 4 for both the spectroscopic and the photometric sample, because our covariance matrix will assume scale length model 5 for the spectroscopic sample. To simplify the notation, the following will refer to the offset in the scale length as ‘scale length offset $\Delta_{0}$’. The left panel of Fig. 5 shows the posterior of the difference between the true photometric redshift distribution and the redshift distribution inferred in these models. All distributions shown in this work have been normalized to unit area. We show the results for ‘S5A4P4’ in grey contours and results for ‘S5A5P4’ in red. 
The right panel plots the corresponding posterior distributions for the scale length offset $\Delta_{0}$, where the dashed blue line denotes its true value. We see that we can recover the redshift distribution to great accuracy for both models. For the unbiased scale length model we obtain, as expected, unbiased constraints on the photometric redshift distribution and scale length parameter, which we regard as a test of our sampling and methodology. The $[5,95]$ posterior percentile range of the inferred photometric redshift distribution assuming the slightly biased case S5A5P4 is largely consistent with the true result, however, as expected, inconsistent for the scale length offset $\Delta_{0}$. We can therefore conclude that a slight modelling bias in the scale length, or galaxy-dark matter bias model, has a small effect on the accuracy of the recovered redshift distribution. It has to be noted that we make quite optimistic assumptions for the photometric and spectroscopic surveys (see Tab. 1). For less optimistic assumptions, e.g. a smaller area of overlap between the spectroscopic and photometric survey, we can expect the statistical error to be larger. The statistical error then quickly becomes even more dominant compared with the small photometric redshift systematic shown in Fig. 5. We note that there is considerable degeneracy between the scale length offset and the redshift distribution, particularly for the low redshift bins where the signal-to-noise ratio in the clustering measurement is large. This is illustrated in Fig. 6 for the unbiased model S5A4P4, which shows the posterior distributions of the scale length offset $\Delta_{0}$ and the first two histogram bin heights of the photometric redshift distribution. We see that the scale length offset $\Delta_{0}$ and the photometric redshift distribution bin heights are indeed strongly correlated. For the bins at higher comoving distance, or redshift, this correlation decreases. 
In the next section we will discuss a more extreme case, where larger differences in the stellar mass of the photometric and spectroscopic galaxy population translate into larger differences in the scale length models. 4.2 Impact of biased scale length models We test the impact of biased scale length models on the redshift distribution inference in a more pessimistic setup: we make the ansatz that the scale length of the photometric/spectroscopic sample is given by scale length Model 4/1 and denote this setup as S1A1P4. As one can see from Fig. 1, a simple constant shift in the scale length model is no longer a sound approximation, as both models have a significantly different slope at large comoving distances or high redshift. This can be seen as an extreme case of scale length model mis-specification, and we will likely be able to calibrate the model to better accuracy either using simulations or by incorporating information from e.g. galaxy-galaxy lensing (e.g. Prat et al., 2018) in future surveys. We therefore include a more optimistic setup into this comparison. Here we make the ansatz of Models 2/5, which are more similar to the true photometric model (Model 4). This means that instead of fixing the higher order coefficients of the Chebychev expansion to the parameters of Model 1, we fix them to the parameters of Model 2/5. We will denote these setups, with an ansatz of Model 2/5, as ‘S1A2P4’ and ‘S1A5P4’ respectively. As in the previous section we will marginalize over the constant term in the Chebychev expansion, i.e. the ‘scale length offset $\Delta_{0}$’. Fig. 7 shows the resulting difference in the recovered redshift distributions, where the ‘pessimistic’ case S1A1P4 shows considerable biases that are reduced by S1A2P4 and, to better accuracy, by S1A5P4. 
This recovered bias is naturally similar to the fiducial case discussed in the previous section (making the ansatz of Model 5 for the photometric sample is very similar to the case studied in the previous section; however, as the true scale length model of the spectroscopic sample is Model 1, and not Model 5 as in the previous section, the data covariance matrix is slightly different). Fig. 8 compares the relative error in the lensing convergence power spectrum for the different model configurations with the statistical error budget to be expected for the lensing survey defined in Tab. 1. The corresponding numbers are listed in Tab. 4. We see that model S1A1P4 has the largest error in the convergence power spectrum and is clearly rejected by the DIC criterion. However even though S1A2P4 yields a systematic bias that is larger than the statistical error budget, it has a similar DIC as the much better-performing model S1A5P4. This is a result of the degeneracy between different scale length models and the redshift distribution of the photometric sample. To highlight this, we impose a prior on the redshift distribution bin heights as described in § 3.3.4. The corresponding results for models denoted as ‘Nz prior’ are shown in Fig. 8 and Fig. 7. We see that the inclusion of a prior improves S1A2P4 and S1A5P4 in terms of the relative error in the convergence power spectrum. Most importantly however, the difference in DIC is much larger compared with the case that does not incorporate prior information. The worse S1A2P4 model is now more clearly rejected by the DIC criterion. We note that imposing a prior on the scale length offset $\Delta_{0}$ was not sufficient in our experiments to produce this effect. This is sensible, as the redshift distribution in our setup contains many more degrees of freedom compared with the parametrization of the scale length model. This indicates that combining the clustering measurement with external redshift constraints from e.g. 
template fitting will be necessary to break the degeneracy between our uncertainty in the redshift-dependent scale factor, i.e. the redshift-dependent galaxy-dark matter bias, and the redshift distribution of the photometric sample. 5 Summary and Conclusions We have presented a hierarchical logistic Gaussian process model to estimate photometric redshift distributions from cross-correlations between the photometric sample and spatially overlapping spectroscopic samples. Using a simulated data vector and a covariance matrix that mimics a current photometric survey, we demonstrated that this model is able to accurately estimate photometric redshift distributions in a fully Bayesian manner. More specifically, we tested the robustness of our approach to biases in the redshift-dependent galaxy-dark matter bias model. For this we use several published scale length model measurements from the Illustris simulation in our likelihood simulation. We then investigated several scenarios that range from small systematic biases in our parametrization of the redshift-dependent scale length, to more extreme cases where the assumed scale length model does not capture the true underlying function. In our model, we match the scale length model in the spectroscopic sample to the scale length of the photometric sample. This is a good approximation, if the spectroscopic and photometric samples have a roughly similar galaxy population as measured by e.g. their stellar mass. A test case that would mimic this situation selects a spectroscopic and photometric sample in stellar mass bins of $8.5<\log{(M_{*}[h^{-2}M_{\odot}])}<9.0$ and $9.0<\log{(M_{*}[h^{-2}M_{\odot}])}<9.5$ respectively. For these galaxy samples, the scale length parametrization had a low systematic bias and the recovered redshift distribution posteriors were consistent with the truth within the error bars. 
To probe the robustness of our redshift inference we then tested a spectroscopic scale length model that corresponds to a galaxy sample in a much higher stellar mass bin of $10.5<\log{M_{*}[h^{-2}M_{\odot}]}<11.0$, where we expect our parametrization to under-perform. For this more pessimistic case we found indeed that the redshift posterior distribution is significantly biased. Forecasting the systematic biases in the lensing convergence power spectrum, we find that these biases exceed the statistical error budget in the measurement by a factor of 3, assuming a DES Y5-like lensing survey. In order to detect these biases we propose to evaluate a range of models in a Bayesian model comparison to find the ‘best fitting’ candidate. We tested several different scale length models and found that our model comparison statistic reliably detects models with a large systematic modelling bias. We finally test how the incorporation of additional information from e.g. template-based redshift inference affects the prediction performance and the results of the model comparison. We set a prior on our logistic Gaussian process using a Gaussian approximation to a Bayesian histogram posterior. This mimics a photometric redshift distribution obtained by a strongly idealized photometric redshift code. By sampling our model using this prior information we find that the quality of our redshift inferences is significantly improved. Furthermore we find that the sensitivity of our model comparison statistic to systematic biases in the scale length model is also significantly enhanced. We therefore conclude that the combination of SED-based redshift estimation and cross-correlation will strongly benefit the accuracy and robustness of the overall redshift inference. In a future publication we will extend this model to include a template fitting likelihood. 
To improve the sampling efficiency, specifically in the low signal-to-noise tails of the distribution, we will also include a variable selection that reduces the number of free parameters that need to be sampled. More importantly, this will enable us to marginalize over cosmological parameters in a joint analysis, for which computationally more expensive likelihoods must be evaluated. We will also apply our methodology to more realistic spectroscopic calibration samples with a more realistic galaxy type selection. Acknowledgements MMR is supported by DOE grant DESC0011114 and NSF grant IIS-1563887. RM is partially supported by NSF grant IIS-1563887. 
Conductance spectroscopy of a correlated superconductor in a magnetic field in the Pauli limit: Evidence for strong correlations

Jan Kaczmarczyk [email protected] and Mariusz Sadzikowski [email protected], Marian Smoluchowski Institute of Physics, Jagiellonian University, Reymonta 4, 30-059 Kraków, Poland; Jozef Spałek [email protected], Marian Smoluchowski Institute of Physics, Jagiellonian University, Reymonta 4, 30-059 Kraków, Poland, and Faculty of Physics and Applied Computer Science, AGH University of Science and Technology, Reymonta 19, 30-059 Kraków, Poland

(December 6, 2020)

Abstract We study the conductance spectroscopy of a two-dimensional junction between a normal metal and a strongly correlated superconductor in an applied magnetic field in the Pauli limit. Depending on the field strength, the superconductor is either in the Bardeen-Cooper-Schrieffer (BCS) or in the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) state of the Fulde-Ferrell (FF) type. The strong correlations are accounted for by means of the Gutzwiller method, which leads naturally to the emergence of spin-dependent masses (SDM) of quasiparticles when the system is spin-polarized. The case without strong correlations (with spin-independent masses, SIM) is analyzed for comparison. We consider both the $s$-wave and the $d$-wave symmetries of the superconducting gap and concentrate on the parallel orientation of the Cooper pair momentum $\mathbf{Q}$ with respect to the junction interface. The junction conductance is presented for selected barrier strengths (i.e., in the contact, intermediate, and tunneling limits). The conductance spectra in the cases with and without strong correlations differ essentially. Our analysis thus provides an experimentally accessible test for the presence of strong correlations in the superconducting state. Namely, correlations alter the distance between the conductance peaks (or related conductance features) for carriers with spin-up and spin-down. 
In the uncorrelated case, this distance is twice the Zeeman energy. In the correlated case, the corresponding distance is about 30-50$\%$ smaller, but other models may yield an even stronger difference, depending on the details of the system electronic structure. It turns out that the strong correlations manifest themselves most clearly in the case of the junction with the BCS, rather than the FFLO superconductor, which should make the experimental verification of the present results simpler. PACS numbers: 74.45.+c, 71.27.+a, 71.10.Ca, 74.50.+r

I Introduction

The search for evidence of strong electron correlations in condensed matter has concentrated in recent years on the superconducting state in unconventional materials and its coexistence with magnetism. One such example is the search for experimental evidence of the Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) superconducting state. The FFLO state was proposed theoretically in the 1960s. FF ; LO ; LO2 In this unconventional superconducting state the Fermi wave vector difference for the electrons with spin-up and -down, due to the presence of the Zeeman term, makes it favorable for the Cooper pair to acquire a nonzero total momentum $\mathbf{Q}=2\mathbf{q}$. Consequently, the phase of the superconducting gap parameter oscillates spatially with the wave vector $\mathbf{Q}$. By forming such a condensate of moving Cooper pairs, the superconducting state persists to magnetic fields remarkably higher than the Pauli $H_{c2}$ limit. The FFLO state has recently gained renewed interest (for a review see Ref. Matsuda-Rev ) because of its possible detection in the heavy fermion superconductor CeCoIn${}_{5}$, Kakuyanagi ; Kumagai2 ; Bianchi ; Kumagai although the nature of the high-field low-temperature phase observed in this system is still under an intensive debate after antiferromagnetism has been observed in the vicinity of this phase. 
Young ; Kenzelmann ; Koutroulakis ; Ikeda ; Hatakeyama The FFLO state has also been proposed for $\kappa$-(BEDT-TTF)${}_{2}$Cu(NCS)${}_{2}$, Singleton ; Bergk $\beta^{\prime\prime}$-(ET)${}_{2}$SF${}_{5}$CH${}_{2}$CF${}_{2}$SO${}_{3}$, Cho and other layered organic superconductors (see References in Ref. Bergk ). Also, the existence of the FFLO state has been indicated in other heavy-fermion systems: PuRhGa${}_{5}$, Javorsky Ce${}_{2}$PdIn${}_{8}$ Dong (see Ref. Pfleiderer , Sec. V.B.1 for a more detailed account), as well as in the pnictide superconductor LiFeAs. Cho2 The FFLO state has also been investigated in high density quark and nuclear matter, Casalbuoni as well as in optical lattices. Machida ; Kinnunen ; Koponen All the systems considered so far to be a host to the FFLO phase have a reduced dimensionality, which is crucial for the FFLO phase stability, as then the orbital effects are suppressed and the Pauli effect (Zeeman splitting) may become the dominant factor. Another obvious feature which suppresses the orbital effects is the heavy quasiparticle mass. These characteristics of possible FFLO hosts indicate that these systems are likely to have strong electron (fermion) correlations and thus also to possess specific features resulting from them. The role of strong correlations in the most likely candidate for the FFLO state, CeCoIn${}_{5}$, is essential not only because this system is a heavy fermion superconductor, with very narrow bands originating from $4f$ electrons hybridized with $5d$-$6s$ states. What is equally important, the spin-dependent effective masses (SDM) of quasiparticles have been directly observed in this system McCollam by means of the de Haas-van Alphen oscillations in a strong applied magnetic field. 
SDM are one of the hallmarks of strong correlations, as they appear naturally in theories incorporating correlations (Gutzwiller, JS_Gopalan slave-bosons, Korbel ; SBREVIEW dynamical mean field theory, Bauer fluctuation-exchange approximation Onari ) when the system is spin-polarized (SDM have also been observed in other heavy-fermion systems Sheikin ; Takashita ). One should note at the outset that by spin-dependent masses (or their enhancement) we understand a $\mathbf{k}$-independent feature of a very narrow band which is derived solely from the effect of correlation. The ordinary splitting into spin subbands in the Zeeman field is a separate effect. Because of the above reasons, it is important to study the effect of correlations on the FFLO phase. Such an analysis has already been performed in a few cases, JKJS ; JKJS2 ; Maska ; Acta ; Ying and it indicates, among others, that the interelectronic correlations play an important role in forming and stabilizing the FFLO phase. In the present paper we concentrate on providing a concrete, experimentally accessible characteristic of a superconducting state with strong correlations. Namely, we study the conductance of a normal metal - superconductor junction (NSJ) with the strongly correlated superconductor in either the Fulde-Ferrell (FF) type of the FFLO state, or the Bardeen-Cooper-Schrieffer (BCS) state (the latter is stable in lower fields). Conductance spectroscopy of such a junction is an experiment sensitive to both the phase and amplitude modulation of the superconducting order parameter, and therefore it is a candidate technique for providing direct evidence for the presence of the FFLO phase. In that situation, a crucial role is played by the Andreev reflection (AR) processes. 
Andreev ; Andreev2 In the simplest view of the Andreev reflection, an incident electron entering from the normal metal into the superconductor (SC) is converted at the NSJ interface into a hole moving in the opposite direction (to the incident particle) and a Cooper pair inside the SC. Such processes increase the conductance of the junction (in an ideal case by a factor of two), which is analyzed in the framework provided by Blonder, Tinkham, and Klapwijk. BTK The conductance characteristics of an NSJ with a superconductor in the FFLO state have already been investigated for both the cases of the FF (with $\Delta(\mathbf{r})=\Delta_{\mathbf{Q}}e^{i\mathbf{Q}\mathbf{r}}$) Cui ; JKC ; Partyka and the Larkin-Ovchinnikov ($\Delta(\mathbf{r})=\Delta_{\mathbf{Q}}\cos(\mathbf{Q}\mathbf{r})$) Tanaka2 types of FFLO states, as well as for the case of a superconductor with a supercurrent Zhang ; Lukic (i.e., a situation formally similar to ours). See also Refs. Tanaka1 ; Bruder ; Kashiwaya ; Argyropoulos for the case of an NSJ with the BCS state of the $d$-wave symmetry. None of the above papers have taken into account strong electron correlations. Here we consider both the $s$-wave and the $d$-wave strongly correlated superconductor in a magnetic field and in the Pauli limiting situation (i.e., we neglect orbital effects, as the Maki parameter Maki in the systems of interest is quite high Kumagai2 ). The strong correlations are taken into account by assuming dispersion relations with SDM of quasiparticles and with the correlation field, as given e.g. by the Gutzwiller approximation, JS_Gopalan or slave-boson theory. Korbel The case without strong correlations (with spin-independent masses, SIM) is analyzed for comparison. In low magnetic fields the superconductor is in the BCS state, and in higher magnetic fields a transition to the FFLO state takes place. 
We consider only the simpler FF type of the FFLO state, as we intend to single out the novel features of the situation with strong correlations in the simplest case (the analysis of the LO state is much more complex Tanaka2 ). Our study already leads to interesting, novel results. We set the direction of the Cooper pair momentum $\mathbf{Q}$ as either perpendicular or parallel to the junction interface, with more attention paid to the latter situation. The analysis is performed in a fully self-consistent manner. Namely, we select the Cooper pair momentum $\mathbf{Q}$ minimizing the free energy of the system and we determine the chemical potential $\mu$ in each phase separately, so that the particle number $n$ is kept constant. Such an adjustment of $\mu$ is required even for the BCS state in the narrow-band case. Also, such a careful examination of the superconductor properties is important, as non-self-consistent calculations may lead to important alterations of the conductance spectrum. JKC As we deal with heavy quasiparticles on the superconducting side of the NSJ, we should in principle take into account the Fermi-velocity-mismatch effects. Under those circumstances, the AR processes would be severely limited by a high effective barrier strength $Z$. On the other hand, AR is clearly observed in junctions with heavy-fermion superconductors Park ; Goll and theoretical efforts have been made to understand why this is the case. Deutscher ; Araujo ; Araujo2 Based on these studies, we disregard the Fermi-velocity mismatch by assuming equal chemical potentials and equal average masses of quasiparticles on both sides of the junction. Namely, we choose the mass on the normal side as $m_{av}$, and on the superconductor side we have $(m_{\uparrow}+m_{\downarrow})/2=m_{av}$, with $m_{av}=100\textrm{ m}_{0}$ (where $\textrm{m}_{0}$ is the electron mass in vacuum), which roughly corresponds to the heaviest band of CeCoIn${}_{5}$. 
McCollam This assumption is, in our view, a justifiable simplification, as we would like to single out the novel features in their clearest form. Note also that we consider a model situation with its parameters taken from the experiment for CeCoIn${}_{5}$. In brief, we study the conductance of an NSJ with a superconductor exhibiting strong electron correlations (SDM case). To single out the novel features of such a situation, we also study the uncorrelated case (SIM) and compare the two sets of results. The paper is organized as follows. In Sec. II we discuss the superconducting state of quasiparticles with SDM and SIM for a two-dimensional electron gas. In Sec. III we present the theory concerning the conductance of a normal metal - strongly-correlated superconductor junction. In Sec. IV we show conductance spectra for the cases with SDM and SIM. In Sec. V we discuss the relation of our results to experiments and suggest their possible experimental verification. Finally, in Sec. VI we provide a brief summary.

II Fulde-Ferrell superconducting state basic characteristics: model and method

As said above, here we consider a two-dimensional system of paired quasiparticles in the situations with SDM and SIM. The system of self-consistent equations describing such a superconducting state has already been presented in detail in Refs. JKJS ; JKJS2 . For the sake of completeness, we provide here a brief summary of our procedure. 
We start with the Hamiltonian $$\hat{\mathcal{H}}=\sum_{\mathbf{k}\sigma}\xi_{\mathbf{k}\sigma}a^{\dagger}_{\mathbf{k}\sigma}a_{\mathbf{k}\sigma}+\frac{1}{N}\sum_{\mathbf{k}\mathbf{k}^{\prime}\mathbf{q}}V_{\mathbf{k},\mathbf{k}^{\prime}}a^{\dagger}_{\mathbf{k}+\mathbf{q}\uparrow}a^{\dagger}_{-\mathbf{k}+\mathbf{q}\downarrow}a_{-\mathbf{k}^{\prime}+\mathbf{q}\downarrow}a_{\mathbf{k}^{\prime}+\mathbf{q}\uparrow}+\frac{N}{n}\overline{m}h_{cor},$$ (1) where $\mathbf{Q}=2\mathbf{q}$ is the wave vector of the Cooper pair center of mass, $n\equiv n_{\uparrow}+n_{\downarrow}$ is the band filling, $\overline{m}\equiv n_{\uparrow}-n_{\downarrow}$ is the spin-polarization of the system, and $N$ is the total number of particles. The dispersion relations for the cases with SDM and SIM are chosen, respectively, as $$\xi_{\mathbf{k}\sigma}=\frac{\hbar^{2}k^{2}}{2m_{\sigma}}-\sigma(h+h_{cor})-\mu,$$ (2) $$\xi_{\mathbf{k}\sigma}^{(SIM)}=\frac{\hbar^{2}k^{2}}{2m_{av}}-\sigma h-\mu,$$ (3) where $h\equiv g\mu_{B}H/2$, with $H$ being the applied magnetic field. The quantity $h_{cor}$ is the correlation field, which appears naturally both in the slave-boson theory (it is equivalent to $-\beta$ of Ref. Korbel ) and in the Gutzwiller approximation if this approximation is performed with care. SGA ; Wang ; Yang Justification of a Hamiltonian with both the pairing part and SDM can be found in Ref. Maska (Appendix A) and in Refs. JS_RSP1 ; JS_RSP2 ; JS_RSP3 . The spin-dependent quasiparticle mass is equal to $m_{\sigma}\equiv m_{B}/q_{\sigma}(n,\overline{m})$, where $m_{B}$ is the bare band mass and $q_{\sigma}(n,\overline{m})$ is the band-narrowing factor. 
Explicitly (in the Hubbard $U\to\infty$ limit) the quasiparticle masses are given by JS_Gopalan ; Korbel $$\frac{m_{\sigma}}{m_{B}}=\frac{1-n_{\sigma}}{1-n}=\frac{1-n/2}{1-n}-\sigma\,\frac{\overline{m}}{2(1-n)}\equiv\frac{1}{m_{B}}(m_{av}-\sigma\Delta m/2),$$ (4) with $\Delta m\equiv m_{\downarrow}-m_{\uparrow}$. Next, as in the BCS theory, we take the pairing potential in a separable form and assume it is nonzero in a small region around the Fermi surface (for details see Refs. JKJS , JKJS2 , and JKPHD ) $$V_{\mathbf{k},\mathbf{k}^{\prime}}=-V_{0}\eta_{\mathbf{k}}\eta_{\mathbf{k}^{\prime}},$$ (5) where $\eta_{\mathbf{k}}\equiv\cos{(ak_{x})}-\cos{(ak_{y})}$ for the $d$-wave case (with $a=4.62\textrm{ \AA}$ being the lattice constant for CeCoIn${}_{5}$ Petrovic ) and $\eta_{\mathbf{k}}\equiv 1$ for the $s$-wave case. Under such assumptions, the superconducting gap can be factorized as $$\Delta_{\mathbf{k},\mathbf{Q}}=\Delta_{\mathbf{Q}}\eta_{\mathbf{k}}.$$ (6) Following the standard mean-field approach to Hamiltonian (1), we obtain the generalized free-energy functional $\mathcal{F}$ and the system of self-consistent equations as follows JKJS ; JKJS2 $$\mathcal{F}=-k_{B}T\sum_{\mathbf{k}\sigma}\ln(1+e^{-\beta E_{\mathbf{k}\sigma}})+\sum_{\mathbf{k}}(\xi^{(s)}_{\mathbf{k}}-E_{\mathbf{k}})+N\frac{\Delta_{\mathbf{Q}}^{2}}{V_{0}}+\mu N+\frac{N}{n}\overline{m}h_{cor},$$ (7) $$h_{cor}=-\frac{n}{N}\sum_{\mathbf{k}\sigma}f(E_{\mathbf{k}\sigma})\frac{\partial E_{\mathbf{k}\sigma}}{\partial\overline{m}}+\frac{n}{N}\sum_{\mathbf{k}}\frac{\partial\xi_{\mathbf{k}}^{(s)}}{\partial\overline{m}}\Big(1-\frac{\xi_{\mathbf{k}}^{(s)}}{E_{\mathbf{k}}}\Big),$$ (8) $$\overline{m}=\frac{n}{N}\sum_{\mathbf{k}\sigma}\sigma f(E_{\mathbf{k}\sigma}),$$ (9) $$\Delta_{\mathbf{Q}}=\frac{V_{0}}{N}\sum_{\mathbf{k}}\eta_{\mathbf{k}}^{2}\frac{1-f(E_{\mathbf{k}\uparrow})-f(E_{\mathbf{k}\downarrow})}{2E_{\mathbf{k}}}\Delta_{\mathbf{Q}},$$ (10) $$n=n_{\uparrow}+n_{\downarrow}=\frac{n}{N}\sum_{\mathbf{k}\sigma}\{u_{\mathbf{k}}^{2}f(E_{\mathbf{k}\sigma})+v_{\mathbf{k}}^{2}[1-f(E_{\mathbf{k},-\sigma})]\},$$ (11) where $\mathcal{F}(T,H,\mu;\overline{m},h_{cor},\Delta_{\mathbf{Q}},n)$ is the system free-energy functional for the case of a fixed number of particles Koponen (we fix the band filling at the value $n=0.97$), $V_{0}$ is the interaction potential, $u_{\mathbf{k}}$, $v_{\mathbf{k}}$ are the Bogolyubov coherence coefficients, $f(E_{\mathbf{k}\sigma})$ is the Fermi distribution, and $n_{\sigma}$ is the spin-subband filling. The physical solution is the one with the particular $\mathbf{Q}$ which minimizes the free energy $F$, which in turn is obtained from $\mathcal{F}$ by evaluating the latter at the values of the parameters solving Eqs. (8)-(11). The state with $\mathbf{Q}=0$ is called the BCS state, and that with $\mathbf{Q}\neq 0$ the FF state. The quasiparticle spectrum in the paired state is characterized by the energies (cf. also Ref. Shimahara ) $$E_{\mathbf{k}\sigma}\equiv E_{\mathbf{k}}+\sigma\xi^{(a)}_{\mathbf{k}},\qquad E_{\mathbf{k}}\equiv\sqrt{\xi^{(s)2}_{\mathbf{k}}+|\Delta_{\mathbf{k},\mathbf{Q}}|^{2}},$$ (12) $$\xi^{(s)}_{\mathbf{k}}\equiv\frac{1}{2}(\xi_{\mathbf{k}+\mathbf{q}\uparrow}+\xi_{-\mathbf{k}+\mathbf{q}\downarrow}),\qquad\xi^{(a)}_{\mathbf{k}}\equiv\frac{1}{2}(\xi_{\mathbf{k}+\mathbf{q}\uparrow}-\xi_{-\mathbf{k}+\mathbf{q}\downarrow}).$$ (13) Eqs. (8)-(11) are solved by numerical integration over the reciprocal space. We use procedures from the GNU Scientific Library GSL as solvers. 
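Equation (4) ties the mass splitting directly to the polarization and the band filling, since it implies $\Delta m=m_{B}\overline{m}/(1-n)$. As a quick numerical cross-check, a minimal Python sketch using only $n$, $m_{av}$, and the value of $\overline{m}$ quoted in Table I for the $s$-wave solution reproduces the tabulated $\Delta m$:

```python
# Spin-dependent masses from the band-narrowing factor, Eq. (4):
#   m_sigma / m_B = (1 - n/2)/(1 - n) - sigma * mbar / (2(1 - n))
# Input values are those of Sec. II and Table I (s-wave, H = 10.01 T).
n    = 0.97        # band filling
m_av = 100.0       # average quasiparticle mass, in units of m_0
mbar = 0.0129431   # spin polarization (Table I)

q_av = (1 - n/2) / (1 - n)               # m_av / m_B
m_B  = m_av / q_av                       # bare band mass (about 5.83 m_0)
m_up = m_B * (q_av - mbar/(2*(1 - n)))   # sigma = +1 (spin-up)
m_dn = m_B * (q_av + mbar/(2*(1 - n)))   # sigma = -1 (spin-down)
dm   = m_dn - m_up                       # mass splitting Delta m

print(round(dm, 5))   # 2.51322, matching Delta m in Table I
```

The average $(m_{\uparrow}+m_{\downarrow})/2$ stays pinned at $m_{av}$ by construction, so the polarization only redistributes the mass between the two spin directions.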
For the SIM case, $h_{cor}=0$ and we solve only Eqs. (9)-(11). The numerical procedure has been elaborated in detail elsewhere. JKPHD Here, for completeness, we also provide in Tables I and II the numerical values of selected parameters for the situations with the $s$-wave and the $d$-wave symmetries of the superconducting gap, respectively. The quantity $F_{NS}$ is the free energy of the normal state, and therefore $\Delta F$ is the condensation energy. Also, $\Delta m\equiv m_{\downarrow}-m_{\uparrow}$ is the mass difference and $h_{cor\,FS}$ is the correlation field value in the normal state. The free energies are calculated per elementary cell. The numerical accuracy is at the level of the last digit specified.

Table I. Equilibrium values of mean-field variables and related quantities for the $s$-wave solution with $H=10.01\textrm{ T}$ and $T=0.02\textrm{ K}$.

$\overline{m}$ = 0.0129431;  $\Delta m\,(m_{0})$ = 2.51322
$h_{cor}\,(\mathrm{K})$ = -3.08230;  $h_{cor\,FS}\,(\mathrm{K})$ = -3.26546
$\Delta_{\mathbf{Q}}\,(\mathrm{K})$ = 1.38922;  $|\mathbf{Q}|\,(\textrm{\AA}^{-1})$ = 0.00947
$\mu\,(\mathrm{K})$ = 126.287;  $|\mathbf{Q}|/\Delta k_{F}$ = 1.08
$F\,(\mathrm{K})$ = 61.18200288;  $\Delta F\,(\mathrm{K})\equiv F_{NS}-F$ = -0.00111351

Table II. Equilibrium values of mean-field variables and related quantities for the $d$-wave solution with $H=20.01\textrm{ T}$ and $T=0.1\textrm{ K}$.

$\overline{m}$ = 0.0268690;  $\Delta m\,(m_{0})$ = 5.21729
$h_{cor}\,(\mathrm{K})$ = -6.40870;  $h_{cor\,FS}\,(\mathrm{K})$ = -6.53133
$\Delta_{\mathbf{Q}}\,(\mathrm{K})$ = 1.27455;  $|\mathbf{Q}|\,(\textrm{\AA}^{-1})$ = 0.0183
$\mu\,(\mathrm{K})$ = 126.416;  $|\mathbf{Q}|/\Delta k_{F}$ = 1.15
$F\,(\mathrm{K})$ = 61.04342125;  $\Delta F\,(\mathrm{K})\equiv F_{NS}-F$ = -0.00142436

The input parameters in our method have the following values: the band filling $n=0.97$, the lattice constant $a=4.62\textrm{ \AA}$, the interaction potential strength $V_{0}/n=90\textrm{ K}$ ($d$-wave) and $V_{0}/n=110\textrm{ K}$ ($s$-wave), the interaction potential width (cutoff) $\hbar\omega_{C}=17\textrm{ K}$, and the quasiparticle average mass $m_{av}=100\textrm{ m}_{0}$. The other parameters (in particular $\overline{m}$, $h_{cor}$, $\Delta_{\mathbf{Q}}$, $\mu$, $\mathbf{Q}$, and $\theta_{\mathbf{Q}}$) are determined from the solution procedure. Exemplary phase diagrams, obtained on the applied field $H$ - temperature $T$ plane, are exhibited in Figs. 1 and 2 for the $s$-wave and the $d$-wave cases, respectively. The angle $\theta_{\mathbf{Q}}$ is the angle between the maximum-gap (antinodal) direction and the Cooper pair momentum $\mathbf{Q}$. Note that in both situations the FF state is more robust (i.e., it fills a wider field-temperature range on the phase diagram) in the SDM case than in the SIM case. The mechanism of the FF-state stabilization by strong correlations has been analyzed in detail in Refs. JKJS ; JKJS2 ; Maska . For the sake of completeness, let us mention that this mechanism is based on a smaller Fermi wave vector splitting ($\Delta k_{F}\equiv k_{F\uparrow}-k_{F\downarrow}$) in the SDM situation. In such a case, the system can resist more efficiently the destabilizing influence of the applied magnetic field (hence the higher critical fields in the SDM case). 
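The reduction of the Fermi wave-vector splitting can be made quantitative with the Table I entries. The Python sketch below compares $\Delta k_{F}$ in the SDM and SIM cases at $H=10.01$ T; note that evaluating the Zeeman energy as $h=g\mu_{B}H/2$ with $g=2$ is our assumption here (the paper leaves $g$ unspecified), and the spin-dependent masses follow from Eq. (4). The negative correlation field $h_{cor}$ partly cancels the Zeeman term, shrinking the splitting by roughly a factor of three:

```python
import numpy as np

# Fermi wave-vector splitting Delta k_F = k_F_up - k_F_dn, SDM vs SIM.
# Parameters from Table I (s-wave, H = 10.01 T); g = 2 is assumed.
mu_B  = 0.6717            # Bohr magneton in K/T
H     = 10.01             # applied field (T)
h     = mu_B * H          # Zeeman energy (K), h = g*mu_B*H/2 with g = 2
h_cor = -3.08230          # correlation field (K), Table I
mu    = 126.287           # chemical potential (K), Table I
m_up, m_dn, m_av = 98.74, 101.26, 100.0   # masses in m_0, from Eq. (4)

def kF(m, e):
    # k_F = sqrt(2 m E)/hbar; constant prefactors drop out of the comparison
    return np.sqrt(m * e)

dk_SDM = kF(m_up, mu + (h + h_cor)) - kF(m_dn, mu - (h + h_cor))
dk_SIM = kF(m_av, mu + h) - kF(m_av, mu - h)
# dk_SDM comes out roughly three times smaller than dk_SIM
```

This is only an illustration of the stabilization mechanism discussed above, not a substitute for the self-consistent solution of Eqs. (8)-(11).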
Also, it turns out that the FF state can benefit to a greater extent than the BCS state from the smaller $\Delta k_{F}$, as the FF state has a higher spin-polarization, which is necessary for the appearance of SDM (for details see Refs. JKJS , JKJS2 , and JKPHD ). For the further analysis of the Andreev reflection, we take the parameters obtained along the $T=0.02\textrm{ K}\approx 0$ line in Figs. 1 and 2. Therefore, the results will have, strictly speaking, practical relevance for $T\ll T_{sc}$, with the superconducting transition temperature $T_{sc}\approx 2-3\textrm{ K}$, as can be seen from Figs. 1 and 2.

III Junction conductance: theoretical analysis

For the analysis of the NSJ conductance we take the superconducting state parameters obtained self-consistently (from the procedure presented above). We consider here only a two-dimensional NSJ for simplicity. The kinematics of the reflection may be analyzed by means of the Bogolyubov-de Gennes (BdG) equations BdG $$Eu_{\sigma}(\mathbf{x})=\hat{\mathcal{H}}_{0}u_{\sigma}(\mathbf{x})+\int d\mathbf{x}^{\prime}\Delta(\mathbf{s},\mathbf{r})v_{\sigma}(\mathbf{x}^{\prime}),$$ (14) $$Ev_{\sigma}(\mathbf{x})=-\hat{\mathcal{H}}_{0}v_{\sigma}(\mathbf{x})+\int d\mathbf{x}^{\prime}\Delta^{*}(\mathbf{s},\mathbf{r})u_{\sigma}(\mathbf{x}^{\prime}),$$ (15) where $\mathbf{s}=\mathbf{x}-\mathbf{x}^{\prime}$, $\mathbf{r}=(\mathbf{x}+\mathbf{x}^{\prime})/2$, $\sigma=\pm 1$ is the spin quantum number of the incoming quasiparticle, and $u_{\sigma}(\mathbf{x})$ and $v_{\sigma}(\mathbf{x})$ are the particle and hole wave-function components. 
The one-particle Hamiltonian is given by $$\hat{\mathcal{H}}_{0}(\mathbf{r})=-\nabla\frac{\hbar^{2}}{2\,m(\mathbf{r})}\nabla-\sigma h-\sigma h_{cor}(\mathbf{r})-\mu+V(\mathbf{r}),$$ (16) where we have used the effective mass approximation Burt ; Mortensen to express the kinetic part as $\nabla\frac{\hbar^{2}}{2\,m(x)}\nabla$ with $m(\mathbf{r})\equiv m(x)=m_{av}\Theta(-x)+m_{\sigma}\Theta(x)$, similarly as in Refs. Annunziata ; Annunziata2 ; Annunziata3 ; Mortensen . The correlation field is nonzero only on the superconducting side of the junction ($h_{cor}(\mathbf{r})=h_{cor}\Theta(x)$). Also, $\mathbf{r}=(x,y)$ and the interface scattering potential is chosen as a delta function of strength $\tilde{H}$, i.e., $V(\mathbf{r})=\tilde{H}\delta(x)$. The gap function can be Fourier transformed as follows $$\Delta(\mathbf{s},\mathbf{r})=\int d\mathbf{k}\,e^{i\mathbf{k}\mathbf{s}}\tilde{\Delta}(\mathbf{k},\mathbf{r})=\int d\mathbf{k}\,e^{i\mathbf{k}\mathbf{s}}\Delta_{\mathbf{k},\mathbf{Q}}\,e^{i\mathbf{Q}\mathbf{r}}\,\Theta(x),$$ (17) with $\Delta_{\mathbf{k},\mathbf{Q}}$ as in Eq. (6), but with the original set of coordinates rotated by $\alpha$ (cf. Fig. 3). Explicitly, the superconducting gap we use from now on has the form (in the new coordinates) $$\Delta_{\mathbf{k},\mathbf{Q}}=\Delta_{\mathbf{Q}}\Big(\cos{(ak_{x}\cos{\alpha}-ak_{y}\sin{\alpha})}-\cos{(ak_{y}\cos{\alpha}+ak_{x}\sin{\alpha})}\Big).$$ (18) We neglect the proximity effects by assuming a step-like gap function. To solve the BdG equations we make the plane-wave ansatz. 
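The rotated form factor of Eq. (18) is simple enough to evaluate directly. A small Python sketch (using the CeCoIn${}_{5}$ lattice constant quoted in Sec. II) shows how the nodal directions move with the rotation angle $\alpha$:

```python
import numpy as np

a = 4.62  # lattice constant of CeCoIn5 (Angstrom), Sec. II

def gap(kx, ky, delta_Q=1.0, alpha=0.0):
    """Rotated d-wave gap of Eq. (18); alpha is the angle between the
    crystal axes and the junction coordinate frame."""
    return delta_Q * (np.cos(a * (kx*np.cos(alpha) - ky*np.sin(alpha)))
                      - np.cos(a * (ky*np.cos(alpha) + kx*np.sin(alpha))))

# alpha = 0: nodes along the zone diagonals k_x = +/- k_y;
# alpha = pi/4: a node rotates onto the k_x axis.
```

In particular, `gap(k, k, alpha=0)` vanishes identically, while for $\alpha=\pi/4$ the same nodal line sits along $k_{y}=0$, which is what makes the interface orientation of a $d$-wave superconductor matter for the conductance spectra.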
Namely, we assume that the two-component pair wave function has the form $$\psi(\mathbf{r},\sigma)\equiv\left(\begin{array}{c}u_{\sigma}(\mathbf{r})|\sigma\rangle\\ v_{\sigma}(\mathbf{r})|\overline{\sigma}\rangle\end{array}\right)=e^{i\mathbf{k}\mathbf{r}}\left(\begin{array}{c}\tilde{u}e^{i\mathbf{q}\mathbf{r}}\,|\sigma\rangle\\ \tilde{v}e^{-i\mathbf{q}\mathbf{r}}\,|\overline{\sigma}\rangle\end{array}\right),$$ (19) with $\tilde{u}$ and $\tilde{v}$ constants and with $\overline{\sigma}\equiv-\sigma$ (we have also dropped the $\sigma$ indices of $\tilde{u}$ and $\tilde{v}$). We also remind the reader that $\mathbf{q}=\mathbf{Q}/2$. By substituting (17) and (19) into the BdG equations (14) and (15), after some algebra we obtain the following matrix equation $$\left(\begin{array}{cc}-E+\xi_{\mathbf{k}+\mathbf{q},\sigma}&\Delta_{\mathbf{k},\mathbf{Q}}\\ \Delta_{-\mathbf{k},\mathbf{Q}}^{*}&-E-\xi_{\mathbf{k}-\mathbf{q},\overline{\sigma}}\end{array}\right)\left(\begin{array}{c}\tilde{u}\,|\sigma\rangle\\ \tilde{v}\,|\overline{\sigma}\rangle\end{array}\right)=0,$$ (20) where the unpaired quasiparticle energies $\xi_{\mathbf{k}\sigma}$ are given by Eq. (2) or (3). Eq. (20) gives the dispersion relations for quasiparticles and quasiholes in the superconductor $$E=E_{\mathbf{k}\pm}=\left\{\begin{array}{l}\xi_{\mathbf{k}}^{(a)}\pm\sqrt{\xi_{\mathbf{k}}^{(s)2}+\Delta_{\mathbf{k},\mathbf{Q}}\Delta^{*}_{-\mathbf{k},\mathbf{Q}}}\quad\textrm{for }\sigma=\uparrow,\\ -\xi_{-\mathbf{k}}^{(a)}\pm\sqrt{\xi_{-\mathbf{k}}^{(s)2}+\Delta_{\mathbf{k},\mathbf{Q}}\Delta^{*}_{-\mathbf{k},\mathbf{Q}}}\quad\textrm{for }\sigma=\downarrow,\end{array}\right.$$ (21) where $\xi_{\mathbf{k}}^{(s,a)}$ have been defined in Eq. (13). One may check that the above equation is in accordance with Eq. 
(12), as $E_{\mathbf{k}+}=E_{\mathbf{k}\uparrow}$ (quasiparticle) and $E_{\mathbf{k}-}=-E_{\mathbf{k}\downarrow}$ (quasihole) for an incoming particle with spin $\sigma=\uparrow$, as well as $E_{\mathbf{k}+}=E_{-\mathbf{k}\downarrow}$ (quasiparticle) and $E_{\mathbf{k}-}=-E_{-\mathbf{k}\uparrow}$ (quasihole) for an incoming particle with spin $\sigma=\downarrow$. This holds as long as $\Delta^{*}_{-\mathbf{k},\mathbf{Q}}=\Delta^{*}_{\mathbf{k},\mathbf{Q}}$, which is true for any real $\mathbf{k}$. As already mentioned, we study the FF type of the FFLO superconducting state, in which $\Delta(\mathbf{r})=\Delta_{\mathbf{Q}}e^{i2\mathbf{q}\mathbf{r}}$, and set the direction of the Cooper pair momentum $\mathbf{Q}=2\mathbf{q}$ as either perpendicular ($\mathbf{Q}=(Q,0)$) or parallel ($\mathbf{Q}=(0,Q)$) to the junction interface. The perpendicular configuration ($\mathbf{Q}=(Q,0)$) may lead to an accumulation of charge at the NSJ interface due to the normal and/or supercurrent present in the FF state. Therefore we pay principal attention to the parallel configuration. Parenthetically, the accumulation processes are very slow in the case of heavy quasiparticles. As we consider an electron injected from the conductor side of the junction (the junction geometry is presented in Fig. 
3), the corresponding wave functions can be expressed as (we have omitted the spin part for clarity) $$\psi_{<}(\mathbf{r})=\left(\begin{array}{c}1\\ 0\end{array}\right)e^{i\mathbf{k}\mathbf{r}}+a\left(\begin{array}{c}0\\ 1\end{array}\right)e^{i\mathbf{p}\mathbf{r}}+b\left(\begin{array}{c}1\\ 0\end{array}\right)e^{i\mathbf{k}^{\prime}\mathbf{r}},$$ (22) $$\psi_{>}(\mathbf{r})=d\left(\begin{array}{c}u_{1}e^{iq_{x}x}\\ v_{1}e^{-iq_{x}x}\end{array}\right)e^{i\mathbf{k}_{1}^{+}\mathbf{r}}+c\left(\begin{array}{c}u_{2}e^{iq_{x}x}\\ v_{2}e^{-iq_{x}x}\end{array}\right)e^{i\mathbf{k}_{2}^{+}\mathbf{r}},$$ (23) where $\psi_{<}(\mathbf{r})$ and $\psi_{>}(\mathbf{r})$ describe the wave function on the normal-metal and superconductor sides, respectively. The quasimomenta $\mathbf{k}_{1}^{+}$ (for the quasihole) and $\mathbf{k}_{2}^{+}$ (for the quasiparticle) are solutions of Eq. (21) for a given incident energy $E$, propagating in the positive $x$ direction. The translational symmetry of the junction along the $y$ direction implies conservation of the $y$ momentum component, namely $k_{y}=k^{\prime}_{y}=p_{y}=k_{1y}^{+}=k_{2y}^{+}$. All the wave vectors are presented in Fig. 4. We use boundary conditions with the appropriate masses (boundary conditions with different masses have been used before, e.g., in Refs. Annunziata, Annunziata2, Annunziata3) and the interface potential jump $\tilde{H}$; they are as follows $$\psi_{<}(\mathbf{r})|_{x=0}=\psi_{>}(\mathbf{r})|_{x=0},$$ (24) $$\frac{1}{m_{av}}\frac{\partial\psi_{<}(\mathbf{r})}{\partial x}\Big|_{x=0}=\frac{1}{m_{\sigma}}\frac{\partial\psi_{>}(\mathbf{r})}{\partial x}\Big|_{x=0}-\frac{2\tilde{H}}{\hbar^{2}}\psi_{<}(\mathbf{r})|_{x=0}.$$ (25) Those conditions lead to the following set of four equations (written for $y=0$; if $y\neq 0$, additional terms $e^{\pm iq_{y}y}$ appear, but they do not alter the solution, so they are usually omitted for clarity)
for the amplitudes ($a,b,c,d$): $$1+b-cu_{2}-du_{1}=0,$$ (26) $$a-cv_{2}-dv_{1}=0,$$ (27) $$\frac{ik_{x}(1-b)}{m_{av}}-\frac{cu_{2}i(q_{x}+k_{2x}^{+})}{m_{\sigma}}-\frac{du_{1}i(q_{x}+k_{1x}^{+})}{m_{\sigma}}+\frac{2\tilde{H}}{\hbar^{2}}(1+b)=0,$$ (28) $$\frac{aip_{x}}{m_{av}}-\frac{cv_{2}i(k_{2x}^{+}-q_{x})}{m_{\overline{\sigma}}}-\frac{dv_{1}i(k_{1x}^{+}-q_{x})}{m_{\overline{\sigma}}}+\frac{2\tilde{H}}{\hbar^{2}}a=0,$$ (29) which are similar to those in, e.g., Ref. Mortensen, except that in our case the vectors are replaced by their $x$-components, e.g. $k\leftrightarrow k_{x}$, $p\leftrightarrow p_{x}$, and the SDM are properly accounted for (obviously, in the SIM case we have $m_{\uparrow}=m_{\downarrow}=m_{av}$). From the solution of Eqs. (26)-(29) one can obtain the probabilities of hole reflection $p^{\sigma}_{rh}=|a|^{2}\frac{\Re[p_{x}]}{k_{x}}$, particle reflection $p^{\sigma}_{re}=|b|^{2}$, quasiparticle transmission $$p^{\sigma}_{te}=|c|^{2}m_{av}\frac{\left(\frac{|u_{2}|^{2}}{m_{\sigma}}-\frac{|v_{2}|^{2}}{m_{\overline{\sigma}}}\right)\Re[k^{+}_{2x}]+\left(\frac{|u_{2}|^{2}}{m_{\sigma}}+\frac{|v_{2}|^{2}}{m_{\overline{\sigma}}}\right)q_{x}}{k_{x}},$$ (30) and quasihole transmission $$p^{\sigma}_{th}=|d|^{2}m_{av}\frac{\left(\frac{|u_{1}|^{2}}{m_{\sigma}}-\frac{|v_{1}|^{2}}{m_{\overline{\sigma}}}\right)\Re[k^{+}_{1x}]+\left(\frac{|u_{1}|^{2}}{m_{\sigma}}+\frac{|v_{1}|^{2}}{m_{\overline{\sigma}}}\right)q_{x}}{k_{x}},$$ (31) where the $\sigma$ superscript indicates the spin of the incoming electron. In the following we use the dimensionless barrier strength $Z\equiv 2m_{av}\tilde{H}/(k_{F}\hbar^{2})$, where we define the Fermi wave vector $k_{F}$ using its zero-field value, $k_{F}=\frac{1}{\hbar}\sqrt{2m_{av}\mu}$.
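Once the kinematic inputs ($k_{x}$, $p_{x}$, $k_{1x}^{+}$, $k_{2x}^{+}$, the coherence factors $u_{i}$, $v_{i}$, and the masses) are fixed, Eqs. (26)-(29) form a $4\times 4$ complex linear system for $(a,b,c,d)$. A minimal numerical sketch is given below; all input values are hypothetical placeholders, not solutions of the full matching problem (in the actual calculation they follow from Eq. (21) at the given energy and angle of incidence).

```python
import numpy as np

# Hypothetical, illustrative inputs (units with hbar = 1); in the actual
# calculation k_x, p_x, k_{1x}^+, k_{2x}^+, u_i, v_i follow from the BdG
# dispersion (21) at a given energy E and angle theta.
kx, px = 1.0, 0.95
k1x, k2x = 0.9 + 0.05j, 1.05 + 0.0j     # quasihole / quasiparticle momenta
u1, v1, u2, v2 = 0.4, 0.9, 0.9, 0.4     # coherence factors
qx = 0.02                               # FF pair-momentum component
m_av, m_up, m_dn = 1.0, 1.5, 0.8        # spin-dependent masses (SDM)
H2 = 0.3                                # barrier term 2*H_tilde/hbar^2

# Unknown vector x = (a, b, c, d); rows correspond to Eqs. (26)-(29),
# with the incident-wave contributions moved to the right-hand side.
M = np.array([
    [0.0, 1.0, -u2, -u1],                                     # Eq. (26)
    [1.0, 0.0, -v2, -v1],                                     # Eq. (27)
    [0.0, -1j * kx / m_av + H2,
     -1j * u2 * (qx + k2x) / m_up,
     -1j * u1 * (qx + k1x) / m_up],                           # Eq. (28)
    [1j * px / m_av + H2, 0.0,
     -1j * v2 * (k2x - qx) / m_dn,
     -1j * v1 * (k1x - qx) / m_dn],                           # Eq. (29)
], dtype=complex)
rhs = np.array([-1.0, 0.0, -1j * kx / m_av - H2, 0.0], dtype=complex)

a, b, c, d = np.linalg.solve(M, rhs)
# The solution satisfies the four boundary conditions to machine precision.
assert np.allclose(M @ np.array([a, b, c, d]), rhs)
```

The amplitudes obtained this way feed directly into the probabilities (30)-(31).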
Note also that we do not use the assumption $k=k^{\prime}=p=k_{1}^{+}=k_{2}^{+}\approx k_{F}$, utilized at this point in the majority of papers on Andreev reflection spectroscopy, because we deal with heavy quasiparticles for which $\mu$ is of the order of $100\textrm{ K}$. Therefore the usual assumption $\mu\gg E$ is, strictly speaking, not applicable in the present situation. IV Results and physical discussion The differential conductance ($G\equiv dI/dV$) can be obtained from the reflection and transmission probabilities BTK ; Chaudhuri in a straightforward manner $$G_{ns}^{\sigma}=\frac{1}{2}\int_{-\pi/2}^{\pi/2}d\theta\cos{\theta}\,[1-p^{\sigma}_{re}(E,\theta)+p^{\sigma}_{rh}(E,\theta)].$$ (32) The final result of our calculation is the total conductance $G$, averaged over spin and normalized with respect to the conductance $G_{nn}^{\sigma}$ of the junction with $\Delta=0$ but with the same remaining parameters ($m_{\sigma}$, $\mu$, $h_{cor}$) as in the superconducting state. Namely, $$G=\frac{G_{ns}^{\uparrow}+G_{ns}^{\downarrow}}{G_{nn}^{\uparrow}+G_{nn}^{\downarrow}}.$$ (33) This quantity is exhibited in the following figures, sometimes together with the spin-resolved conductance $G^{\sigma}\equiv G_{ns}^{\sigma}/G_{nn}^{\sigma}$. We take the barrier strength equal to $Z=0$ (contact limit), $Z=0.5$ (intermediate limit), and $Z=5$ (tunneling limit). The case of $Z=5$ reflects not only the situation for a planar NSJ with a thick insulating layer, but also that encountered in Scanning Tunneling Spectroscopy (STS) experiments. Giaever ; Eschrig ; Fischer Our goal in the following is to identify novel, model-independent features of the strongly-correlated situation (i.e., with SDM); those features should not depend on the assumed dispersion relation or the pairing-potential strength. IV.1 $s$-wave pairing symmetry In Fig. 5 the conductance for the $s$-wave gap symmetry and the $\mathbf{Q}$ vector oriented perpendicular to the junction is presented.
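The angular average in Eq. (32) is a simple one-dimensional quadrature. The sketch below illustrates it with toy reflection probabilities; the functions `p_re` and `p_rh` are hypothetical stand-ins chosen so that the integral has a closed form, not the actual solutions of Eqs. (26)-(29).

```python
import numpy as np

# Toy angle-dependent probabilities standing in for the solutions of
# Eqs. (26)-(29); they are NOT the actual FFLO-junction results.
def p_re(E, theta):   # particle (normal) reflection
    return 0.2 * np.cos(theta) ** 2

def p_rh(E, theta):   # hole (Andreev) reflection
    return 0.6 * np.cos(theta) ** 2

def G_ns(E, n_theta=2001):
    """Angular average of Eq. (32)."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    integrand = np.cos(theta) * (1 - p_re(E, theta) + p_rh(E, theta))
    # The integrand vanishes at the endpoints, so a plain Riemann sum
    # coincides with the trapezoidal rule here.
    return 0.5 * np.sum(integrand) * (theta[1] - theta[0])

# For these toy probabilities the integral is analytic:
# (1/2) * [2 + 0.4 * (4/3)] = 1 + 4/15
assert abs(G_ns(0.0) - (1 + 4 / 15)) < 1e-4
```

The normalization of Eq. (33) then amounts to evaluating the same quadrature with $\Delta=0$ for each spin channel and taking the ratio.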
It can be seen that there are peaks in the conductance originating from AR processes of quasiparticles with different spins, which take place when the energy $E$ of the incoming electron fits into the so-called Andreev Window (AW); see Refs. JKC (Fig. 3), Partyka, and Ref. JKPHD (Chapter 5, Figs. 5.1d, 5.4b) for more details. These peaks are separated by a distance equal to twice the Zeeman energy ($2h=g\mu_{B}H$) only in the case without strong correlations (SIM). In the SDM case the correlations compensate the Zeeman splitting (by means of $h_{cor}$ and $m_{\sigma}$, cf. Refs. JKJS ; JKJS2) and, as a result, the conductance peaks are closer than twice the Zeeman energy. We identify this feature as a hallmark of strong correlations in the superconducting state. Another interesting feature differentiating the SIM and SDM cases is the absence of the $\sigma=\uparrow$ peak for SDM when the magnetic field $H\gtrsim 12\textrm{ T}$. For such fields the junction is transparent to incoming particles with $\sigma=\uparrow$, because the Andreev window JKC ; Partyka falls below $E=0$. In other words, the quasiparticle energy $E_{\mathbf{k}+}$ within the FF superconductor is below zero around the whole Fermi surface. This leads to breaking of Cooper pairs and produces a normal-state region filling the whole angular space around the Fermi surface (see Fig. 4b, SDM case). Since there are normal particles with $\sigma=\uparrow$ within the FF superconductor, the incoming $\sigma=\uparrow$ quasiparticle does not feel the presence of the superconducting gap, and the junction is transparent, which yields $G_{\uparrow}\approx 1$. In all the following figures the parallel orientation of the $\mathbf{Q}$ vector with respect to the NSJ interface has been assumed. In Fig. 6 the NSJ conductance for the $s$-wave gap symmetry is presented. Again, at high magnetic fields $H\gtrsim 12\textrm{ T}$ the junction is transparent to incoming quasiparticles with $\sigma=\uparrow$.
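The transparency condition discussed above can be illustrated directly with the closed form of Eq. (21): on the Fermi surface the symmetric part $\xi^{(s)}$ vanishes, and $E_{\mathbf{k}+}$ drops below zero once the antisymmetric part $\xi^{(a)}$ (which carries the field dependence) exceeds the pairing term. A toy numerical check, with all values hypothetical:

```python
import numpy as np

# Closed form of E_{k+} from Eq. (21); xi_a carries the (Zeeman and
# correlation) field dependence, xi_s vanishes on the Fermi surface.
def E_plus(xi_a, xi_s, delta):
    return xi_a + np.sqrt(xi_s ** 2 + abs(delta) ** 2)

delta = 0.5                            # illustrative pairing amplitude
assert E_plus(-0.2, 0.0, delta) > 0    # weak field: spectrum gapped
assert E_plus(-0.8, 0.0, delta) < 0    # strong field: E_{k+} < 0, i.e.
                                       # the spin-up channel is transparent
```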
In the present case it is difficult to discern the characteristic features of the conductance from the spin-up and spin-down channels in such a manner that the splitting of the peaks could be measured. For this purpose, the spin-resolved signals $G_{\sigma}$ would have to be singled out, as shown in Figs. 6b and 6d, because the spin-specific features of the total conductance are subtle and could be smeared out at finite temperature or due to other effects (e.g. inelastic scattering). Again, the characteristic features of the spin-up and spin-down signals are separated by a distance equal to twice the Zeeman energy for SIM (Fig. 6a) and are closer for SDM (Fig. 6d). IV.2 $d$-wave pairing symmetry In Fig. 7 the conductance in the case of the FF state with $\theta_{\mathbf{Q}}=0$ is presented. Such a phase is stable in the high-field regime (see Figs. 1 and 2). Note that by fixing the direction of $\mathbf{Q}$ with respect to the NSJ interface we also fix the angle $\alpha$ (see Fig. 3), as $\theta_{\mathbf{Q}}$ is determined from the results presented in Sec. II. Namely, the parallel orientation of the vector $\mathbf{Q}$ with respect to the junction interface implies that $\alpha=0$ for $\theta_{\mathbf{Q}}=0$ (cf. Fig. 3b) and $\alpha=\pi/4$ for $\theta_{\mathbf{Q}}=\pi/4$ (cf. Fig. 3c). In the case with $\theta_{\mathbf{Q}}=0$ no remarkable, model-independent differences between the SDM and the SIM cases appear, as all peaks present in Fig. 7 come from $\sigma=\downarrow$ electrons (see Fig. 7a, where the $\sigma=\uparrow$ signal has been plotted). The conductance spectra for the $d$-wave FF phase with $\theta_{\mathbf{Q}}=\pi/4$ (with $\alpha=\pi/4$) are presented in Fig. 8. As in the $s$-wave case, and for the same reasons, at high magnetic fields the junction is transparent to spin-up quasiparticles in the SDM case. Only for $H\lesssim 14.4\textrm{ T}$ were we able to discern characteristic, spin-specific features of the spectra (see Figs. 8a and 8d for the spin-resolved spectra).
These features are again split by twice the Zeeman energy for SIM, and are closer for SDM. To identify the spin-specific features, the spin-resolved spectra have to be analyzed, similarly as in the $s$-wave case. Finally, in Fig. 9 we show the conductance spectra for the $d$-wave BCS state with a ($100$) contact. In this case, in the tunneling limit ($Z=5$), the peaks originating from AR of quasiparticles with different spins are most clearly visible. As previously, these peaks are split by twice the Zeeman energy for SIM, and are closer for SDM. We identify this case as the most promising for experimental verification, as discussed in the following. V Relation to experiment Our results imply that the splitting between the spin-up and spin-down features of the conductance spectra is equal to twice the Zeeman energy only in the non-correlated case (SIM). In the strongly-correlated case, due to the presence of spin-dependent masses (SDM) $m_{\sigma}$ and the correlation field $h_{cor}$, the separation of the spin-up and spin-down features differs essentially. In the present case of a two-dimensional, correlated electron gas this separation is smaller (because $m_{\sigma}$ and $h_{cor}$ compensate the Zeeman term; typically $h_{cor}\approx 0.5\times(-h)$, cf. Refs. JKJS and JKJS2), but in general it may be larger. For example, in the two-dimensional Hubbard model our recent calculations SGA typically yield $h_{cor}\approx 5\times h$, and therefore in that model the correlations enhance the splitting of the conductance peaks. It should in principle be possible to measure the conductance-peak splitting experimentally. Especially the BCS case with a ($100$) contact and high barrier strength $Z$ (Fig.
9c,f) looks promising, as the peaks are clearly visible, and the BCS state exists at lower magnetic fields than FFLO, which should make the whole analysis simpler (the orbital effects, Benistant ; Takagaki ; Asano ; Hoppe which may be essential especially on the normal-metal side, are less important in that regime). Another feature differentiating the SIM case from the SDM situation is the absence of the spin-up features of the conductance spectra at high magnetic fields for the FF state. It is difficult to say whether this feature is model-independent or characteristic of the model with the dispersion relation of a free-electron gas with renormalized masses. Andreev reflection spectroscopy in a magnetic field has already been reported for a few compounds. Moore ; Kacmarcik ; Park2 ; Park3 ; Dmitriev1 ; Dmitriev2 For example, in Mo${}_{3}$Sb${}_{7}$ point-contact AR spectroscopy led to the identification of this compound as an unconventional superconductor. Dmitriev1 ; Dmitriev2 Such measurements have also been performed on pure and Cd-doped CeCoIn${}_{5}$. Park2 ; Park3 This compound, as a heavy-fermion superconductor and possibly a host to the FFLO phase, is a natural candidate for verification of the present results. The spectra presented in Fig. 4 of Ref. Park2 resemble our Fig. 9e, with the splitting between the spin-up and spin-down features of the order of $8\textrm{ T}$ in fields of approximately $2\textrm{ T}$. This might indicate that $h_{cor}\upuparrows h$ ($h_{cor}$ enhances $h$), but in CeCoIn${}_{5}$ the one-band model assumed in our calculations may not be sufficient Fogelstrom and therefore our interpretation is only a speculation (additionally, in the aforementioned Fig. 4 of Ref. Park2, antiferromagnetism also plays a role in the presented spectra). On the other hand, for a two-band model with strong correlations the $h_{cor}$ terms are also present (for both bands), and our conclusions should also hold.
Let us note that, in view of the present results, the AR spectra for the case of the BCS state with a ($100$) contact and in the tunneling limit (high $Z$) would be most helpful in detecting the effect of strong correlations in superconductors. Such a configuration can be studied both by Andreev reflection spectroscopy of a planar junction and by the Scanning Tunneling Spectroscopy technique. VI Conclusions In this paper we have provided a detailed analysis of the conductance spectra of a normal-metal - strongly-correlated-superconductor junction. The splitting of the conductance peaks in the strongly correlated case differs from that in the uncorrelated case: it is equal to twice the Zeeman energy only in the latter case, while in the correlated case it may be smaller or larger, depending on the details of the electronic structure. We identify this feature as one of the hallmarks of strong correlations in the superconducting phase, as it should hold true for other models with different dispersion relations. It is most clearly visible in the case of a BCS superconductor with a ($100$) contact and in the tunneling regime (high $Z$). In the other cases it is also present, but the spin-resolved conductances must be analyzed in order to identify the splitting unambiguously. It would be interesting to examine other spectroscopic methods, such as Josephson tunneling in the SQUID geometry, for systems with strong correlations (and the specific features resulting from them: the spin-dependent masses and the correlation field). Such an analysis should be carried out separately, as it may lead to a decisively distinct interference pattern in an applied magnetic field. Acknowledgements The work was supported by the Ministry of Science and Higher Education, Grants Nos. N N202 128736 and N N202 173735, as well as by the Foundation for Polish Science under the "TEAM" program. References (1) P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964) (2) A. Larkin and Y. Ovchinnikov, J. Exp. Theor.
Phys. 47, 1136 (1964) (3) A. Larkin and Y. Ovchinnikov, Sov. Phys. JETP 20, 762 (1965) (4) Y. Matsuda and H. Shimahara, J. Phys. Soc. Jpn. 76, 051005 (2007) (5) K. Kakuyanagi, M. Saitoh, K. Kumagai, S. Takashima, M. Nohara, H. Takagi, and Y. Matsuda, Phys. Rev. Lett. 94, 047602 (2005) (6) K. Kumagai, M. Saitoh, T. Oyaizu, Y. Furukawa, S. Takashima, M. Nohara, H. Takagi, and Y. Matsuda, Phys. Rev. Lett. 97, 227002 (2006) (7) A. Bianchi, R. Movshovich, C. Capan, P. G. Pagliuso, and J. L. Sarrao, Phys. Rev. Lett. 91, 187004 (2003) (8) K. Kumagai, H. Shishido, T. Shibauchi, and Y. Matsuda, Phys. Rev. Lett. 106, 137004 (2011) (9) B.-L. Young, R. R. Urbano, N. J. Curro, J. D. Thompson, J. L. Sarrao, A. B. Vorontsov, and M. J. Graf, Phys. Rev. Lett. 98, 036402 (2007) (10) M. Kenzelmann, T. Strässle, C. Niedermayer, M. Sigrist, B. Padmanabhan, M. Zolliker, A. D. Bianchi, R. Movshovich, E. D. Bauer, J. L. Sarrao, and J. D. Thompson, Science 321, 1652 (2008) (11) G. Koutroulakis, M. D. Stewart, V. F. Mitrović, M. Horvatić, C. Berthier, G. Lapertot, and J. Flouquet, Phys. Rev. Lett. 104, 087001 (2010) (12) R. Ikeda, Phys. Rev. B 81, 060510 (2010) (13) Y. Hatakeyama and R. Ikeda, Phys. Rev. B 83, 224518 (2011) (14) J. Singleton, J. A. Symington, M.-S. Nam, A. Ardavan, M. Kurmoo, and P. Day, J. Phys.: Condens. Matter 12, L641 (2000) (15) B. Bergk, A. Demuer, I. Sheikin, Y. Wang, J. Wosnitza, Y. Nakazawa, and R. Lortz, Phys. Rev. B 83, 064506 (2011) (16) K. Cho, B. E. Smith, W. A. Coniglio, L. E. Winter, C. C. Agosta, and J. A. Schlueter, Phys. Rev. B 79, 220507 (2009) (17) P. Javorský, E. Colineau, F. Wastin, F. Jutier, J.-C. Griveau, P. Boulet, R. Jardin, and J. Rebizant, Phys. Rev. B 75, 184501 (2007) (18) J. K. Dong, H. Zhang, X. Qiu, B. Y. Pan, Y. F. Dai, T. Y. Guan, S. Y. Zhou, D. Gnida, D. Kaczorowski, and S. Y. Li, arXiv:1008.0679 (19) C. Pfleiderer, Rev. Mod. Phys. 81, 1551 (2009) (20) K. Cho, H. Kim, M. A. Tanatar, Y. J. Song, Y. S. Kwon, W. A. Coniglio, C. C. 
Agosta, A. Gurevich, and R. Prozorov, Phys. Rev. B 83, 060502 (2011) (21) R. Casalbuoni and G. Nardulli, Rev. Mod. Phys. 76, 263 (2004) (22) K. Machida, T. Mizushima, and M. Ichioka, Phys. Rev. Lett. 97, 120407 (2006) (23) J. Kinnunen, L. M. Jensen, and P. Törmä, Phys. Rev. Lett. 96, 110403 (2006) (24) T. K. Koponen, T. Paananen, J.-P. Martikainen, M. R. Bakhtiari, and P. Törmä, New Journal of Physics 10, 045014 (2008) (25) A. McCollam, S. R. Julian, P. M. C. Rourke, D. Aoki, and J. Flouquet, Phys. Rev. Lett. 94, 186401 (2005) (26) J. Spałek and P. Gopalan, Phys. Rev. Lett. 64, 2823 (1990) (27) P. Korbel, J. Spałek, W. Wójcik, and M. Acquarone, Phys. Rev. B 52, R2213 (1995) (28) J. Spałek and W. Wójcik, Spectroscopy of the Mott Insulators and Correlated Metals, Vol. 119 (Springer-Verlag, Berlin, 1995) pp. 41–65 (29) J. Bauer and A. C. Hewson, Phys. Rev. B 76, 035118 (2007) (30) S. Onari, H. Kontani, and Y. Tanaka, J. Phys. Soc. Jpn. 77, 023703 (2008) (31) SDM have also been observed in other heavy-fermion systems. Sheikin ; Takashita One should note at the outset that by spin dependent masses (or their enhancement) we understand a $\mathbf{k}$-independent feature of a very narrow band which is derived solely from the effect of correlation. The ordinary splitting into spin subbands in the Zeeman field is a separate effect. (32) J. Kaczmarczyk and J. Spałek, Phys. Rev. B 79, 214519 (2009) (33) J. Kaczmarczyk and J. Spałek, J. Phys.: Condens. Matter 22, 355702 (2010) (34) M. M. Maśka, M. Mierzejewski, J. Kaczmarczyk, and J. Spałek, Phys. Rev. B 82, 054509 (2010) (35) J. Kaczmarczyk, J. Jȩdrak, and J. Spałek, Acta Phys. Polon. A 118, 261 (2010) (36) Z.-J. Ying, M. Cuoco, C. Noce, and H.-Q. Zhou, Phys. Rev. B 78, 104523 (2008) (37) A. Andreev, Zh. Eksp. Teor. Fiz. 46, 1823 (1964) (38) A. Andreev, Sov. Phys. JETP 19, 1228 (1964) (39) G. E. Blonder, M. Tinkham, and T. M. Klapwijk, Phys. Rev. B 25, 4515 (1982) (40) Q. Cui, C.-R. Hu, J. Y. T. Wei, and K. Yang, Phys. Rev. 
B 73, 214514 (2006) (41) J. Kaczmarczyk, M. Sadzikowski, and J. Spałek, Physica C 471, 193 (2011) (42) T. Partyka, M. Sadzikowski, and M. Tachibana, Physica C 470, 277 (2010) (43) Y. Tanaka, Y. Asano, M. Ichioka, and S. Kashiwaya, Phys. Rev. Lett. 98, 077001 (2007) (44) D. Zhang, C. S. Ting, and C.-R. Hu, Phys. Rev. B 70, 172508 (2004) (45) V. Lukic and E. J. Nicol, Phys. Rev. B 76, 144508 (2007) (46) Y. Tanaka and S. Kashiwaya, Phys. Rev. Lett. 74, 3451 (1995) (47) C. Bruder, Phys. Rev. B 41, 4017 (1990) (48) S. Kashiwaya and Y. Tanaka, Rep. Prog. Phys. 63, 1641 (2000) (49) K. Argyropoulos and A. Dimoulas, Physica C 405, 77 (2004) (50) K. Maki, Phys. Rev. 148, 362 (1966) (51) W. K. Park and L. H. Greene, J. Phys.: Condens. Matter 21, 103203 (2009) (52) G. Goll, Adv. Solid State Phys. 45, 213 (2005) (53) G. Deutscher and P. Nozières, Phys. Rev. B 50, 13557 (1994) (54) M. A. N. Araújo and P. D. Sacramento, Phys. Rev. B 77, 134519 (2008) (55) M. A. N. Araújo and A. H. Castro Neto, Phys. Rev. B 75, 115133 (2007) (56) J. Jȩdrak, J. Kaczmarczyk, and J. Spałek, arXiv:1008.0021 (57) Q.-H. Wang, Z. D. Wang, Y. Chen, and F. C. Zhang, Phys. Rev. B 73, 092507 (2006) (58) K.-Y. Yang, W. Q. Chen, T. M. Rice, M. Sigrist, and F.-C. Zhang, New J. Phys. 11, 055053 (2009) (59) J. Spałek, Phys. Rev. B 38, 208 (1988) (60) J. Spałek and P. Gopalan, J. Phys. (France) 50, 2869 (1989) (61) J. Karbowski and J. Spałek, Phys. Rev. B 49, 1454 (1994) (62) J. Kaczmarczyk, Ph. D. Thesis, Jagiellonian University, Kraków, 2011 http://th-www.if.uj.edu.pl/ztms/download/phdTheses/Jan_Kaczmarczyk_doktorat.pdf (63) C. Petrovic, P. G. Pagliuso, M. F. Hundley, R. Movshovich, J. L. Sarrao, J. D. Thompson, Z. Fisk, and P. Monthoux, J. Phys. Condens. Matter 13, L337 (2001) (64) H. Shimahara, Phys. Rev. B 50, 12760 (1994) (65) M. Galassi, J. Davies, J. Theiler, B. Gough, G. Jungman, P. Alken, M. Booth, and F. Rossi, GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078. (66) P. G.
de Gennes, Superconductivity of Metals and Alloys (Westview Press, 1999) (67) M. G. Burt, Phys. Rev. B 50, 7518 (1994) (68) N. A. Mortensen, K. Flensberg, and A.-P. Jauho, Phys. Rev. B 59, 10176 (1999) (69) G. Annunziata, M. Cuoco, C. Noce, A. Romano, and P. Gentile, Phys. Rev. B 80, 012503 (2009) (70) G. Annunziata, M. Cuoco, P. Gentile, A. Romano, and C. Noce, Phys. Rev. B 83, 094507 (2011) (71) G. Annunziata, H. Enoksen, J. Linder, M. Cuoco, C. Noce, and A. Sudbø, Phys. Rev. B 83, 144520 (2011) (72) Boundary conditions with different masses have been used before, e.g., in Refs. Annunziata, Annunziata2, Annunziata3. (73) These equations are written for $y=0$. If $y\neq 0$, additional terms $e^{\pm iq_{y}y}$ appear, but they do not alter the solution, so they are usually omitted for clarity. (74) S. Chaudhuri and P. F. Bagwell, Phys. Rev. B 51, 16936 (1995) (75) I. Giaever, Phys. Rev. Lett. 5, 147 (1960) (76) M. Eschrig, C. Iniotakis, and Y. Tanaka, arXiv:1001.2486 (77) O. Fischer, M. Kugler, I. Maggio-Aprile, C. Berthod, and C. Renner, Rev. Mod. Phys. 79, 353 (2007) (78) P. A. M. Benistant, H. van Kempen, and P. Wyder, Phys. Rev. Lett. 51, 817 (1983) (79) Y. Takagaki, Phys. Rev. B 57, 4009 (1998) (80) Y. Asano, Phys. Rev. B 61, 1732 (2000) (81) H. Hoppe, U. Zülicke, and G. Schön, Phys. Rev. Lett. 84, 1804 (2000) (82) T. D. Moore and D. A. Williams, Phys. Rev. B 59, 7308 (1999) (83) J. Kačmarčík, P. Szabó, P. Samuely, K. Flachbart, A. Nader, and A. Briggs, Physica B 259-261, 985 (1999) (84) W. K. Park, J. L. Sarrao, J. D. Thompson, L. D. Pham, Z. Fisk, and L. H. Greene, Physica B 403, 731 (2008) (85) W. K. Park, L. D. Pham, A. D. Bianchi, C. Capan, Z. Fisk, and L. H. Greene, J. Phys.: Conf. Ser. 150, 052208 (2009) (86) V. Dmitriev, L. Rybaltchenko, E. Khristenko, L. Ishchenko, Z. Bukowski, and R. Troć, Acta Phys. Polon. A 114, 263 (2008) (87) V. Dmitriev, L. Rybaltchenko, L. Ishchenko, E. Khristenko, Z. Bukowski, and R. Troć, Low Temp. Phys.
33, 1009 (2007) (88) M. Fogelström, W. K. Park, L. H. Greene, G. Goll, and M. J. Graf, Phys. Rev. B 82, 014527 (2010) (89) Additionally, in the aforementioned Fig. 4 of Ref. Park2, antiferromagnetism also plays a role in the presented spectra. (90) I. Sheikin, A. Gröger, S. Raymond, D. Jaccard, D. Aoki, H. Harima, and J. Flouquet, Phys. Rev. B 67, 094420 (2003) (91) M. Takashita, H. Aoki, T. Terashima, S. Uji, K. Maezawa, R. Settai, and Y. Ōnuki, J. Phys. Soc. Jpn. 65, 515 (1996)
Modified gravity as a diagravitational medium José A. R. Cembranos [email protected] Mario Coma Díaz [email protected] Prado Martín-Moruno [email protected] Departamento de Física Teórica and UPARCOS, Universidad Complutense de Madrid, E-28040 Madrid, Spain Abstract In this letter we reflect on the propagation of gravitational waves in alternative theories of gravity, which are typically formulated with extra gravitational degrees of freedom in comparison with General Relativity. We propose to understand that additional structure as forming a diagravitational medium for gravitational waves, characterized by a refractive index. Furthermore, we shall argue that the most general diagravitational medium is associated with an anisotropic dispersion relation. In some situations a refractive index tensor, which takes into account both the deflection of gravitational waves due to the curvature of a non-flat spacetime and the modifications of the general relativistic predictions, can be defined. The most general media, however, entail the consideration of at least two independent tensors. keywords: Modified gravity, gravitational waves, refractive index 1 Introduction Nobody in our community is unaware of the first direct measurement of Gravitational Waves (GWs), announced already two years ago Abbott:2016blz . The confirmation of this general relativistic prediction reinforced the glory of Einstein's theory and our confidence in its power to describe gravitational phenomena. Even so, it did not prevent researchers from continuing to investigate alternative theories of gravity (ATGs), as they may describe the early inflationary period and provide an option to the dark energy paradigm. Moreover, those theories, which typically have more degrees of freedom than General Relativity (GR), also predict gravitational radiation. This radiation, however, has more than two polarizations and propagates in a different way Will:2014kxa .
The tensorial modes correspond to what is properly called the GW. They can have a propagation speed different from the speed of light and from that of the vector or scalar modes. Gravitational radiation offers, therefore, new routes for testing modifications of the predictions of GR or finding additional support for it. In fact, the recent measurement of GWs and their electromagnetic counterpart TheLIGOScientific:2017qsa has evidenced that they propagate at the speed of light in the nearby universe; hence it has allowed us to rule out a large number of ATGs as dark energy mimickers Lombriser:2015sxa ; Lombriser:2016yzn ; Ezquiaga:2017ekz ; Creminelli:2017sry ; Sakstein:2017xjx ; Baker:2017hug ; Akrami:2018yjz . In this context, it is an interesting exercise to reflect on the fundamentals of GWs in ATGs to shed some light on unexplored phenomena in these theories. The propagation of GWs in cosmological spacetimes, as predicted by different promising ATGs, has been thoroughly investigated, see e.g. deRham:2014zqa ; Saltas:2014dha ; Caprini:2018mtu ; Akrami:2018yjz . From those studies and basic knowledge of electrodynamics, it is easy to conclude that the modifications of the general relativistic predictions introduced by ATGs affect the propagation of GWs as if they were moving through a medium, as compared with motion through vacuum, where those modifications are negligible. Therefore, the additional “structure” introduced by ATGs, which alters the dialog between matter and curvature, affects the propagation of GWs producing what we will call a diagravitational medium, in analogy with a dielectric medium in electrodynamics. On the other hand, it is worth mentioning that the deflection of light in curved geometries can be investigated using an effective refractive index that is a $3\times 3$ tensor defined in terms of the metric Boonserm:2004wp .
In a general relativistic context, a scalar effective refractive index has already been defined to describe the deflection of GWs in some studies Szekeres ; Peters ; Giovannini:2015kfa . In this letter we will fully develop the analogy between the propagation of light through a dielectric medium and the propagation of GWs predicted by ATGs, as if they were moving through a diagravitational medium. We will consider that, in the context of ATGs and general backgrounds, the effective refractive index should be a $3\times 3$ tensor with entries depending on the background metric and on the new gravitational terms introduced by the ATG. Developments on the dispersion relation for electromagnetic waves in anisotropic media Itin:2009fj suggest that it is more appropriate to consider two independent tensors characterizing the medium instead of one refractive index. However, we will first work out in detail the simplest scenario, summarizing previous advances on ATGs, to acquire intuition about the problem. As could be expected, in Minkowski space the effective refractive index of GWs can be characterized by a scalar quantity. It should be noted that a first approach including modified gravity effects in the scalar refractive index of highly symmetric backgrounds has recently been considered Giovannini:2018zbf ; Giovannini:2018nkt during the final stage of development of this work (focusing on cosmological scenarios in that case). 2 Propagation through isotropic diagravitational media Let us first consider the propagation of GWs in the simplest scenario. A large number of ATGs predict tensor perturbations that propagate in a Minkowski background according to Saltas:2014dha (the following equation is similar to that presented in reference Saltas:2014dha for cosmological spacetimes, although our definition of $\nu$ is adapted to Minkowski, we have not used a Fourier decomposition for the wave, and we have not (yet) considered bigravity)
$$\ddot{h}_{ij}(t,\vec{x})+\nu\,\dot{h}_{ij}(t,\vec{x})-c_{T}^{2}\vec{\nabla}^{2}h_{ij}(t,\vec{x})+m_{\rm g}^{2}\,h_{ij}(t,\vec{x})=0,$$ (1) where $h_{ij}(t,\vec{x})$ is the transverse and traceless part of the metric perturbations and we use natural units throughout this work. The term $\nu$ takes into account the potential running of the effective Planck mass, $c_{T}$ is the speed of GWs, and $m_{\rm g}$ is the graviton mass ($\nu=0$, $c_{T}=1$, and $m_{\rm g}=0$ in GR). Although $\nu$ and $c_{T}$ typically depend on the additional gravitational degrees of freedom, we assume that they are slowly varying functions over the intervals of interest and, therefore, we take them to have a constant value. The solution of equation (1) can be expressed in terms of the frequency $\omega$ and wave number vector $\vec{k}$ of the wave. We assume that $$k=n\,\omega,$$ (2) with $k=\sqrt{\vec{k}^{2}}$, which is just the standard definition of the refractive index used in electrodynamics. Taking a plane-wave solution of equation (1), we obtain $$n^{2}=\frac{1}{c^{2}_{T}}\left(1+i\frac{\nu}{\omega}-\frac{m_{\rm g}^{2}}{\omega^{2}}\right).$$ (3) This equation is exact in Minkowski space. It can also be applied to propagation over small distances in any curved geometry. In particular, it is valid during the early epochs of our Universe, for which relevant constraints on the speed of GW propagation have not yet been obtained. Now we focus on situations where the ATG implies only slight departures from the general relativistic predictions, that is, $n\simeq 1$. This will certainly be satisfied in the recent Universe Ezquiaga:2017ekz ; Baker:2017hug ; Creminelli:2017sry . Furthermore, we consider that $n$ can be generalized by taking into account the possibility of a modified dispersion relation with the extra term $A\,k^{\alpha}$ Mirshekari:2011yq , where $A$ is a dimensionful constant.
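One can verify equation (3) directly by substituting a plane wave $h_{ij}\propto e^{i(kx-\omega t)}$ with $k=n\,\omega$ back into equation (1); a minimal numerical check, with illustrative parameter values:

```python
import numpy as np

# Check that a plane wave h ~ exp(i(k x - omega t)) with k = n*omega and
# n given by Eq. (3) solves Eq. (1). Parameter values are illustrative.
omega = 2.0
nu, c_T, m_g = 0.1, 0.9, 0.3

n = np.sqrt(1 + 1j * nu / omega - m_g**2 / omega**2) / c_T   # Eq. (3)
k = n * omega                                                # Eq. (2)

# Substituting the plane wave into Eq. (1) gives
# (-omega^2 - i*nu*omega + c_T^2 k^2 + m_g^2) h = 0.
residual = -omega**2 - 1j * nu * omega + c_T**2 * k**2 + m_g**2
assert abs(residual) < 1e-12
```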
Therefore, the effective refractive index is given by $$n=1+i\frac{\nu}{2\,\omega}+(1-c_{T})-\frac{m_{\rm g}^{2}}{2\,\omega^{2}}-\frac{A}{2}\,\omega^{\alpha-2},$$ (4) up to first order in $\nu/\omega$, $1-c_{T}$, $m_{g}^{2}/\omega^{2}$, and $A\,\omega^{\alpha-2}$. So, GWs in ATGs propagate as if they were in a medium formed by the extra gravitational degrees of freedom, whereas GWs in GR are analogous to electromagnetic waves propagating in vacuum. Using this analogy with waves propagating through dielectric materials, we interpret this phenomenon as the presence of a diagravitational medium for GWs. Through this medium the wave has a phase velocity, which is the speed of the phase of the monochromatic GW, and a group velocity, given by $$V_{\rm p}=\frac{1}{n}=c_{T}+\frac{m_{\rm g}^{2}}{2\,\omega^{2}}+\frac{A}{2}\,\omega^{\alpha-2},$$ (5) and $$V_{\rm g}=\frac{{\rm d}\omega}{{\rm d}k}=c_{T}-\frac{m_{\rm g}^{2}}{2\,\omega^{2}}+\frac{(\alpha-1)A}{2}\,\omega^{\alpha-2},$$ (6) respectively. It is easy to understand that the following effects on the propagation of GWs can be obtained: Subluminal speed For $c_{T}\simeq{\rm constant}$, this is the only term among those included in equation (4) that does not induce any spreading of the wave packet. Therefore, it corresponds to the simple case of a wave moving through a non-dispersive medium, that is, $V_{\rm p}=V_{\rm g}$. This modification of the propagation of GWs can be found, for example, when considering scalar-tensor theories with a kinetic coupling to gravity Kobayashi:2011nu . Mass This is the simplest case of a dispersive medium, as it introduces frequency dependence. Therefore, each component of the wavepacket having a different frequency will propagate at a different speed $V_{\rm p}$, whereas the graviton propagates at $V_{{\rm g},0}\equiv V_{\rm g}(\omega_{0})$ if the wavepacket is sharply peaked around $\omega_{0}$.
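Equation (6) can be cross-checked numerically against a dispersion relation of the form $\omega^{2}=c_{T}^{2}k^{2}+m_{\rm g}^{2}+A\,k^{\alpha}$, which reproduces the terms of equation (4) to first order; a sketch with illustrative small parameters:

```python
import numpy as np

# Numerical check of Eq. (6) from the modified dispersion relation
# omega^2 = c_T^2 k^2 + m_g^2 + A k^alpha, with illustrative small
# parameters in natural units.
c_T, m_g, A, alpha = 1.0, 1e-2, 1e-3, 3

def omega(k):
    return np.sqrt(c_T**2 * k**2 + m_g**2 + A * k**alpha)

k0, dk = 1.0, 1e-6
V_g_num = (omega(k0 + dk) - omega(k0 - dk)) / (2 * dk)   # d(omega)/dk

w0 = omega(k0)
# First-order expression of Eq. (6)
V_g_formula = c_T - m_g**2 / (2 * w0**2) + (alpha - 1) * A * w0**(alpha - 2) / 2

# Agreement up to second-order corrections in the small parameters
assert abs(V_g_num - V_g_formula) < 1e-4
```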
Different theories can equip the graviton with a non-vanishing mass, although those avoiding the introduction of ghosts are of particular interest deRham:2014zqa . Attenuation The term $\nu$ implies that the refractive index has a non-vanishing imaginary part. The wave is absorbed by the medium for $\nu>0$ and amplified for $\nu<0$, and the wave packet is spread. In the context of GWs in ATGs, such amplification was first discussed in scalar-tensor theories of gravity Barrow:1993ad . This attenuation seems typically linked to the existence of an effective non-constant gravitational coupling Saltas:2014dha and, on purely formal and dimensional grounds, one could expect that it always has to be based on the variation of some fundamental quantity of the ATG. Lorentz violation Modified dispersion relations of the form $\omega^{2}=k^{2}+A\,k^{\alpha}$, where $\alpha\neq 0,\,2$ and $A\neq 0$, are commonly used to encapsulate the quantum gravitational phenomenology of different theories Mirshekari:2011yq ; Yunes:2016jcc . The term $A\,k^{\alpha}$ produces an effect qualitatively similar to that discussed in the previous paragraph, although for the most common cases with $\alpha>1$ the corrections to the phase and group velocities have the same sign. In that case, one needs $A<0$ to avoid problematic superluminalities. Discussions about how to constrain the group velocity (and, therefore, $A$) of these theories using GWs can be found, for example, in references Yunes:2016jcc ; Sotiriou:2017obf . Birefringence One could further generalize this scenario by considering that each polarization ($h_{\times,+}$) propagates according to a different refractive index ($n_{\times,+}$) and with a different velocity. This effect appears, for example, in ATGs that induce parity violations Alexander:2004wk ; Yunes:2010yf . GW oscillation In ATGs with more than one dynamical metric, the tensor perturbations are coupled.
Linear combinations of those perturbations propagate as if they were in media described by refractive indexes of the form (4), which do not correspond to the gravitational medium in which we measure. This phenomenon, which is similar in nature to neutrino oscillation, has been studied in detail in bigravity Max:2017flc . We expect that any ATG will predict GWs that propagate in Minkowski space as if they were moving through a medium with a refractive index qualitatively similar in form to that discussed here. Thus, the presented arguments should apply to any possible ATG, although the particular frequency dependence of the attenuation term may change for unexplored theories. On the other hand, when considering GWs in more general backgrounds, one should first note that those backgrounds must have two separate scales to allow the definition of GWs. Moreover, in order to consider a refractive index for the diagravitational medium, the symmetry of the spacetime should suggest a natural choice of a time coordinate and, therefore, of a frequency and a wave number vector. For example, it is well known that both conditions are satisfied in cosmological homogeneous and isotropic scenarios Caprini:2018mtu . In these scenarios, the propagation of GWs is also given by equation (1) but with $\nu$ replaced by $2\mathcal{H}+\nu$, with $\mathcal{H}$ being the Hubble rate in conformal time Saltas:2014dha . Hence the diagravitational medium is also isotropic and, therefore, characterized by a scalar effective refractive index of the form given by equation (3) (or equation (4) in the limit $n\simeq 1$) with $\nu\rightarrow 2\mathcal{H}+\nu$. 3 Dispersion in anisotropic media It is known that one can define an effective refractive index $3\times 3$-tensor for electromagnetic waves propagating in curved geometries Boonserm:2004wp .
In order to understand the most general form that such a tensor could have when one also includes additional gravitational degrees of freedom, we can look at the propagation of light through different kinds of materials as a source of inspiration. Propagation of electromagnetic waves through anisotropic dielectric media characterized by two generic permittivity and permeability $3\times 3$-matrices, denoted by $\varepsilon^{ij}$ and $\mu^{ij}$, has been studied in reference Itin:2009fj . There the author obtained a compact dispersion relation for electromagnetic waves in an anisotropic medium by investigating the existence of solutions for the wave propagation system Itin:2009fj $$\chi^{\alpha\beta\gamma\delta}\partial_{\beta}\partial_{\delta}A_{\gamma}=0,$$ (7) where $A_{\gamma}$ is the vector potential of the electromagnetic field strength and $\chi^{\alpha\beta\gamma\delta}$ is the electromagnetic constitutive tensor, which is antisymmetric under permutations of the indexes $\alpha$ and $\beta$, and $\gamma$ and $\delta$; and it is symmetric under permutation of $\beta$ and $\delta$. The non-vanishing components of $\chi^{\alpha\beta\gamma\delta}$ can be expressed in terms of $\varepsilon^{ij}$ and $\mu^{ij}$. That dispersion relation is $$\omega^{4}-2\,k_{i}\,\psi^{i}_{j}\,k^{j}\,\omega^{2}+k_{i}k_{j}\,\gamma^{ij}_{mn}\,k^{m}k^{n}=0,$$ (8) where $$\psi^{ij}=\frac{1}{2}\epsilon^{imn}\epsilon^{jpq}(\varepsilon^{-1})_{nq}(\mu^{-1})_{mp},$$ (9) and $$\gamma^{ijmn}=\frac{\varepsilon^{ij}}{{\rm det}\,\varepsilon}\frac{\mu^{mn}}{{\rm det}\,\mu}.$$ (10) As has been proven in reference Itin:2009fj , one recovers the ordinary dispersion relation $k^{2}=\varepsilon\,\mu\,\omega^{2}$ from (8) in the isotropic case $\varepsilon_{ij}=\varepsilon\,\delta_{ij}$ and $\mu_{ij}=\mu\,\delta_{ij}$. Note that the refractive index is defined in isotropic spaces as $n^{2}=\varepsilon\,\mu$.
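As a consistency check of equations (8)-(10), one can implement the tensor $\psi^{ij}$ of equation (9) numerically and verify that it collapses to $(\varepsilon\mu)^{-1}\delta^{ij}=n^{-2}\delta^{ij}$ in the isotropic case. A possible sketch (illustrative values only):

```python
import numpy as np

def psi_tensor(eps, mu):
    """Eq. (9): psi^{ij} = (1/2) eps^{imn} eps^{jpq} (eps^-1)_{nq} (mu^-1)_{mp}."""
    lc = np.zeros((3, 3, 3))                      # Levi-Civita symbol
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        lc[i, j, k], lc[i, k, j] = 1.0, -1.0
    return 0.5 * np.einsum('imn,jpq,nq,mp->ij', lc, lc,
                           np.linalg.inv(eps), np.linalg.inv(mu))

# Isotropic medium: psi collapses to (eps*mu)^{-1} * identity = n^{-2} * identity.
eps0, mu0 = 2.0, 3.0
psi = psi_tensor(eps0 * np.eye(3), mu0 * np.eye(3))
assert np.allclose(psi, np.eye(3) / (eps0 * mu0))
```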
Nevertheless, the naive definition of a refractive index tensor based on square-root matrices, that is $n^{ij}n_{jm}=\varepsilon^{ij}\mu_{jm}$, would not in general allow us to simplify the dispersion relation (8) so that it depends only on this $n_{ij}$. Now let us go back to the study of GWs predicted by ATGs and propagating in curved background geometries. We can consider the case in which the propagation equation for GWs can be written as $$\Theta^{\alpha\beta\gamma\delta\mu\nu}\partial_{\gamma}\partial_{\delta}h_{\mu\nu}=0,$$ (11) where we have introduced the gravitational constitutive tensor $\Theta^{\alpha\beta\gamma\delta\mu\nu}$, which characterizes the gravitational theory in a particular background environment. It is symmetric under permutation of the indexes $\alpha$ and $\beta$; $\gamma$ and $\delta$; $\mu$ and $\nu$; and under interchange of the pairs $\alpha,\,\beta$ and $\mu,\,\nu$ (see the discussion at the end of this section). We are interested in physically non-trivial solutions of this linear system under conditions analogous to the standard geometric optics approximation applied to gravitational radiation (gravito-optics). As in the simpler case discussed in the previous section, we will assume that the entries of the constitutive tensor vary slowly relative to the metric perturbations. Equation (11) strongly resembles the source-free wave equation (7), so one could expect that a solution of this equation exists for $$\omega^{4}-2\,k_{i}\,{{\Psi}^{i}}_{j}\,k^{j}\omega^{2}+k_{i}k_{j}\,{\Gamma^{ij}}_{mn}\,k^{m}k^{n}=0,$$ (12) where $\Psi^{i}{}_{j}$ and ${\Gamma^{ij}}_{mn}$ are determined by the constitutive tensor and are in general two independent tensors characterizing the medium. However, in the case that $\Psi^{i}{}_{j}$ and ${\Gamma^{ij}}_{mn}$ are not independent, a unique refractive index tensor can encapsulate the physics of our diagravitational medium.
For example, when ${\Gamma^{ij}}_{mn}=\Psi^{i}{}_{m}\Psi^{j}{}_{n}$, the dispersion relation (12) reduces to the quadratic equation $$k_{i}\,(n^{-2})^{i}{}_{j}\,k^{j}=\omega^{2},$$ (13) where we have defined $(n^{-2})^{i}{}_{j}\equiv\Psi^{i}{}_{j}$. In the isotropic case, where $n^{i}{}_{j}=n\,\delta^{i}{}_{j}$, we recover equation (2). The picture we have just discussed may seem very general. Nevertheless, it should be noted that equation (11) and, therefore, equation (12) cannot describe all the effects discussed in the previous section. A possible extension of equation (11) is $$\left[\Theta^{\alpha\beta\gamma\delta\mu\nu}\partial_{\gamma}\partial_{\delta}+\Phi^{\alpha\beta\gamma\mu\nu}\partial_{\gamma}+\Sigma^{\alpha\beta\mu\nu}\right]\,h_{\mu\nu}=0,$$ (14) which includes two additional constitutive tensors: the attenuation tensor $\Phi^{\alpha\beta\gamma\mu\nu}$ and the mass tensor $\Sigma^{\alpha\beta\mu\nu}$. The attenuation term appears naturally when one replaces partial derivatives with covariant derivatives in equation (11). It is worth noting that it corresponds to a genuine attenuation only for temporal derivatives. For spatial derivatives one would have a contribution of a different nature; thus, this term can break the commonly assumed reflection invariance of the dispersion relation. The origin of the mass term can be traced back to a quadratic term for $h$ in the perturbed Lagrangian.
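To see why the reduction works, note that with $\Gamma^{ij}{}_{mn}=\Psi^{i}{}_{m}\Psi^{j}{}_{n}$ the quartic (12) becomes a perfect square, $(\omega^{2}-k_{i}\Psi^{i}{}_{j}k^{j})^{2}=0$, which is exactly equation (13). A quick numerical check with an arbitrary symmetric $\Psi$ (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
Psi = M @ M.T + 3 * np.eye(3)   # a generic symmetric Psi, playing the role of n^{-2}
k = rng.normal(size=3)

kPk = k @ Psi @ k
for omega2 in (0.5, kPk, 10.0):
    # Left-hand side of (12) with Gamma^{ij}_{mn} = Psi^i_m Psi^j_n ...
    quartic = omega2**2 - 2 * kPk * omega2 + np.einsum('i,j,im,jn,m,n->',
                                                       k, k, Psi, Psi, k, k)
    # ... equals the perfect square (omega^2 - k Psi k)^2.
    assert np.isclose(quartic, (omega2 - kPk)**2)
```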
That is, $$L=-\frac{1}{2}\partial_{\gamma}h_{\alpha\beta}\,\Theta^{\alpha\beta\gamma\delta\mu\nu}\,\partial_{\delta}h_{\mu\nu}+\frac{1}{2}h_{\alpha\beta}\,\Phi^{\alpha\beta\gamma\mu\nu}\partial_{\gamma}h_{\mu\nu}+\frac{1}{2}h_{\alpha\beta}\,\Sigma^{\alpha\beta\mu\nu}h_{\mu\nu},$$ (15) where we have already discussed the symmetry properties of $\Theta^{\alpha\beta\gamma\delta\mu\nu}$; $\Phi^{\alpha\beta\gamma\mu\nu}$ is antisymmetric under the interchange of the pair $\alpha,\,\beta$ with $\mu,\,\nu$; and $\Sigma^{\alpha\beta\mu\nu}$ is symmetric under that interchange of indexes. We have assumed, on one hand, that all the entries of the tensors are slowly varying functions in the interval of interest and, on the other hand, that the Lagrangian (15) can be obtained from a covariant Lagrangian of an ATG expanded to second order in perturbations. 4 Discussion The development of the field of gravito-optics may allow us to unveil unexplored characteristics of ATGs. In that context, as we have shown, we can draw on our experience dealing with the propagation of electromagnetic waves through dielectric materials to better understand the behaviour of GWs propagating in curved background geometries. Moreover, developments in the understanding of new materials could even suggest the formulation of novel theories. We have to stress that throughout this work we have assumed that the relevant parameters appearing in the propagation equation, or the entries of the constitutive tensors, can be treated as constants. The constitutive equation describes propagation through the diagravitational medium defined with respect to the vacuum case in GR; thus, the constitutive tensors have an emergent nature. Going beyond the intervals where this approximation can be assumed will complicate the treatment.
However, one can envisage situations in which the different spacetime intervals where this approximation is appropriate can be described as disjoint patches, forming a patchwork of diagravitational regions characterized by different effective refractive indexes. Finally, it is interesting to emphasize that an arbitrary ATG can support not only tensorial degrees of freedom, but also scalar and vector ones. The vector and tensorial modes will have different polarizations, and each of these polarizations for each mode can have its own associated mass, attenuation, and Lorentz-violating terms and, therefore, its respective anisotropic gravitational refractive index tensor. This tensor is not required to be symmetric nor even invertible. Indeed, for electromagnetic radiation, negative permittivities are associated with metamaterials; non-symmetric tensors describe external-field effects; and non-invertible tensors have been used to describe perfect conductors Lindell:2005 . Analogous discussions may apply to the phenomenology of GWs predicted by ATGs propagating in general backgrounds. Acknowledgments This work is supported by the project FIS2016-78859-P (AEI/FEDER, UE). PMM gratefully acknowledges financial support provided through the Research Award L’Oréal-UNESCO FWIS (XII Spanish edition). References (1) B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], “Observation of Gravitational Waves from a Binary Black Hole Merger”, Phys. Rev. Lett. 116 (2016) no.6, 061102 doi:10.1103/PhysRevLett.116.061102 [arXiv:1602.03837 [gr-qc]]. (2) C. M. Will, “The Confrontation between General Relativity and Experiment”, Living Rev. Rel. 17 (2014) 4 doi:10.12942/lrr-2014-4 [arXiv:1403.7377 [gr-qc]]. (3) B. P. Abbott et al. [LIGO Scientific and Virgo Collaborations], “GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral”, Phys. Rev. Lett.
119 (2017) no.16, 161101 doi:10.1103/PhysRevLett.119.161101 [arXiv:1710.05832 [gr-qc]]. (4) L. Lombriser and A. Taylor, “Breaking a Dark Degeneracy with Gravitational Waves”, JCAP 1603 (2016) no.03, 031 doi:10.1088/1475-7516/2016/03/031 [arXiv:1509.08458 [astro-ph.CO]]. (5) L. Lombriser and N. A. Lima, “Challenges to Self-Acceleration in Modified Gravity from Gravitational Waves and Large-Scale Structure”, Phys. Lett. B 765 (2017) 382 doi:10.1016/j.physletb.2016.12.048 [arXiv:1602.07670 [astro-ph.CO]]. (6) J. M. Ezquiaga and M. Zumalacárregui, “Dark Energy After GW170817: Dead Ends and the Road Ahead”, Phys. Rev. Lett.  119 (2017) no.25, 251304 doi:10.1103/PhysRevLett.119.251304 [arXiv:1710.05901 [astro-ph.CO]]. (7) P. Creminelli and F. Vernizzi, “Dark Energy after GW170817 and GRB170817A”, Phys. Rev. Lett.  119 (2017) no.25, 251302 doi:10.1103/PhysRevLett.119.251302 [arXiv:1710.05877 [astro-ph.CO]]. (8) J. Sakstein and B. Jain, “Implications of the Neutron Star Merger GW170817 for Cosmological Scalar-Tensor Theories”, Phys. Rev. Lett.  119 (2017) no.25, 251303 doi:10.1103/PhysRevLett.119.251303 [arXiv:1710.05893 [astro-ph.CO]]. (9) T. Baker, E. Bellini, P. G. Ferreira, M. Lagos, J. Noller and I. Sawicki, “Strong constraints on cosmological gravity from GW170817 and GRB 170817A”, Phys. Rev. Lett.  119 (2017) no.25, 251301 doi:10.1103/PhysRevLett.119.251301 [arXiv:1710.06394 [astro-ph.CO]]. (10) Y. Akrami, P. Brax, A. C. Davis and V. Vardanyan, “Neutron star merger GW170817 strongly constrains doubly-coupled bigravity”, arXiv:1803.09726 [astro-ph.CO]. (11) C. de Rham, “Massive Gravity”, Living Rev. Rel.  17 (2014) 7 doi:10.12942/lrr-2014-7 [arXiv:1401.4173 [hep-th]]. (12) I. D. Saltas, I. Sawicki, L. Amendola and M. Kunz, “Anisotropic Stress as a Signature of Nonstandard Propagation of Gravitational Waves”, Phys. Rev. Lett.  113 (2014) no.19, 191101 doi:10.1103/PhysRevLett.113.191101 [arXiv:1406.7139 [astro-ph.CO]]. (13) C. Caprini and D. G. 
Figueroa, “Cosmological Backgrounds of Gravitational Waves”, arXiv:1801.04268 [astro-ph.CO]. (14) P. Boonserm, C. Cattoen, T. Faber, M. Visser and S. Weinfurtner, “Effective refractive index tensor for weak field gravity”, Class. Quant. Grav.  22 (2005) 1905 doi:10.1088/0264-9381/22/11/001 [gr-qc/0411034]. (15) P. Szekeres, “Linearized gravitation theory in macroscopic media”, Annals Phys. 64, 599 (1971). (16) P. C. Peters, “Index of refraction for scalar, electromagnetic and gravitational waves in weak gravitational field”, Phys. Rev. D 9 (1974) 2207. (17) M. Giovannini, “The refractive index of relic gravitons”, Class. Quant. Grav.  33 (2016) no.12, 125002 doi:10.1088/0264-9381/33/12/125002 [arXiv:1507.03456 [astro-ph.CO]]. (18) Y. Itin, “Dispersion relation for anisotropic media”, Phys. Lett. A 374 (2010) 1113 doi:10.1016/j.physleta.2009.12.071 [arXiv:0908.0922 [physics.class-ph]]. (19) M. Giovannini, “The propagating speed of relic gravitational waves and their refractive index during inflation”, arXiv:1803.05203 [gr-qc]. (20) M. Giovannini, “Blue and violet graviton spectra from a dynamical refractive index”, arXiv:1805.08142 [astro-ph.CO]. (21) S. Mirshekari, N. Yunes and C. M. Will, “Constraining Generic Lorentz Violation and the Speed of the Graviton with Gravitational Waves”, Phys. Rev. D 85 (2012) 024041 doi:10.1103/PhysRevD.85.024041 [arXiv:1110.2720 [gr-qc]]. (22) T. Kobayashi, M. Yamaguchi and J. Yokoyama, “Generalized G-inflation: Inflation with the most general second-order field equations”, Prog. Theor. Phys.  126 (2011) 511 doi:10.1143/PTP.126.511 [arXiv:1105.5723 [hep-th]]. (23) J. D. Barrow, J. P. Mimoso and M. R. de Garcia Maia, “Amplification of gravitational waves in scalar - tensor theories of gravity”, Phys. Rev. D 48 (1993) 3630 Erratum: [Phys. Rev. D 51 (1995) 5967]. doi:10.1103/PhysRevD.48.3630, 10.1103/PhysRevD.51.5967 (24) N. Yunes, K. Yagi and F. 
Pretorius, “Theoretical Physics Implications of the Binary Black-Hole Mergers GW150914 and GW151226”, Phys. Rev. D 94 (2016) no.8, 084002 doi:10.1103/PhysRevD.94.084002 [arXiv:1603.08955 [gr-qc]]. (25) T. P. Sotiriou, “Detecting Lorentz Violations with Gravitational Waves from Black Hole Binaries”, Phys. Rev. Lett.  120 (2018) no.4, 041104 doi:10.1103/PhysRevLett.120.041104 [arXiv:1709.00940 [gr-qc]]. (26) S. Alexander and J. Martin, “Birefringent gravitational waves and the consistency check of inflation”, Phys. Rev. D 71 (2005) 063526 doi:10.1103/PhysRevD.71.063526 [hep-th/0410230]. (27) N. Yunes, R. O’Shaughnessy, B. J. Owen and S. Alexander, “Testing gravitational parity violation with coincident gravitational waves and short gamma-ray bursts”, Phys. Rev. D 82 (2010) 064017 doi:10.1103/PhysRevD.82.064017 [arXiv:1005.3310 [gr-qc]]. (28) K. Max, M. Platscher and J. Smirnov, “Gravitational Wave Oscillations in Bigravity”, Phys. Rev. Lett.  119 (2017) no.11, 111101 doi:10.1103/PhysRevLett.119.111101 [arXiv:1703.07785 [gr-qc]]. (29) I. V. Lindell, A. Sihvola, “Perfect electromagnetic conductor”, Journal of Electromagnetic Waves and Applications 19,7 (2005) 861-869, [arXiv:physics/0503232 [physics.class-ph]].
Modeling the COVID-19 pandemic: A primer and overview of mathematical epidemiology Fernando Saldaña [email protected] Jorge X. Velasco-Hernández [email protected] Instituto de Matemáticas, Campus Juriquilla, 76230, Universidad Nacional Autónoma de México, Querétaro, Mexico Abstract Since the start of the still ongoing COVID-19 pandemic, there have been many modeling efforts to assess several issues of importance to public health. In this work, we review the theory behind some important mathematical models that have been used to answer questions raised by the development of the pandemic. We start by revisiting the basic properties of simple Kermack-McKendrick type models. Then, we discuss extensions of such models and important epidemiological quantities applied to investigate the role of heterogeneity in disease transmission, e.g. mixing functions and superspreading events, the impact of non-pharmaceutical interventions in the control of the pandemic, vaccine deployment, herd immunity, viral evolution, and the possibility of vaccine escape. From the perspective of mathematical epidemiology, we highlight the important properties, findings, and, of course, deficiencies, that all these models have. keywords: Mathematical modeling, COVID-19, SARS-CoV-2, Epidemic model, $R_{0}$, Kermack-McKendrick models 1 Introduction According to the World Health Organization (WHO), the COVID-19 pandemic has caused a dramatic loss of human life and presents an unprecedented challenge to public health. The pandemic has also disrupted the global economy, the food system, education, employment, tourism, and several other aspects of life. In response to the COVID-19 crisis, the scientific community has acted fast to better understand the epidemiological, biological, immunological, and virological aspects of SARS-CoV-2.
Mathematical models have played a significant role in supporting public health preparedness and response efforts against the ongoing COVID-19 pandemic Acuña-Zegarra et al., (2020); Aguiar et al., (2020); Aleta et al., (2020); Althouse et al., (2020); Angulo et al., (2021); Castro and Singer, (2021); Contreras et al., (2020); Day et al., (2020); De la Sen and Ibeas, (2021); Endo et al., (2020); Flaxman et al., (2020); Fontanet et al., (2021); Ganyani et al., (2020); Goldstein et al., (2021); Kain et al., (2021); Kochańczyk et al., (2020); Mena et al., (2020); Saad-Roy et al., (2021); Saldaña et al., (2020); Santamaría-Holek and Castaño, (2020); Santana-Cibrian et al., (2020). From the start of the pandemic, modelers have attempted to forecast the spread of COVID-19 in terms of the expected number of infections, deaths, hospital beds, intensive care units, and other health-care resources. However, although models are useful in many ways, their predictions rest on a set of hypotheses, both mathematical and epidemiological, about two complex, evolving biological entities: populations of hosts and of pathogens. Therefore, simulation models are far from perfect and their forecasts, predictions, and scenarios should be taken cautiously Eker, (2020); Thompson et al., (2020). One group of models has played a particularly significant role in the COVID-19 epidemic: the so-called compartmental models, either deterministic or stochastic, which subdivide a given human population into sets of individuals distinguished by their disease status. So we can, for example, have susceptible, infected, recovered, and immune hosts, where each of these classes can be further subdivided as necessary. In the COVID-19 pandemic it has been useful, for example, to consider several classes of infectious individuals, such as confirmed, asymptomatically infected, symptomatically infected, isolated, and so on.
Infection occurs when an infectious individual comes into contact with a susceptible one and pathogen transmission ensues. In infectious disease models, perhaps the most important component is the one that describes this infection process, the so-called mixing function. The mixing function introduces one of the basic properties of epidemic systems: heterogeneity. This heterogeneity is expressed in the different ages, tastes, activities, and other sexual, behavioral, social, genetic, and physiological traits that define an individual and a group of individuals in a population. We do not mix randomly, because we tend to mix and interact with people who are like us in some particular, specific way. The evolution of the present pandemic has been driven mainly by heterogeneity. All around the world, mitigation measures were implemented to control disease spread. The level of enforcement of and compliance with such measures, however, varied widely across the globe. Moreover, at times when mitigation measures were partially relaxed, flare-ups and secondary or tertiary outbreaks have occurred. Many have been associated with so-called superspreading events. These are characterized as periods of time in which large numbers of individuals congregate in close contact, increasing the average transmission. These events then become foci of infection when individuals return to their homes or communities and start, by doing so, a new wave of local disease transmission. The understanding and modeling of superspreading events constitute a present challenge of significant importance for the control of disease spread Althouse et al., (2020); Endo et al., (2020); Hébert-Dufresne et al., (2020); Liu et al., (2020); Kain et al., (2021); Kochańczyk et al., (2020). At the beginning of the pandemic, given the lack of treatments or effective vaccines, we relied on the implementation of non-pharmaceutical interventions to prevent disease spread.
These types of measures can be rather effective in diminishing transmission but largely rely on individual customs, beliefs, and education. Vaccines, on the other hand, available since the beginning of 2021, present problems of their own. One is the interplay between coverage, efficacy, and design. The vaccines that are on the market as this work is being written were designed for the viral variants prevalent in the first six months of the epidemic. Now we have several new variants (that, curiously, arose in the countries where the epidemic has been worst: the UK, USA, and Brazil) that have higher transmissibility and the same type of mutations, facts that present worrisome perspectives regarding the possibility of virus variants evolving to escape the action of vaccines. Coverage is a problem too, since vaccine supply has been limited. Without sufficient coverage, the epidemic will linger for many more months with the consequent burden on the economy and public health of many countries. Mathematical tools are being used to provide criteria for vaccine deployment and roll-out and to assess their impact on epidemic evolution. Finally, a problem of interest where modeling is necessary is the analysis of syndemic diseases, that is, diseases that co-circulate in the same region, time, and populations. As an example, we have the interaction between influenza and COVID-19, which so far has turned out rather benign, since the mitigation measures that only reduced the prevalence of SARS-CoV-2 practically eliminated influenza in the season October 2020-March 2021 in the Northern hemisphere Acuña Zegarra et al., (2021). The main objective of this work is to review particular models that have been or are being used to address the list of problems commented upon in this Introduction. We will highlight the most important properties, findings, and, of course, deficiencies, that all these models have.
The perspective is that of two mathematical epidemiologists involved in the public health application of models. We hope the approach and perspective will be of interest to a general mathematical audience. 2 Kermack-McKendrick models and key epidemiological parameters The classical work of Kermack & McKendrick published in 1927 Kermack and McKendrick, (1927) is a milestone in the mathematical modeling of infectious diseases. The model introduced in Kermack and McKendrick, (1927) is a complex age-of-infection model that involves integro-differential equations and contains the now very famous compartmental SIR model as a special case: $$\dot{S}=-\beta SI,\qquad\dot{I}=\beta SI-\gamma I,\qquad\dot{R}=\gamma I.$$ (1) The variables $S$, $I$, $R$ in system (1) represent the number of susceptible, infectious and recovered individuals in the constant total population $N=S+I+R$. The parameter $\gamma$ is the recovery rate and $\beta=c\phi$ is the effective contact rate, which is the product of the number of contacts $c$ and the probability $\phi$ of infection given a contact. A well-known result for system (1) is that the mean infectious period is $$\int_{0}^{\infty}\gamma t\exp(-\gamma t)dt=\dfrac{1}{\gamma}$$ (2) and therefore the residence times in the infectious state are exponentially distributed. In other words, the probability of recovery per unit of time is constant, regardless of the time elapsed since infection Vergu et al., (2010). Using (2), we can write the basic reproduction number as $$\mathcal{R}_{0}=\dfrac{\beta}{\gamma}S_{0}.$$ (3) $\mathcal{R}_{0}$ measures the expected number of infections generated by a single (and typical) infected individual during his/her entire infectious period ($1/\gamma$) in a population where all individuals are susceptible to infection ($\beta S_{0}=\beta N$).
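A minimal numerical sketch (forward Euler, illustrative parameter values) of system (1): with $\beta S_{0}/\gamma=3$ the infectious class initially grows exponentially at rate $\beta S_{0}-\gamma=\gamma(\mathcal{R}_{0}-1)$, as follows from $\dot{I}=(\beta S-\gamma)I$ with $S\approx S_{0}$.

```python
import math

def sir_step(S, I, beta, gamma, dt):
    """One forward-Euler step of the SIR system (1); R follows from N = S + I + R."""
    return S - dt * beta * S * I, I + dt * (beta * S * I - gamma * I)

beta, gamma, S0, I0 = 0.3 / 1000.0, 0.1, 1000.0, 1.0
R0 = beta * S0 / gamma              # eq. (3): R0 = 3 here
r_theory = gamma * (R0 - 1)         # early epidemic growth rate beta*S0 - gamma

S, I, dt, T = S0, I0, 0.001, 1.0
for _ in range(int(T / dt)):
    S, I = sir_step(S, I, beta, gamma, dt)
r_measured = math.log(I / I0) / T   # measured while S stays close to S0
assert abs(r_measured - r_theory) < 0.02
```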
In the early phase of the outbreak, when $S\approx S_{0}$, $$\dot{I}=\gamma(\mathcal{R}_{0}-1)I,\quad\text{and}\quad I(t)=I(0)\exp\left(\gamma(\mathcal{R}_{0}-1)t\right)$$ (4) so the infectious class grows initially if $\mathcal{R}_{0}>1$ and goes to zero otherwise. Furthermore, $r=\gamma(\mathcal{R}_{0}-1)$ is the growth rate of the epidemic and $$\mathcal{R}_{0}=1+\dfrac{r}{\gamma}.$$ (5) Observe that formula (5) provides a natural method to compute $\mathcal{R}_{0}$ without the need to estimate the initial susceptible population $S_{0}$. Another fundamental property of the SIR model (1) is that $$\lim_{t\rightarrow\infty}S(t)=\lim_{t\rightarrow\infty}S(0)\exp\left(-\beta\int_{0}^{t}I(s)ds\right)=S_{\infty}>0,\quad\lim_{t\rightarrow\infty}I(t)=0$$ (6) so the outbreak will end leaving susceptible individuals who escape infection. Moreover, direct computations allow us to obtain a relationship between the total number of cases $S(0)-S_{\infty}$ and the cumulative quantity $$C(t)=-(\log S(t)-\log S_{0})=\beta\int_{0}^{t}I(s)ds$$ (7) and $$C(\infty)=\lim_{t\rightarrow\infty}C(t)=\log\dfrac{S_{0}}{S_{\infty}}=\beta\int_{0}^{\infty}I(s)ds.$$ (8) Since the number of individuals who recover over the whole outbreak is $\gamma\int_{0}^{\infty}I(s)ds=(\gamma/\beta)C(\infty)$, we obtain the final size equation $$S(0)\exp\left(-C(\infty)\right)=S(0)+I(0)-\dfrac{S(0)}{\mathcal{R}_{0}}C(\infty)$$ (9) that allows us to estimate the expected number of total cases given $\mathcal{R}_{0}$ and the size of the initial susceptible population. It is important to remark that compartmental Kermack-McKendrick-type models rely on specific assumptions that should be taken with care when the models are applied to real-life problems such as modeling the COVID-19 pandemic. For example, from (2) we deduce that the SIR model assumes that the time during which an infectious individual can transmit the disease coincides with the exponentially distributed recovery period.
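The final size equation is transcendental in $S_{\infty}$ but easy to solve iteratively. The sketch below (an illustration, with the population normalized to 1) iterates $S_{\infty}\mapsto S_{0}\exp[-(\mathcal{R}_{0}/S_{0})(N-S_{\infty})]$, which follows from combining $\log(S_{0}/S_{\infty})=\beta\int_{0}^{\infty}I\,ds$ with the conservation of $N$:

```python
import math

def final_size(R0, S0, I0, tol=1e-12):
    """Fixed-point iteration for S_inf: log(S0/S_inf) = (R0/S0) * (S0 + I0 - S_inf)."""
    N, S_inf = S0 + I0, 0.5 * S0
    for _ in range(1000):
        nxt = S0 * math.exp(-(R0 / S0) * (N - S_inf))
        if abs(nxt - S_inf) < tol:
            break
        S_inf = nxt
    return S_inf

# With R0 = 2, roughly 80% of a (normalized) population is eventually infected.
S_inf = final_size(R0=2.0, S0=1.0, I0=1e-6)
attack_rate = 1.0 - S_inf
assert 0.79 < attack_rate < 0.81
```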
This is an acceptable approximation for some diseases but, for others, especially those whose recovery time is long, it is not. In general, the capacity of infectious individuals to infect another person will depend on the age of infection Kermack and McKendrick, (1927). Let $i(a,t)da$ be the density of infectious individuals at time $t$ with an age of infection between $a$ and $a+da$; the evolution of this population is governed by the McKendrick-von Foerster equation $$\dfrac{\partial}{\partial t}i(a,t)+\dfrac{\partial}{\partial a}i(a,t)+\gamma(a)i(a,t)=0,$$ (10) where the recovery rate now depends on the age of infection $a$. The boundary and initial conditions are $$i(a,0)=\phi(a),\quad i(0,t)=B(t)=\int_{0}^{\infty}\beta(a)i(a,t)da,$$ (11) where $\beta(a)$ is the effective contact rate as a function of the age of infection and $\phi(a)$ is the initial age-of-infection distribution. For the simple case in which the solution of (10) is separable, i.e. $i(a,t)=T(t)A(a)$, one can see that (10) has a solution if there is a unique $r\in\mathbb{R}$ that satisfies the characteristic equation $$\int_{0}^{\infty}e^{-ra}\beta(a)e^{\left(-\int_{0}^{a}\gamma(s)ds\right)}da=1.$$ (12) The parameter $r$ is now interpreted as the intrinsic growth rate of the epidemic and is directly related to the epidemic doubling time, the amount of time in which the cumulative incidence doubles, $T_{d}=\log 2/r$. If the doubling time increases, then transmission is decreasing. To obtain the basic reproduction number, we need to look at the expression $$K(a)=\beta(a)e^{-\int_{0}^{a}\gamma(s)ds}$$ (13) which is the product of the contact rate and the probability of remaining infectious at the age of infection $a$.
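For constant rates $\beta(a)=\beta$ (absorbing $S_{0}$) and $\gamma(a)=\gamma$, the characteristic equation (12) reduces to $\beta/(r+\gamma)=1$, i.e. $r=\beta-\gamma=\gamma(\mathcal{R}_{0}-1)$. A numerical sketch (illustrative values) that recovers this root by bisection:

```python
import math

BETA, GAMMA = 0.3, 0.1   # constant contact and recovery rates (illustrative)

def characteristic_lhs(r, da=0.01, a_max=100.0):
    """Left-hand side of eq. (12): int_0^inf e^{-ra} beta e^{-gamma a} da (Riemann sum)."""
    return sum(math.exp(-(r + GAMMA) * (i * da)) * BETA * da
               for i in range(int(a_max / da)))

# The LHS is decreasing in r; bisect for the unique root of LHS(r) = 1.
lo, hi = 0.0, 1.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if characteristic_lhs(mid) > 1.0 else (lo, mid)
r = 0.5 * (lo + hi)
assert abs(r - (BETA - GAMMA)) < 2e-3   # root at r = beta - gamma = 0.2
```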
Hence, $K(a)$ gives the number of secondary infections generated by an individual with age of infection $a$, and $$\mathcal{R}_{0}=\int_{0}^{\infty}K(a)da.$$ (14) In the SIR model (1), the probability of being infectious at age of infection $a$ is exponential, and the transmission rate at the beginning of the epidemic, $\beta S_{0}$, is independent of the age of infection, thus $$\mathcal{R}_{0}=\int_{0}^{\infty}K(a)da=\beta S_{0}\int_{0}^{\infty}e^{-\gamma a}da=\dfrac{\beta}{\gamma}S_{0}$$ (15) and we recover the expression (3). The function $K(a)$, once normalized, has also been called the distribution of the generation interval. This is one of the fundamental concepts for the computation of $\mathcal{R}_{0}$. However, it has been largely neglected in the mathematical community due to the almost exclusive attention to the computation of reproduction numbers based on compartmental epidemic models. Estimates of the reproduction number together with the generation interval distribution can provide insight into the speed with which a disease will spread, as shown in Ganyani et al., (2020) for the case of COVID-19. The density of the generation interval is defined as $$G(a)=\dfrac{K(a)}{\mathcal{R}_{0}},$$ (16) clearly, $G(a)\geq 0$ and $\int_{0}^{\infty}G(a)da=1$, so $G$ is a probability density. The function $G(a)$ describes the time between the infection of a primary case and one of its secondary cases Kenah et al., (2008); Wallinga and Lipsitch, (2007). In other words, the generation interval is the age of infection that separates the infector from the infectee.
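These definitions are easy to check numerically for the SIR kernel $K(a)=\beta S_{0}e^{-\gamma a}$: the integral (14) gives $\mathcal{R}_{0}=\beta S_{0}/\gamma$, the density (16) integrates to one, and its mean is the mean infectious period $1/\gamma$. A sketch with illustrative values:

```python
import math

beta_S0, gamma = 0.3, 0.1                                    # illustrative rates
da, a_max = 0.01, 200.0
ages = [i * da for i in range(int(a_max / da))]
K = [beta_S0 * math.exp(-gamma * a) for a in ages]           # kernel (13)
R0 = sum(k * da for k in K)                                  # eq. (14): beta*S0/gamma = 3
G = [k / R0 for k in K]                                      # generation-interval density (16)
mean_gi = sum(a * g * da for a, g in zip(ages, G))           # mean generation interval

assert abs(R0 - 3.0) < 0.01
assert abs(sum(g * da for g in G) - 1.0) < 1e-9              # G integrates to one
assert abs(mean_gi - 1.0 / gamma) < 0.05                     # mean is 1/gamma = 10
```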
From (12) and the definition of $G(a)$, we obtain that the inverse of $\mathcal{R}_{0}$ is the result of the Laplace transform, or moment-generating function, of the distribution of the generational interval evaluated at the intrinsic growth rate $$M(r)=\int_{0}^{\infty}e^{-ra}G(a)da=\dfrac{1}{\mathcal{R}_{0}}.$$ (17) The generation interval of an epidemic outbreak is calculated directly from incidence data, using the symptom onset date as an approximation to the contagion date, and making extensive use of screening and contact tracing methods Ganyani et al., (2020); Nishiura et al., (2020); Rai et al., (2020). This approximation is known as the serial interval. We must remark that the mean of the distribution for the serial or generational interval does not necessarily coincide with the infectious period, which is only an approximation particular to Kermack-McKendrick type models. Using (17), we can calculate $\mathcal{R}_{0}$ for different distributions of the serial interval Wallinga and Lipsitch, (2007). For example, let $T_{g}$ be the mean serial interval; a straightforward computation allows us to see that for the exponential distribution $G(a)=\gamma\exp(-\gamma a)$ we obtain $$\mathcal{R}_{0}=\dfrac{1}{M(r)}=1+\dfrac{r}{\gamma}$$ (18) where $T_{g}=1/\gamma$ and again we deduce (5). For a normal distribution we have $$G(a)=\dfrac{1}{\sqrt{2\pi}\sigma}e^{-(a-1/\gamma)^{2}/2\sigma^{2}},\quad\mathcal{R}_{0}=\dfrac{1}{M(r)}=e^{r/\gamma-\sigma^{2}r^{2}/2}$$ (19) so, in this case, an increase in the variance decreases the value of the basic reproduction number. For new emerging diseases such as COVID-19, the natural way to obtain $\mathcal{R}_{0}$ is from the observed initial growth rate $r$ of the epidemic.
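Relations (17)-(19) translate directly into code. The snippet below is a sketch (the numerical values used to exercise it are illustrative) that evaluates $\mathcal{R}_{0}=1/M(r)$ for the exponential and normal serial-interval distributions.

```python
import math

def r0_exponential(r, Tg):
    """Eq. (18): exponentially distributed serial interval
       with mean Tg = 1/gamma."""
    return 1.0 + r * Tg

def r0_normal(r, Tg, sigma):
    """Eq. (19): normally distributed serial interval with
       mean Tg = 1/gamma and standard deviation sigma."""
    return math.exp(r * Tg - 0.5 * (sigma * r) ** 2)
```

For a fixed growth rate $r$, increasing $\sigma$ in the normal case lowers the estimate of $\mathcal{R}_{0}$, matching the remark after (19).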
However, as we have shown, the equation relating these two parameters depends on the distribution of the generation interval, so one must be careful: this distribution must be known before applying a given relationship to the infection under study Gostic et al., (2020); Wallinga and Lipsitch, (2007). 3 Mixing functions and disease spread The SIR model (1) is one of the simplest that can be written for a communicable disease. As explained before, it makes the overly simplifying assumption that the mass action law, expressed in the product $SI$, is a good approximation of the mixing of the population that results in infectious contacts. However, this hypothesis is only a very rough approximation to what really happens. To illustrate this, consider the spread of an $SI$ disease in a population subdivided into $n$ disjoint groups, first presented in Lajmanovich and Yorke, (1976). Each group is homogeneous in itself, meaning that the recovery rates are the same for all the individuals belonging to that group, and contacts between individuals are a function only of the groups they belong to. Define as $\beta_{ij}$ the effective contact rate of the susceptible individuals in group $i$ with infectious individuals in group $j$, $N_{i}$ the total population size of group $i$, and $\gamma_{i}$ the recovery rate of infected individuals in group $i$. Another hypothesis is that $\beta_{ij}=\beta_{ji}$, so contacts are symmetric, although this may not always be true. We will show a more general condition on the contact rates later in this section. Note that since $S_{i}=N_{i}-I_{i}$ we need only write the equations for the infected individuals in each group $i$, giving $$\displaystyle\dot{I}_{i}$$ $$\displaystyle=-\gamma_{i}I_{i}+\sum_{j=1}^{n}\beta_{ij}N_{i}I_{j}-\sum_{j=1}^{n}\beta_{ji}I_{i}I_{j},$$ (20) which is studied in the set $\Omega=\Pi_{i=1}^{n}[0,N_{i}]$.
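A minimal numerical sketch of system (20) follows (an illustrative discretization of our own, using the equivalent form $\dot I_i=-\gamma_i I_i+(N_i-I_i)\sum_j\beta_{ij}I_j$, which follows from (20) under the symmetry $\beta_{ij}=\beta_{ji}$):

```python
def multigroup_si_step(I, N, beta, gamma, dt):
    """One forward-Euler step of the multigroup SI model (20):
       dI_i/dt = -gamma_i I_i + (N_i - I_i) * sum_j beta_ij I_j."""
    n = len(I)
    return [I[i] + dt * (-gamma[i] * I[i]
                         + (N[i] - I[i]) * sum(beta[i][j] * I[j]
                                               for j in range(n)))
            for i in range(n)]
```

For a single group with $\beta N/\gamma>1$, a small seed of infection grows, as expected when the reproduction number exceeds one.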
This system has a globally asymptotically stable disease-free equilibrium at $I=(I_{1},\cdots,I_{n})=0$ and another endemic equilibrium, with $I=k$ for $k$ a constant vector, that is globally asymptotically stable when the disease-free equilibrium loses its stability. We form the matrix $A=(a_{ij})$ where $a_{ij}=\beta_{ij}$ when $i\neq j$, and $a_{ij}=\beta_{ij}-\gamma_{i}$ if $i=j$, and define the column vector $B(I)$ with components $B_{i}=-\sum_{j=1}^{n}\beta_{ji}I_{i}I_{j}$; then equation (20) can be rewritten Lajmanovich and Yorke, (1976) as $$\displaystyle\dot{I}$$ $$\displaystyle=AI+B(I).$$ (21) The main result of Lajmanovich and Yorke, (1976) is the following: Theorem 3.1. Consider the system given by (21) where $A$ is an $n\times n$ irreducible matrix and $B$ is continuously differentiable in a region $D$ of $R^{n}$. Assume that 1. the compact convex set $C\subset D$ contains the origin and is positively invariant for (21), 2. $\lim_{I\to 0}\frac{||B(I)||}{||I||}=0$, 3. there exist $\lambda>0$ and a real eigenvector $v$ of $A^{T}$ such that $v\cdot I\geq\lambda||I||$ for all $I\in C$, 4. $v\cdot B(I)\leq 0$ for all $I\in C$. Then either $I=0$ is globally asymptotically stable in $C$, or for any $I_{0}\in C-\{0\}$ the solution $\phi(t,I_{0})$ of (21) satisfies $\lim_{t\to\infty}||\phi(t,I_{0})||\geq m$, with $m$ independent of $I_{0}$; in the latter case, there exists a constant solution of (21), $I=k$, $k\in C-\{0\}$. In biological terms, this result simply states that if the basic reproductive number $\mathcal{R}_{0}<1$, then the disease-free equilibrium is unique and asymptotically stable, but if $\mathcal{R}_{0}>1$, then it is unstable and there exists another equilibrium, the endemic equilibrium, that is globally asymptotically stable.
The system (20), known as the Lajmanovich and Yorke model, was one of the first multigroup models whose dynamics can be fully characterized in terms of the fundamental parameter $\mathcal{R}_{0}$ and the transcritical bifurcation that the equilibria undergo at the critical value $\mathcal{R}_{0}=1$ Hethcote, (2000b). Several studies have shown, however, that epidemiological models that follow the Lajmanovich and Yorke characterization are not that general. They are limited in the hypotheses regarding the total population, which is assumed constant, and the mixing pattern among population subgroups. The existence of subthreshold endemic states, that is, endemic states that exist even when $\mathcal{R}_{0}<1$, has been shown for directly transmitted diseases and vector transmitted diseases (see, for example, Dushoff et al., (1998); Gumel, (2012); Kribs-Zaleta and Velasco-Hernandez, (2000); Garba et al., (2008); Saldaña and Barradas, (2019); Villavicencio-Pulido and Barradas, (2008)). A simple example of this type of behavior is illustrated in a generalization of the model first published in Kribs-Zaleta and Velasco-Hernandez, (2000) that is pertinent to the study of vaccination policies during the present pandemic. A close variant of this model is briefly presented now. Let $S$, $I$, $V$, and $R$ denote the subpopulations of susceptible, infectious, vaccinated, and immune individuals, respectively. Let $\beta$ be the effective contact rate, $\phi$ the vaccination rate, $\sigma$ the proportion of reduction in the contact rate due to the vaccine, $\omega$ the waning rate of both vaccine and natural immunity, $\gamma$ the recovery rate, and $\Lambda$, $\mu$ the equal birth and mortality rates, respectively.
The model reads $$\displaystyle\dot{S}$$ $$\displaystyle=\Lambda-\beta S\frac{I}{N}-(\mu+\phi)S+\omega V,$$ (22) $$\displaystyle\dot{V}$$ $$\displaystyle=\phi S-(1-\sigma)\beta V\frac{I}{N}-(\mu+\omega)V,$$ $$\displaystyle\dot{I}$$ $$\displaystyle=\beta\frac{I}{N}(S+(1-\sigma)V)-(\gamma+\mu)I,$$ $$\displaystyle\dot{R}$$ $$\displaystyle=\gamma I-(\omega+\mu)R.$$ Using model (22), one can show that, besides the well-known forward bifurcation in which $\mathcal{R}_{0}<1$ is a necessary and sufficient condition for disease elimination, epidemic models may present a backward bifurcation where a stable endemic equilibrium (EE) co-exists with an unstable EE and a stable disease-free equilibrium for $\mathcal{R}_{0}<1$ Kribs-Zaleta and Velasco-Hernandez, (2000). In Figure 1 we present a schematic representation of the backward bifurcation phenomenon. In terms of disease control, a backward or subcritical bifurcation implies that $\mathcal{R}_{0}$ has to be reduced below a value lower than one and, in some cases, difficult to estimate. Hence, a backward bifurcation is usually considered an undesirable phenomenon since control policies turn out to be more complicated Barradas and Vázquez, (2019). Another factor that alters the neat result in Theorem 3.1 by Lajmanovich and Yorke is the fact that the effective contact rates are not constant. They can, of course, vary when subject to climatic variability, but there is a more basic reason why they are variable: effective contact rates depend on the sizes of the groups that form the population and, more importantly, on the mixing preferences of those groups. Busenberg and Castillo-Chavez Busenberg and Castillo-chavez, (1991), after the pioneering work of Jacquez et al., (1988), constructed an axiomatic framework that clearly defines the conditions that contact rates should satisfy to be consistent with the dynamics of epidemiological processes.
Following Busenberg and Castillo-chavez, (1991), define $c_{ij}$ as the proportion of contacts that individuals from group $i$ have with those in group $j$. These contact rates must satisfy the following conditions to be consistent with the epidemic dynamics: 1. $c_{ij}\geq 0$, 2. $\sum_{j=1}^{n}c_{ij}=1$ for $i=1,...,n$, 3. $a_{i}N_{i}c_{ij}=a_{j}N_{j}c_{ji}$, where $N_{i}$ is the size of group $i$ and $a_{i}$ is the activity or risk level of group $i$. Two particular contact patterns satisfy the above axioms. The first is proportional mixing, where contacts between groups $i$ and $j$ are separable, $c_{ij}=c^{a}_{i}c^{b}_{j}$, which implies, using axioms 2 and 3 above, that $c_{ij}=Cc_{j}^{b}=\hat{c}_{j}$ with $$\hat{c}_{j}=\frac{a_{j}N_{j}}{\sum_{i=1}^{n}a_{i}N_{i}}.$$ Proportionate mixing refers to the fact that contacts are distributed according to the proportion of each group in the overall population. The other important contact function is the so-called preferential mixing. Here we divide the contacts of group $i$ into two parts: a fraction of the contacts, $\epsilon_{i}$, is reserved for within-group contacts (group $i$), and the other fraction, $1-\epsilon_{i}$, is distributed among all other groups according to the following formula $$c_{ij}=\epsilon_{i}\delta_{ij}+(1-\epsilon_{i})\frac{(1-\epsilon_{j})a_{j}N_{j}}{\sum_{k=1}^{n}(1-\epsilon_{k})a_{k}N_{k}},$$ with $\delta_{ij}=0$ if $i\neq j$ and $\delta_{ij}=1$ if $i=j$. Many of the contact matrices that have been evaluated in the POLYMOD study Mossong et al., (2008); Prem et al., (2017) show a pattern similar to that of preferential mixing, characterized by being diagonally dominant, since most population groups tend to share a high proportion of their contacts within the same group, in this case defined by age classes.
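The preferential mixing formula can be checked against the three axioms directly. The following sketch (the group sizes, activity levels, and $\epsilon_{i}$ values used to exercise it are illustrative) builds the matrix $c_{ij}$; its rows sum to one and it satisfies the balance condition $a_{i}N_{i}c_{ij}=a_{j}N_{j}c_{ji}$.

```python
def preferential_mixing(a, N, eps):
    """Build c_ij = eps_i * delta_ij + (1 - eps_i) * hat_c_j, where
       hat_c_j = (1 - eps_j) a_j N_j / sum_k (1 - eps_k) a_k N_k."""
    n = len(N)
    denom = sum((1 - eps[k]) * a[k] * N[k] for k in range(n))
    return [[(eps[i] if i == j else 0.0)
             + (1 - eps[i]) * (1 - eps[j]) * a[j] * N[j] / denom
             for j in range(n)]
            for i in range(n)]
```

Setting all $\epsilon_{i}=0$ recovers proportionate mixing as a special case.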
However, special interactions, such as those between children and adults, which tend to have strong sub-diagonal components, and work settings, where age groups are bounded above and below, do not arise from the preferential mixing formula. Glasser and coworkers Glasser et al., (2012) have generalized contact matrices and the preferential mixing function to other situations. 4 Modeling the early phase of the COVID-19 epidemic outbreak An important feature of the COVID-19 disease is the incubation period, which is the time between exposure to the virus and symptom onset. The average incubation period is 5-6 days but can be as long as 14 days Lauer et al., (2020). The SEIR model $$\displaystyle\dot{S}$$ $$\displaystyle=-\beta IS,$$ (23) $$\displaystyle\dot{E}$$ $$\displaystyle=\beta IS-\sigma E,$$ $$\displaystyle\dot{I}$$ $$\displaystyle=\sigma E-\gamma I,$$ $$\displaystyle\dot{R}$$ $$\displaystyle=\gamma I,$$ allows us to incorporate the mean incubation period ($1/\sigma$) via the compartment $E$, which includes exposed individuals who have been infected but are still not infectious. The epidemic growth rate $r$ of the SEIR model (23) can be obtained by exploring the local asymptotic stability of the disease-free equilibrium (DFE) $E_{0}=(N(0),0,0,0)$. In particular, $r$ is computed as the dominant eigenvalue of the Jacobian of system (23) evaluated at $E_{0}$. In this case, $\mathcal{R}_{0}$ and $r$ are related as follows: $$\mathcal{R}_{0}=\left(1+\dfrac{r}{\sigma}\right)\left(1+\dfrac{r}{\gamma}\right).$$ (24) The approximation for the serial interval of the SIR model is $T_{g}=1/\gamma$; for the SEIR model it is $T_{g}=1/\gamma+1/\sigma$. Hence, the conceptual approximation of the serial interval given by the SEIR model is more accurate in the sense that it incorporates both the incubation and the recovery periods.
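Relation (24) can be verified by computing $r$ as the dominant eigenvalue of the linearized $(E,I)$ subsystem of (23) at the DFE, whose characteristic polynomial gives $(r+\sigma)(r+\gamma)=\sigma\gamma\mathcal{R}_{0}$. A small sketch (parameter values in the test are illustrative):

```python
import math

def seir_growth_rate(R0, sigma, gamma):
    """Dominant eigenvalue of the Jacobian of the (E, I) subsystem
       of (23) at the DFE: the larger root of
       (r + sigma)(r + gamma) = sigma * gamma * R0."""
    disc = (sigma - gamma) ** 2 + 4.0 * sigma * gamma * R0
    return 0.5 * (-(sigma + gamma) + math.sqrt(disc))
```

Inverting the relation reproduces (24): $(1+r/\sigma)(1+r/\gamma)=\mathcal{R}_{0}$.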
To have a better understanding of how the model assumptions and data may affect the estimation of $\mathcal{R}_{0}$, let us consider the following example in the COVID-19 context. Consider an average incubation period of $1/\sigma=5$ days Lauer et al., (2020) and a recovery time of $1/\gamma=10$ days. The epidemic doubling time at the early phase of the epidemic in Hubei Province, where SARS-CoV-2 was first recognized, has been approximated to be $T_{d}=2.5$ days Muniz-Rodriguez et al., (2020), so $r=\log 2/T_{d}=0.277$; with these values the SIR relation (18) estimates $\mathcal{R}_{0}=3.77$ and the SEIR relation (24) estimates $\mathcal{R}_{0}\approx 8.99$. On the other hand, the estimated serial interval in Hubei from contact tracing data has a median of $4.6$ days Yang et al., (2020); hence, using the real serial interval the SIR model predicts $\mathcal{R}_{0}=2.27$. The overestimation of the serial interval leads to the overestimation of $\mathcal{R}_{0}$, which implies that Kermack-McKendrick-type models may erroneously anticipate the epidemic peak time (the date of maximum incidence) and also overestimate the final epidemic size (see Fig. 2). Standard deterministic Kermack-McKendrick type models that neglect demographic dynamics predict a single epidemic wave (see Fig. 2); however, rather than a single peak and near-symmetric decline, the COVID-19 epidemic in several cities has shown plateau-like states followed by epidemic rebounds Mena et al., (2020); Solbach et al., (2020); Santana-Cibrian et al., (2020); Weitz et al., (2020). This phenomenon is in part due to changes in transmission induced by lockdowns and other non-pharmaceutical interventions implemented by public health officers to reduce the epidemic burden. Besides, individuals are constantly changing their behavior and mobility patterns depending on the perceived risk of acquiring the infection, so they may reduce their contacts at times of high incidence Weitz et al., (2020).
How awareness-driven behavior modulates the epidemic shape has been investigated in Weitz et al., (2020) by modifying the SEIR model (23) as follows: $$\displaystyle\dot{S}$$ $$\displaystyle=-\dfrac{\beta IS}{1+(\delta/\delta_{c})^{k}},$$ (25) $$\displaystyle\dot{E}$$ $$\displaystyle=\dfrac{\beta IS}{1+(\delta/\delta_{c})^{k}}-\sigma E,$$ $$\displaystyle\dot{I}$$ $$\displaystyle=\sigma E-\gamma I,$$ $$\displaystyle\dot{R}$$ $$\displaystyle=(1-f)\gamma I,$$ $$\displaystyle\dot{D}$$ $$\displaystyle=f\gamma I$$ where $D$ measures the number of deaths in the population. The parameter $f$ measures the mean fraction of people who die after contracting COVID-19. The transmission rate is affected by the death-awareness social distancing rate $\delta=\dot{D}$, the half-saturation constant $\delta_{c}\geq 0$, and the sharpness of change in the force of infection $k\geq 1$ Weitz et al., (2020). The introduction of awareness in the SEIR model (23) produces scenarios in which a plateau-like behavior appears, that is, the number of daily infections decreases at a very slow pace after the peak (see Fig. 3). 4.1 The role of asymptomatic transmission Another important feature of infection by SARS-CoV-2 is that some people are infected and can transmit the virus but do not experience any symptoms Park et al., (2020a). Current evidence suggests that asymptomatic carriers will transmit the infection to fewer people than symptomatic individuals, but their real contribution to transmission is difficult to estimate since they are expected to have more contacts than symptomatic carriers Byambasuren et al., (2020). These factors may have an impact on disease dynamics. Hence, we extend the SEIR model (23) to illustrate the role of asymptomatic carriers.
The model is given by the following system of non-linear differential equations: $$\displaystyle\dot{S}$$ $$\displaystyle=-(\beta_{A}A+\beta_{I}I)S,$$ (26) $$\displaystyle\dot{E}$$ $$\displaystyle=(\beta_{A}A+\beta_{I}I)S-\sigma E,$$ $$\displaystyle\dot{A}$$ $$\displaystyle=(1-p)\sigma E-\gamma_{A}A,$$ $$\displaystyle\dot{I}$$ $$\displaystyle=p\sigma E-\gamma_{I}I,$$ $$\displaystyle\dot{R}$$ $$\displaystyle=\gamma_{A}A+\gamma_{I}I,$$ where the class $A$ represents asymptomatic infectious individuals. The total population at time $t$ is now $N(t)=S(t)+E(t)+A(t)+I(t)+R(t)=1$. The parameters $\beta_{A}$ and $\beta_{I}$ represent the effective contact rates of the asymptomatic and symptomatic infectious classes, respectively. A proportion $p$ of the exposed individuals $E$ will transition to the symptomatic infectious class $I$ at a rate $\sigma$, while the other proportion $1-p$ will enter the asymptomatic infectious class $A$. The mean infectious periods in the asymptomatic and symptomatic infectious classes are $1/\gamma_{A}$ and $1/\gamma_{I}$, respectively. Upon recovery, individuals gain permanent immunity and move to the recovered class $R$. However, we remark that this assumption is only valid when studying the first outbreak because, to date, it is still unknown how long natural immunity will last and there have already been confirmed cases of coronavirus reinfections Parry, (2020); Saldana and Velasco-Hernandez, (2020); To et al., (2020); West et al., (2021). 4.1.1 Basic properties of the SEAIR model A common first step in analyzing compartmental epidemic models is finding the equilibrium points. Setting the right-hand side of system (26) equal to zero, we see that there is a continuum of DFE of the form $$E_{0}=(S^{*},E^{*},A^{*},I^{*},R^{*})=(N(0),0,0,0,0)$$ (27) where $N(0)=S(0)$ is the number of susceptible individuals at the initial time. For model (26), there is no endemic equilibrium.
This is because an endemic equilibrium needs a continuous supply of susceptible individuals, which generally occurs via births into the susceptible population or through the waning of immunity Blackwood and Childs, (2018). However, model (26) assumes permanent immunity and does not consider demographic dynamics, i.e., births and deaths in the population. From the model equations (26), it is easy to see that $\dot{N}=\dot{S}+\dot{E}+\dot{A}+\dot{I}+\dot{R}=0$; therefore the total population is a constant, $N(t)=N(0)$ for all $t$, and the solutions of system (26) are bounded. The biologically feasible region is $$\Omega=\left\{(S,E,A,I,R)\in\mathbb{R}_{+}^{5}:S(t)+E(t)+A(t)+I(t)+R(t)=N(t)\right\}.$$ Let $X(t)$ be the solution of system (26) for a well-defined initial condition $X(0)\in\Omega$. Since $X_{i}=0$ implies $\dot{X}_{i}\geq 0$ for any state variable, $X(t)\in\Omega$ for all $t>0$. Thus, solution trajectories satisfy the usual positiveness and boundedness properties and the model is both epidemiologically and mathematically well posed Hethcote, (2000a). The local stability of the DFE is usually explored via the basic reproduction number. As we have mentioned, the mathematical expression for $\mathcal{R}_{0}$ depends directly on the model assumptions and structure. There are several tools for the computation of the basic reproduction number Heffernan et al., (2005). Probably the most popular is the next-generation approach (Diekmann et al., 1990) using the method of (Van den Driessche and Watmough, 2002). Under this approach, it is necessary to study the subsystem that describes the production of new infections and changes among infected individuals. The Jacobian matrix $\mathbf{J}$ of this subsystem at the DFE is decomposed as $\mathbf{J}=\mathbf{F}-\mathbf{V}$, where $\mathbf{F}$ is the transmission part and $\mathbf{V}$ describes changes in the infection status.
The next-generation matrix is defined as $\mathbf{K}=\mathbf{F}\mathbf{V}^{-1}$, and $\mathcal{R}_{0}=\rho(\mathbf{K})$, where $\rho(\cdot)$ denotes the spectral radius. For system (26), we obtain $$\mathbf{F}=\left[\begin{array}{ccc}0&\beta_{A}S(0)&\beta_{I}S(0)\\ 0&0&0\\ 0&0&0\end{array}\right],\quad\mathbf{V}=\left[\begin{array}{ccc}\sigma&0&0\\ -(1-p)\sigma&\gamma_{A}&0\\ -p\sigma&0&\gamma_{I}\end{array}\right].$$ Therefore, the basic reproduction number is given by $$\mathcal{R}_{0}=\left(\dfrac{(1-p)\beta_{A}}{\gamma_{A}}+\dfrac{p\beta_{I}}{\gamma_{I}}\right)S(0).$$ (28) $\mathcal{R}_{0}$ is a weighted average determined by the proportion, $p$, of symptomatic infected individuals. If there are no asymptomatic carriers, i.e., $p=1$, we recover the $\mathcal{R}_{0}$ formula found in the SIR model. As a consequence of the Van den Driessche & Watmough Theorem Van den Driessche and Watmough, (2002), we establish the following result regarding the local stability of the DFE. Theorem 4.1. The continuum of DFE of system (26) given by $E_{0}$ in (27) is locally asymptotically stable if the basic reproduction number satisfies $\mathcal{R}_{0}<1$ and unstable if $\mathcal{R}_{0}>1$. From the basic reproduction number (28), it is evident that asymptomatic carriers may play an important role in the spread of COVID-19 within a population, depending on their ability to transmit the infection ($\beta_{A}$) and their frequency in comparison with symptomatic carriers ($p$). Early estimates at the beginning of the pandemic suggested that approximately 80% of infections were asymptomatic Buitrago-Garcia et al., (2020). More recent evidence suggests that only between 17% and 20% of infections do not present any symptoms Byambasuren et al., (2020).
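The closed form (28) can be cross-checked numerically by assembling $\mathbf{F}$ and $\mathbf{V}$ and taking the spectral radius of $\mathbf{K}=\mathbf{F}\mathbf{V}^{-1}$. A sketch follows (it assumes NumPy is available; the parameter values in the test are illustrative):

```python
import numpy as np

def r0_seair(beta_A, beta_I, gamma_A, gamma_I, p, sigma, S0):
    """R0 of system (26) via the next-generation matrix K = F V^{-1}."""
    F = np.array([[0.0, beta_A * S0, beta_I * S0],
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
    V = np.array([[sigma, 0.0, 0.0],
                  [-(1.0 - p) * sigma, gamma_A, 0.0],
                  [-p * sigma, 0.0, gamma_I]])
    K = F @ np.linalg.inv(V)
    return max(abs(np.linalg.eigvals(K)))  # spectral radius rho(K)
```

The result agrees with $\left((1-p)\beta_{A}/\gamma_{A}+p\beta_{I}/\gamma_{I}\right)S(0)$ from (28).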
The authors in Byambasuren et al., (2020) also found that the transmission risk from asymptomatic cases appeared to be lower than that of symptomatic cases, but there was considerable uncertainty in the extent of this (relative risk 0.58; 95% CI 0.335 to 0.994). Nevertheless, asymptomatic individuals may have more contacts than symptomatic people. There are still different opinions about the true magnitude of asymptomatic infections and their impact on the pandemic Qiu et al., (2021). Finally, we must remark that although model (26) considers asymptomatic transmission, it is still a very simplified model that ignores the existence of a pre-symptomatic stage. As remarked in He et al., (2020), infectiousness usually starts 2.5 days before symptom onset with a high transmission rate. So besides the non-infectious exposed period and the fully asymptomatic class, some authors have also considered a class for the pre-symptomatic stage Aleta et al., (2020); Day et al., (2020). 5 The impact of non-pharmaceutical interventions The SEAIR model (26) was proposed to study the early phase of the outbreak and therefore assumes that at the start of the pandemic no interventions were applied to control the spread of SARS-CoV-2. This makes sense for the first stage of the pandemic which, except for China, was driven by imported cases. However, as levels of local transmission began to increase, sanitary emergency measures were implemented by health authorities in several countries. Given the absence of a vaccine or effective treatment against COVID-19 at the beginning of the pandemic, preventive measures pertained to non-pharmaceutical interventions (NPIs) such as wearing a mask in public, staying at home, avoiding places of mass gathering, social distancing, ventilating indoor spaces, washing hands often, etc.
To have a full picture of the effect of NPIs, it is necessary to formulate mechanistic mathematical models that explicitly take into account the impact induced by such sanitary measures. There are different approaches to incorporate NPIs into compartmental models; see, for example, Acuña-Zegarra et al., (2020); Aguiar et al., (2020); Aleta et al., (2020); de León et al., (2020); Eikenberry et al., (2020); Ngonghala et al., (2020); Okuonghae and Omame, (2020); Park et al., (2020b); Saldaña et al., (2020); Santana-Cibrian et al., (2020); Tocto-Erazo et al., (2020); Ullah and Khan, (2020). Here, we formulate a compartmental mathematical model that explicitly incorporates: (i) isolation of infectious individuals and (ii) mitigation measures that reduce the number of contacts among the individuals in the population, namely, temporary cancellation of non-essential activities, lockdown, and social distancing. One of the key tasks throughout this pandemic has been the use of mathematical models and epidemiological data to forecast excess hospital demand. Hence, we also incorporate appropriate compartments to monitor the required hospital beds, including the number of intensive care units (ICU), during the outbreak.
The new model is an extension of the SEAIR model (26) and is given by the following set of differential equations: $$\displaystyle\dot{S}$$ $$\displaystyle=-\epsilon(t)(\beta_{A}A+\beta_{I}I)S,$$ (29) $$\displaystyle\dot{E}$$ $$\displaystyle=\epsilon(t)(\beta_{A}A+\beta_{I}I)S-\sigma E,$$ $$\displaystyle\dot{A}$$ $$\displaystyle=(1-p)\sigma E-\gamma_{A}A,$$ $$\displaystyle\dot{I}$$ $$\displaystyle=p\sigma E-\gamma_{I}I,$$ $$\displaystyle\dot{Q}$$ $$\displaystyle=\alpha\gamma_{I}I-\delta Q,$$ $$\displaystyle\dot{H}$$ $$\displaystyle=\delta(1-\psi)Q-\gamma_{H}H,$$ $$\displaystyle\dot{C}$$ $$\displaystyle=\delta\psi Q-\gamma_{C}C,$$ $$\displaystyle\dot{R}$$ $$\displaystyle=\gamma_{A}A+(1-\alpha)\gamma_{I}I+\gamma_{H}(1-\mu)H+\gamma_{C}(1-\mu)C,$$ $$\displaystyle\dot{D}$$ $$\displaystyle=\gamma_{H}\mu H+\gamma_{C}\mu C.$$ In model (29), after infection, a fraction $\alpha$ of the symptomatic individuals develop severe symptoms and are therefore isolated, entering the home quarantine compartment $Q$; the other fraction recovers from the disease and enters the immune class $R$. The model assumes that once isolated, infectious individuals no longer contribute to the force of infection. Individuals spend an average of $1/\delta$ days in the home quarantine class, after which a fraction $\psi$ of them develop symptoms that require hospitalization $H$ or an intensive care unit $C$. The average times individuals spend in the classes $H$ and $C$ are $1/\gamma_{H}$ and $1/\gamma_{C}$, respectively, and a fraction $\mu$ of such individuals experience COVID-19-induced death. The function $0\leq\epsilon(t)\leq 1$ measures the reduction at a specific time $t$ in the transmission rates achieved by the implementation of a lockdown or any social distancing measures that reduce the number of contacts among the population. In Figure 4, we perform numerical simulations aimed at showing how the implementation and relaxation of lockdowns impacts the transmission dynamics of COVID-19.
For illustration purposes, we consider a rather simple form for the function $\epsilon(t)$. We assume that for the first 40 days after the emergence of SARS-CoV-2 in the population no mitigation measures are implemented, so $\epsilon(t)=1$ for $0\leq t<40$. After this, health authorities implement a very strict lockdown, for approximately two months, that significantly reduces (by 90% in our simulations) the number of contacts within the population; thus $\epsilon(t)=0.1$ for $40\leq t<100$. The reduction in the prevalence of the infection due to the lockdown implementation is very clear (see Figure 4). After this period, there is a partial relaxation of the lockdown to reflect the need for reactivating social and economic activities, so $\epsilon(t)=0.5$ for $100\leq t\leq 160$. This causes a new increment in the number of new infections and the start of the second wave of the epidemic (see Figure 4). Finally, after two months of partial relaxation, if health authorities decide to completely lift social distancing measures, i.e., $\epsilon(t)=1$ for $t>160$, the result is an exponential increase in the prevalence of the infection that may increase healthcare system pressure. 5.1 Optimization of non-pharmaceutical interventions The effectiveness of non-pharmaceutical interventions to control the epidemic has been an important subject of recent work Bairagi et al., (2020); Bugalia et al., (2020); Santamaría-Holek and Castaño, (2020); Xing et al., (2020). The economic and social costs of lockdowns, bans of public events, closures of restaurants, commercial centers, etc., must be limited. Designing optimal transitory NPIs that reduce disease spread at the lowest cost is a key issue that has been investigated in Angulo et al., (2021) using a simple SIR model.
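The contact-reduction schedule described above can be encoded as a simple piecewise function of time (switching times and levels are the ones used in our simulations):

```python
def epsilon(t):
    """Piecewise reduction factor for the transmission rates in (29)."""
    if t < 40:
        return 1.0   # no mitigation measures
    if t < 100:
        return 0.1   # strict lockdown: 90% contact reduction
    if t <= 160:
        return 0.5   # partial relaxation
    return 1.0       # measures completely lifted
```

This function multiplies the force of infection in the $\dot S$ and $\dot E$ equations of (29) at each integration step.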
Following these authors, the design of adequate NPIs involves a trade-off between minimizing the economic cost of their implementation and the reduction or minimization of deaths due to insufficient health support. In mathematical terms, they assume that NPIs may reduce the effective contact rate $\beta$ by a control term $(1-u)$, $u\in[0,1]$. Note that $u=1$ implies an effective contact rate of zero. However, such an aim is unrealistic, and thus an upper bound for $u$, defined as $u_{max}\in(0,1)$, is postulated; hence an admissible control must satisfy $u\in[0,u_{max}]$. In this approach the control acts on the nonlinear term of the SIR model. The authors also assume that health services can adequately manage up to a maximum prevalence $I_{max}\in[0,1)$. This implies that the action of NPIs, measured by $u$, must maintain disease prevalence below $I_{max}$. The model is governed by the following equations: $$\displaystyle\dot{S}$$ $$\displaystyle=-(1-u)\beta SI,$$ (30) $$\displaystyle\dot{I}$$ $$\displaystyle=(1-u)\beta SI-\gamma I,$$ $$\displaystyle\dot{R}$$ $$\displaystyle=\gamma I.$$ Since it is assumed that $S(t)+I(t)+R(t)=1$, the problem is reduced to the $S-I$ plane. If the optimal NPIs problem has a solution $u^{*}$, then $u^{*}(S,I)$ gives the optimal reduction in the effective contact rate that the NPIs should achieve given that the epidemic is in the state $(S,I)$. The main result of Angulo et al., (2021) is a complete analytical characterization of the optimal NPIs in the SIR model (30). Their analysis shows that the solution to the optimal intervention is fully characterized by the separating curve $$\Phi(S)=\left\{\begin{array}{lc}I_{max}+\mathcal{R}_{c}^{-1}\ln(S/S^{*})-(S-S^{*})&\text{if}\quad S\geq S^{*}\\ \\ I_{max}&\text{if}\quad\mathcal{R}_{0}^{-1}\leq S\leq S^{*}\end{array}\right.$$ where $S^{*}=\min\left\{\mathcal{R}_{c}^{-1},1\right\}$, and $\mathcal{R}_{c}=(1-u_{max})\mathcal{R}_{0}$ is a controlled reproduction number.
Note that the shape of the separating curve depends on $I_{max}$ and $\mathcal{R}_{c}$. If $\mathcal{R}_{c}\leq 1$, the separating curve is the straight line $\Phi(S)=I_{max}$. When $\mathcal{R}_{c}>1$, the separating curve becomes nonlinear. The optimal control intervention is characterized by the separating curve as follows: (1) an optimal intervention exists if and only if the initial state $(S_{0},I_{0})$ lies below this separating curve, that is, $I_{0}\leq\Phi(S_{0})$; (2) if it exists, the optimal intervention $u^{*}$ takes the feedback form $$u^{*}(S,I)=\left\{\begin{array}{lcc}0&\text{if}\quad I<\Phi(S)\ \text{or}\ S\leq\mathcal{R}_{0}^{-1}\\ \\ u_{max}&\text{if}\quad I=\Phi(S)\ \text{and}\ S\geq S^{*}\\ \\ 1-1/(\mathcal{R}_{0}S)&\text{if}\quad I=\Phi(S)\ \text{and}\ S^{*}\geq S>\mathcal{R}_{0}^{-1}\\ \end{array}\right.$$ Hence, the optimal intervention starts when $I(t)$ reaches $\Phi(S(t))$, and then it slides $I(t)$ along $\Phi(S)$ until reaching the region where $S\leq\mathcal{R}_{0}^{-1}$. Observe that once the trajectory reaches this region, the prevalence of the infection will decrease without the need for further control Angulo et al., (2021). 6 Vaccination policies, herd-immunity and the effective reproduction number Besides being an indicator of the severity of an epidemic, the basic reproduction number $\mathcal{R}_{0}$ is a powerful tool to estimate the control effort needed to eradicate a disease. Consider an infection with $\mathcal{R}_{0}>1$ in a population that mixes homogeneously. If, of the $\mathcal{R}_{0}$ secondary cases that a typical infectious individual would generate, at least $\mathcal{R}_{0}-1$ are protected from the infection either through naturally acquired immunity after infection or through vaccine-induced immunity, then the epidemic cannot grow Keeling et al., (2013). Hence, the infection can be eradicated if a fraction of individuals greater than $$q=(\mathcal{R}_{0}-1)/\mathcal{R}_{0}=1-1/\mathcal{R}_{0}$$ (31) has been afforded lifelong protection.
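The separating curve and the feedback law can be coded directly. The following sketch assumes $S^{*}=\min\{\mathcal{R}_{c}^{-1},1\}$ (so that the curve reduces to the straight line $\Phi(S)=I_{max}$ whenever $\mathcal{R}_{c}\leq 1$); the parameter values in the test are illustrative:

```python
import math

def phi(S, I_max, Rc):
    """Separating curve Phi(S) of the optimal NPI problem for (30),
       with S* = min(1/Rc, 1)."""
    S_star = min(1.0 / Rc, 1.0)
    if S >= S_star:
        return I_max + math.log(S / S_star) / Rc - (S - S_star)
    return I_max

def u_star(S, I, I_max, R0, u_max):
    """Feedback form of the optimal contact reduction u*(S, I)."""
    Rc = (1.0 - u_max) * R0
    S_star = min(1.0 / Rc, 1.0)
    if I < phi(S, I_max, Rc) or S <= 1.0 / R0:
        return 0.0                   # below the curve: no intervention
    if S >= S_star:
        return u_max                 # slide along the nonlinear branch
    return 1.0 - 1.0 / (R0 * S)      # hold the prevalence at I_max
```

The third branch makes the effective reproduction number $(1-u)\mathcal{R}_{0}S$ equal to one, which is what holds the prevalence constant on the flat part of the curve.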
Under these conditions, a considerable fraction of the population is immune, providing indirect protection to those still susceptible. Hence, on average, a typical infectious individual no longer generates more than one secondary infection and herd immunity is established. Estimates of the basic reproduction number for SARS-CoV-2 usually range between $2.0$ and $4.5$; so, for example, if $\mathcal{R}_{0}=2.5$ as estimated for Mexico by Saldaña et al., (2020) (although there are several other estimates for this parameter, e.g., Acuña-Zegarra et al., (2020); Mena et al., (2020)), a vaccination program aiming to attain herd immunity must immunize at least 60% of the population. This result is only valid for a perfect vaccine that prevents infection with 100% efficacy; however, for most infections, vaccines only confer partial protection. For the case of SARS-CoV-2 infection, vaccine developers behind the Pfizer/BioNTech, Moderna, and Gamaleya vaccines have announced that their vaccines have efficacy above $90\%$, but other developers such as the AstraZeneca and Sinovac teams have vaccines with efficacy well below 90% Kim et al., (2021). For a vaccine with efficacy $0<\psi<1$ ($\psi=1$ means 100% efficacy), if health authorities vaccinate a fraction $v$ of the population and a fraction $s$ of the population remains susceptible, then $R_{v}=s\mathcal{R}_{0}+(1-\psi)v\mathcal{R}_{0}$ is the mean number of infections generated by a typical infectious person in this partially immunized population. So the minimum level of vaccination needed to eradicate the infection (the minimum $v$ such that $R_{v}<1$) becomes $v=q/\psi=(1-1/\mathcal{R}_{0})/\psi$. In the absence of mitigation measures, herd immunity is achieved when the effective reproduction number $\mathcal{R}_{e}$ (also denoted $\mathcal{R}_{t}$) equals unity, which in turn corresponds to the time when the peak of the epidemic is reached Weitz et al., (2020).
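The threshold coverage for an imperfect vaccine can be computed directly from the formula above; a minimal sketch (the function name is ours):

```python
def herd_immunity_coverage(R0: float, efficacy: float) -> float:
    """Minimum fraction to vaccinate so that R_v < 1.

    Uses v = (1 - 1/R0) / psi from the text; a result above 1 means
    herd immunity cannot be reached by vaccination alone.
    """
    q = 1.0 - 1.0 / R0          # classical threshold, eq. (31)
    return q / efficacy

# With R0 = 2.5 and a perfect vaccine this reproduces the 60% figure
# quoted in the text; with 90% efficacy the requirement rises to ~67%.
coverage_perfect = herd_immunity_coverage(2.5, 1.0)
coverage_leaky = herd_immunity_coverage(2.5, 0.9)
```

Note that for a low-efficacy vaccine, say $\psi=0.5$ with $\mathcal{R}_{0}=2.5$, the required coverage exceeds 100%, illustrating why efficacy matters for eradication.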
The effective reproduction number $\mathcal{R}_{e}$ quantifies the mean number of infections produced by a typical infectious case in a partially immune or protected (via isolation, quarantine, etc.) population Delamater et al., (2019). Unlike $\mathcal{R}_{0}$, the effective reproduction number does not assume a fully susceptible population and changes over time depending on the population’s immune status and the impact of non-pharmaceutical interventions or vaccination programs in the mitigation of further disease transmission. Under the classical homogeneous mixing assumption, the effective reproduction number at a particular time $t$ can be approximated as $$\mathcal{R}_{e}(t)\approx(1-p_{c}(t))\mathcal{R}_{0}S(t)/N(t),$$ (32) where $p_{c}(t)$ is the reduction in the transmission rate due to mitigation measures, whose effectiveness may vary over time. The depletion of the susceptible pool decreases the value of $\mathcal{R}_{e}$. Observe that without mitigation measures ($p_{c}=0$), at the beginning of the outbreak, $\mathcal{R}_{e}(0)=\mathcal{R}_{0}$. The time-dependent effective reproduction number $\mathcal{R}_{e}(t)$ has been of paramount importance to assess the impact of mitigation measures against COVID-19 and to guide the easing of such restrictions. There are several methodologies to estimate the value of $\mathcal{R}_{e}$ from data; see, for example, Bettencourt and Ribeiro, (2008); Cori et al., (2013); Contreras et al., (2020); Nishiura and Chowell, (2009); Wallinga and Teunis, (2004). Probably the simplest mathematical formulation to obtain the time-dependent effective reproduction number directly from epidemiological data is the method recently proposed by Contreras and coworkers Contreras et al., (2020). This method considers that, in real-life situations, during an epidemic outbreak, the effective contact rate $\beta$ is not constant as assumed in (1).
Instead, it is plausible to assume that $\beta(t)$ is a function that varies in time depending on several circumstances, for example, the impact of mobility restrictions and other mitigation measures to control the disease’s spread. Then, the effective reproduction number is $$\mathcal{R}_{e}(t)=\dfrac{\beta(t)}{\gamma}\dfrac{S(t)}{N(t)}.$$ (33) Considering the SIR model equations (1) with a time-dependent $\beta(t)$, a direct computation using the chain rule allows one to obtain Contreras et al., (2020) $$\dfrac{dI}{dS}=-1+\dfrac{1}{\mathcal{R}_{e}(t)}.$$ (34) Equation (34) is discretized in an interval $[t_{i-1},t_{i}]$ where it is assumed that $\mathcal{R}_{e}(t)=\mathcal{R}_{e}(t_{i})$ is constant, thus $$\mathcal{R}_{e}(t_{i})=\dfrac{1}{\dfrac{\Delta_{i}I}{\Delta_{i}S}+1}.$$ (35) An extension of the SIR model considering the number of deaths and a population balance implies that the discrete differences satisfy $\Delta_{i}S+\Delta_{i}I+\Delta_{i}R+\Delta_{i}D=0$, hence $$\mathcal{R}_{e}(t_{i})=\dfrac{\Delta_{i}I}{\Delta_{i}R+\Delta_{i}D}+1.$$ (36) As remarked in (Contreras et al.,, 2020), noting that $\Delta_{i}I=(-\Delta_{i}S)-(\Delta_{i}R+\Delta_{i}D)$, that is, the change in the prevalence, $\Delta_{i}I$, equals new infections, $-\Delta_{i}S$, minus new recoveries and new deaths, $(\Delta_{i}R+\Delta_{i}D)$, we obtain $$\mathcal{R}_{e}(t_{i})=\dfrac{-\Delta_{i}S}{\Delta_{i}R+\Delta_{i}D}=\dfrac{\textrm{New infections}}{\textrm{New recoveries}+\textrm{New deaths}}.$$ (37) The features, advantages, limitations, and practical application of this methodology to obtain $\mathcal{R}_{e}$ for COVID-19 are studied in more detail in Contreras et al., (2020); Medina-Ortiz et al., (2020). 6.1 COVID-19 vaccine prioritization Deployment of COVID-19 vaccines was initiated in several countries at the beginning of January 2021.
As expected, there has been high demand for the limited supplies of COVID-19 vaccines, so the optimization of vaccine allocation to maximize public health benefit has been a problem of interest Fitzpatrick and Galvani, (2021). Mathematical models have also been used to guide public health policies on the optimization of vaccine allocation Bubar et al., (2021); Buckner et al., (2021); Castro and Singer, (2021); Goldstein et al., (2021); Schmidt et al., (2020); Fitzpatrick and Galvani, (2021). It is well known that a vaccine directly protects those vaccinated but also indirectly protects those who are not; the more efficacious the vaccine, the greater and more beneficial the indirect protection will be. Recently, Bubar et al. Bubar et al., (2021) developed a suite of mathematical models geared to evaluate age-specific vaccine prioritization policies. Several of the vaccines currently available have high levels of efficacy (above 90%), which means that they can be effective in blocking transmission. On the other hand, influenza is a well-known respiratory infection for which vaccination policies do exist and may therefore serve as an important reference for those for SARS-CoV-2. However, the age-specific probability of infection and the age-specific mortality are different in these two diseases, with a higher burden for people older than 60 years Fitzpatrick and Galvani, (2021).
The model used by Bubar et al Bubar et al., (2021) is an age-structured SEIR model with a force of infection, $\lambda_{i}$, for a susceptible individual in age group $i$ given as follows: $$\lambda_{i}=\phi_{i}\sum_{j=1}^{n}c_{ij}\frac{I_{j}+I_{vj}+I_{xj}}{N_{j}-D_{j}}$$ where $\phi_{i}$ is the probability of a successful transmission given contact with an infectious individual, $c_{ij}$ is the daily contact rate of individuals in age group $i$ with individuals in age group $j$, $I_{j}$ is the number of infectious unvaccinated individuals, $I_{vj}$ is the number of individuals who are vaccinated yet infectious, $I_{xj}$ is the number of infectious individuals who are ineligible for vaccination due to personal hesitancy or a positive serological test, $N_{j}$ is the number of individuals in age group $j$, and $D_{j}$ is the number of individuals of age group $j$ who have died. Bubar et al Bubar et al., (2021) incorporated vaccine hesitancy by assuming limited vaccine uptake such that at most 70% of any age group was eligible to be vaccinated. This constraint was implemented by initializing 30% of each class in each age group as ineligible for vaccination. Moreover, in their most parsimonious model, they assumed the vaccine to be both transmission- and infection-blocking, and to work with variable efficacy. In particular, they considered two ways to implement vaccine efficacy ($ve$): as an all-or-nothing vaccine, where the vaccine yields perfect protection to a fraction $ve$ of people who receive it, or as a leaky vaccine, where all vaccinated people have a reduced probability $ve$ of infection after vaccination. To incorporate age-dependent vaccine efficacy, they parameterized the relationship between age and vaccine efficacy via an age-efficacy curve with a baseline efficacy, an age at which efficacy begins to decrease (hinge age), and a minimum vaccine efficacy $ve_{m}$ for adults $80+$.
This was done assuming that $ve$ is equal to a baseline value for all ages younger than the hinge age and then decreases step-wise in equal increments for each decade to the specified minimum $ve_{m}$ for the $80+$ age group. Finally, existing seroprevalence estimates were incorporated by varying the basic reproduction number and the percentage cumulative incidence reached. The model outcomes used to compare the performance of different vaccine allocation strategies were the cumulative number of infections, deaths, and years of life lost. Bubar et al Bubar et al., (2021) concluded that although vaccination targeted at younger people (20-50 years old) minimized cumulative incidence, mortality and years of life lost were minimized when vaccines were applied first to older people. These results were based on numerical simulations with a time horizon of one year after the date of vaccine introduction, arguing that this allowed them to focus on the early prioritization phase of the COVID-19 vaccination programs. 7 Superspreading events At the beginning of the epidemic, surveillance made extensive use of the basic reproduction number $\mathcal{R}_{0}$ to characterize the average number of secondary infections. Nevertheless, $\mathcal{R}_{0}$ may hide a large variation at the individual level. Indeed, after more than one year into the pandemic, there have been several reports of superspreading events (SSEs) in which many individuals are infected at once by one or a few infectious carriers (see, for example, Endo et al., (2020); Kochańczyk et al., (2020); Lemieux et al., (2020); Liu et al., (2020); Ndairou et al., (2020); Santana-Cibrian et al., (2020) and the references therein). Current reports suggest that a small group of infections generates most of the secondary cases Kain et al., (2021).
In other words, even if $\mathcal{R}_{0}\approx 2-3$, most individuals are not infecting 2 or 3 other people; instead, a tiny number of people dominate transmission while the average person does not transmit the virus at all Althouse et al., (2020). Current data suggest that prolonged indoor gatherings with poor ventilation are one of the main factors inducing SSEs. Hence, crowded closed places are hotspots for SSEs and a source of COVID-19 infections. More factors may lead to SSEs, for example, events in which a huge number of people temporarily cluster, so that the number of contacts suddenly increases far above their mean Althouse et al., (2020). The existence of superspreaders, that is, individuals having biological features that cause them to shed more virus than others, is another issue that has attracted attention Lewis, (2021). To understand the role of individual variation in outbreak dynamics, Lloyd-Smith et al Lloyd-Smith et al., (2005) introduced the individual reproduction number, $\nu$, a random variable that assesses the average number of secondary cases generated by a particular infected individual. The values of $\nu$ can be estimated from a continuous probability distribution with population mean $\mathcal{R}_{0}$. In this context, SSEs are realizations from the right-hand tail of the distribution of $\nu$. A Poisson process is used to describe the stochastic essence of the transmission process; hence, the number of secondary infections generated by each case, $Z$, follows an offspring distribution $\mathbb{P}(Z=k)$ with $Z\sim Poisson(\nu)$. Lloyd-Smith et al. considered three possible distributions of $\nu$, generating three candidate models for the offspring distribution: (1) Generation-based models neglecting individual variation, i.e., $\nu=\mathcal{R}_{0}$ for all cases, yielding $Z\sim Poisson(\mathcal{R}_{0})$.
(2) Differential-equation models with homogeneous mixing transmission and constant recovery rates, where $\nu$ is exponentially distributed, yielding $Z\sim geometric(\mathcal{R}_{0})$. (3) $\nu$ is gamma-distributed with mean $\mathcal{R}_{0}$ and dispersion parameter $k$, yielding $Z\sim NB(\mathcal{R}_{0},k)$ (NB $=$ Negative Binomial). Observe that the NB model includes the Poisson ($k\rightarrow\infty$) and geometric ($k=1$) models as special cases. It has variance $\mathcal{R}_{0}(1+\mathcal{R}_{0}/k)$, so smaller values of $k$ indicate greater heterogeneity. Althouse et al Althouse et al., (2020) showed that an epidemic outbreak dominated by SSEs, with an NB distribution of secondary infections with small $k$, has very different transmission dynamics in comparison to a Poisson model outbreak with the same $\mathcal{R}_{0}$. For the NB model, secondary infections are over-dispersed, causing early transmission dynamics that are more stochastic. Hence, in this case, an epidemic outbreak has a lower probability of growing into a huge epidemic. Nevertheless, if the outbreak takes off under the NB model, the incidence starts showing stable exponential growth, with a growth rate approaching that of a model with the same $\mathcal{R}_{0}$ but a Poisson distribution of secondary infections, i.e., an NB model with $k\rightarrow\infty$. However, during the early phase of an NB outbreak that takes off, disease incidence will look more intense in the first few generations, when SSEs will generate most of the secondary infections, making it possible to spin out large infection clusters in a few generations, whereas a Poisson model cannot Althouse et al., (2020); Lloyd-Smith et al., (2005).
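This heterogeneity is easy to see by simulating the gamma-Poisson mixture that produces the NB offspring distribution; the sketch below uses illustrative values ($\mathcal{R}_{0}=2.5$, $k=0.1$) consistent with the COVID-19 estimates discussed in this section:

```python
import numpy as np

rng = np.random.default_rng(0)
R0, k, n = 2.5, 0.1, 100_000  # illustrative values for SARS-CoV-2

# nu ~ Gamma(shape=k, scale=R0/k) has mean R0; Z | nu ~ Poisson(nu)
# gives the negative-binomial offspring distribution NB(R0, k).
nu = rng.gamma(shape=k, scale=R0 / k, size=n)
Z = rng.poisson(nu)

# Sample mean is close to R0 and the sample variance is close to
# R0 * (1 + R0 / k), far above the Poisson variance R0.
# Share of all secondary infections caused by the top 10% of cases:
Z_sorted = np.sort(Z)[::-1]
top10_share = Z_sorted[: n // 10].sum() / Z.sum()
print(f"mean={Z.mean():.2f}, var={Z.var():.1f}, top 10% share={top10_share:.2f}")
```

With $k=0.1$ the top 10% of cases account for the large majority of transmission, while most simulated individuals infect no one, illustrating the SSE-dominated regime described above.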
In the context of COVID-19, Endo et al Endo et al., (2020) presented the first study (to the authors’ knowledge) estimating the level of overdispersion in COVID-19 transmission by using a mathematical model characterized by $\mathcal{R}_{0}$ and the overdispersion parameter $k$ of an NB branching process. Their results suggest a high degree of individual-level variation in the transmission of COVID-19. Assuming that $\mathcal{R}_{0}$ lies between $2-3$, Endo et al. estimated the overdispersion parameter $k$ to be around $0.1$ (median estimate $0.1$; 95% CrI: $0.05-0.2$ for $\mathcal{R}_{0}=2.5$), suggesting that 80% of secondary transmissions may have been caused by a small fraction ($\sim$10%) of infectious individuals. Other studies have also estimated $k$ values and their results suggest that $k$ lies in the range $0.04-0.3$ Hébert-Dufresne et al., (2020); Liu et al., (2020); Kain et al., (2021). 8 Viral evolution and vaccine escape mutants As of April 2021, several variants of SARS-CoV-2 have been reported globally Abdool Karim and de Oliveira, (2021); Chen et al., (2021); Mercatelli and Giorgi, (2020); Phan, (2020). RNA viruses, such as the coronavirus SARS-CoV-2, naturally mutate over time, so such variants are not unexpected. Moreover, most mutations are irrelevant in an epidemiological context. Nevertheless, of the multitude of variants circulating worldwide, at the time of writing (April 2021), health experts are mainly worried about three variants that have undergone changes to the spike protein and may be more infectious and threatening Fontanet et al., (2021). The first of them is the B.1.1.7 variant, which was first identified in the United Kingdom (UK) and seems to be more transmissible than other variants currently circulating. This variant has also been linked with an increased risk of death, but there is still uncertainty surrounding this result.
The South African variant B.1.351 emerged independently from B.1.1.7 but shares some of its mutations. The third is the Brazilian variant P.1, which contains additional mutations and may be able to overcome the immunity developed after infection by other variants Abdool Karim and de Oliveira, (2021). Current COVID-19 vaccines were developed before the emergence of the above-mentioned variants. Hence, another major concern is the possibility that variants will make vaccines less effective. Some preliminary results by leading vaccine developers suggest that vaccines can still protect against the new variants Fontanet et al., (2021). However, the vaccine-induced immune response may not be as strong or long-lasting Roberts, (2021). Apart from these problems, the initial and limited vaccine supply has raised some discussion on how to distribute COVID-19 vaccines Castro and Singer, (2021); De la Sen and Ibeas, (2021); Goldstein et al., (2021); Iboi et al., (2020); Makhoul et al., (2020); Saldana and Velasco-Hernandez, (2020). Beyond prioritizing healthcare workers and the elderly, the optimal strategy for the general public remains complex. Some countries, including Canada and the UK, had proposed to delay the second dose of the vaccine as an attempt to increase the number of individuals receiving at least one dose and therefore gaining more protection within the population. Delaying the second dose could create conditions that promote the evolution of vaccine escape, namely, viral variants resistant to the antibodies created in response to vaccination Saad-Roy et al., (2021). Escape variants have the potential to create more infections and deaths and to prolong the pandemic. In general, the start of vaccination programs all around the world, whilst the pandemic is still ongoing, may rapidly exert selection pressure on the SARS-CoV-2 virus and lead to mutations that escape the vaccine-induced immune response Gog et al., (2021).
At this time, there is uncertainty around the strength of such selection and the probability of vaccine escape, so more studies are needed in this direction. Recently, Gog et al Gog et al., (2021) studied how considerations of vaccine escape risk might modulate optimal vaccine priority order. They found two main insights: (i) vaccination aimed at reducing prevalence could be more effective at reducing disease than directly vaccinating the vulnerable; (ii) the highest risk for vaccine escape can occur at intermediate levels of vaccination. They also remarked that vaccinating most of the vulnerable and only a few of the low-risk individuals could be extremely risky for vaccine escape. Their results are based on a two-population model with differing vulnerability and contact rates. The existence of geographic regions of the human population where the vaccine is scarce is another concern. It can be argued that regions that do not have access to the vaccine can serve as evolutionary reservoirs from which vaccine escape mutations may arise. This hypothesis has been explored by Gerrish et al Gerrish et al., (2021) using a simple two-patch deterministic epidemic model as follows. They considered COVID-19 epidemics in two neighboring regions or patches. Assuming that only one patch has access to the vaccine, they investigated if the presence of the unvaccinated patch affects the probability of vaccine escape in the vaccinated one. The model follows the SIR structure for both patches augmented to consider vaccine escape mutants. 
The governing equations are as follows: $$\begin{gathered}\dot{S_{j}}=-\sum_{k=1}^{n}\beta_{kj}I_{k}S_{j}-\phi_{j}S_{j},~{}~{}~{}~{}~{}~{}~{}\dot{I_{j}}=\sum_{k=1}^{n}\beta_{kj}I_{k}S_{j}-(\gamma+\mu)I_{j},\\ \dot{V_{j}}=\phi_{j}S_{j},~{}~{}~{}~{}~{}\dot{E_{j}}=\mu I_{j}-\gamma E_{j},~{}~{}~{}~{}~{}\dot{R_{j}}=\gamma(I_{j}+E_{j}),\end{gathered}$$ (38) where $S_{j}$, $V_{j}$, $I_{j}$, $E_{j}$, and $R_{j}$ are the fractions of the population that are susceptible, vaccinated, infected, infected with the escape mutant, and recovered, respectively, in patch $j$; $\beta_{ij}$ is the transmission rate from patch $i$ to patch $j$; $\phi_{j}$ is the vaccination rate in patch $j$; $\gamma$ is the recovery rate; $\mu$ is a composite per-host mutation rate from wildtype virus to escape mutant virus; and $n$ is the number of patches, although only the simple two-patch case ($n=2$) is explored. Escape mutants are not considered explicitly because, under normal conditions, they will always have a strong selective advantage in a vaccinated population. Thus the model focuses on the timing of the first infection event in which a new host is infected with an escape mutant. The time of the first infection event in which a new host in patch $j$ is infected by an escape mutant that arose in patch $i$ will occur at a rate $r_{ij}(t)=\beta_{ij}E_{i}(t)(S_{j}(t)+\sigma V_{j}(t))$, where $\sigma$ allows for varying levels of escape and ranges from $\sigma=0$, no escape, to full escape, $\sigma=1$ Gerrish et al., (2021). It is also assumed that the intra-patch transmission rates are equal, $\beta_{jj}=\beta$, and that the inter-patch transmission rates are equal too, $\beta_{ij}\big{|}_{i\neq j}=\beta_{0}$. Assuming $\beta_{0}<\beta$, i.e., inter-patch transmission is much less frequent than intra-patch transmission, the random variable $T_{f}$ is defined as the time at which the last infected individual recovers.
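A minimal forward-Euler sketch of the two-patch case of this model can be written as follows; the parameter values are illustrative (not fitted), only patch 1 vaccinates, and the incidence term $\sum_{k}\beta_{kj}I_{k}S_{j}$ is written so that each patch's compartments sum to one:

```python
import numpy as np

def simulate_two_patch(beta=0.3, beta0=0.003, gamma=0.1, mu=1e-6,
                       phi=(0.01, 0.0), T=500.0, dt=0.05):
    """Forward-Euler sketch of the two-patch SIR-type model above.

    Illustrative parameters: patch 1 vaccinates at rate phi[0],
    patch 2 not at all; beta0 << beta gives weak inter-patch coupling.
    """
    n = 2
    B = np.array([[beta, beta0], [beta0, beta]])   # B[k, j]: patch k -> patch j
    S = np.array([0.999, 0.999])
    I = np.array([0.001, 0.001])
    V = np.zeros(n); E = np.zeros(n); R = np.zeros(n)
    phi = np.asarray(phi)
    for _ in range(int(T / dt)):
        infection = (B.T @ I) * S       # sum_k beta_{kj} I_k S_j, per patch j
        dS = -infection - phi * S
        dI = infection - (gamma + mu) * I
        dV = phi * S
        dE = mu * I - gamma * E
        dR = gamma * (I + E)
        S += dS * dt; I += dI * dt; V += dV * dt; E += dE * dt; R += dR * dt
    return S, I, V, E, R

S, I, V, E, R = simulate_two_patch()
```

Integrating the state forward and accumulating $E_{j}(t)$ in this way is what allows the escape rate $r_{ij}(t)$ to be evaluated along the trajectory; a stiff-aware ODE solver would be preferable for serious use.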
Gerrish et al Gerrish et al., (2021) showed that the probability of vaccine escape can be orders of magnitude higher if vaccination is fully allocated to one patch and no vaccination is allocated to the other. On the contrary, equal vaccination in the two patches gives the lowest probability of vaccine escape. In other words, an unequal distribution of COVID-19 vaccines may lead to vaccine escape and hence supports arguments for vaccine equity at all scales. 9 Conclusions Once the pandemic potential of SARS-CoV-2 became established in early 2020, there have been many modeling efforts by the scientific and public health communities to improve our understanding of the underlying mechanisms of SARS-CoV-2 transmission and control Thompson et al., (2020). Mathematical and computational models have been key tools in the fight against the COVID-19 pandemic, helping to forecast hospital demand, evaluate the impact of non-pharmaceutical interventions, guide lockdown exit strategies, assess the potential reach of herd-immunity levels, evaluate the optimal allocation of COVID-19 vaccines, and, in general, provide guidance to policy-makers about the development of the epidemic Aguiar et al., (2020); Aleta et al., (2020); Castro and Singer, (2021); Contreras et al., (2020); Eker, (2020); Goldstein et al., (2021); Makhoul et al., (2020); Medina-Ortiz et al., (2020); Mena et al., (2020); Ngonghala et al., (2020); Saldaña et al., (2020); Saldana and Velasco-Hernandez, (2020); Santana-Cibrian et al., (2020); Weitz et al., (2020). In this work, we have discussed the role of the so-called compartmental models in gaining insight into epidemiological aspects of virus transmission and the possibilities for its control. The main objective has been to present an overview of the mathematical methods behind some important aspects of the general theory of infectious disease modeling.
We started by revisiting simple deterministic compartmental models closely connected with the classical work of Kermack & McKendrick and important epidemiological quantities such as the basic and effective reproduction numbers. We then explored the role of the so-called mixing function, which allows introducing heterogeneity in epidemic models and is, therefore, a key component in disease modeling. Next, we investigated a number of compartmental epidemic models to study the early phase of the COVID-19 epidemic outbreak, how awareness-driven behavior modulates the epidemic shape, and the role of asymptomatic carriers in disease transmission. The impact of non-pharmaceutical interventions for COVID-19 control was also discussed. We also revisited important results related to vaccination policies, herd immunity, and the effective reproduction number, together with a simple method to perform real-time estimation of this last quantity. We also discussed the presence of superspreading events in the pandemic, the possibility of viral evolution, and vaccine escape mutants. One important topic that is not discussed in this review is the interaction of COVID-19 with other respiratory illnesses. Co-circulation of SARS-CoV-2 and other endemic respiratory viral infections is a potential reality that can bring more challenges to public health. Currently, there have been some concerns about the interaction between SARS-CoV-2 and influenza viruses, and preliminary results suggest that an initial infection with the influenza A virus strongly enhances the infectivity of SARS-CoV-2 Bai et al., (2021). Some studies have also reported proportions of SARS-CoV-2 co-infections with other respiratory viruses Burrel et al., (2021). Co-infection mechanisms are common in nature, and the previously mentioned studies highlight the risk to public health of influenza virus and SARS-CoV-2 co-infection.
Tailoring epidemic models to incorporate the most important features of COVID-19, such as the existence of asymptomatic carriers, the use of non-pharmaceutical interventions, the recent introduction of vaccinations, and the emergence of new variants of SARS-CoV-2, together with co-circulation with other pathogens, will result in very complex dynamics. Therefore, developing realistic mathematical models to study the co-circulation of SARS-CoV-2 and other respiratory infections presents one important challenge for disease modelers Acuña Zegarra et al., (2021). There are other topics that are not treated in this review, for example, simulation models for the spatial spread of SARS-CoV-2, age and risk structure, the role of human behavior on disease dynamics, parameter estimation techniques to calibrate models with official data, and the exploration of long-term epidemiological outcomes such as the possibility of recurrent seasonal outbreaks, among others. Nevertheless, we believe we have given at least a brief overview of key modeling efforts and current challenges related to COVID-19. The immense number of publications related to the ongoing COVID-19 pandemic has confirmed a fundamental fact: the strategic use of mathematical modeling in public health is a multidisciplinary activity that requires critical judgment in the interpretation of the underlying model’s assumptions and their impact on the projections and outcomes. The use of mathematical models to evaluate contingency plans is essential to overcome a public health emergency. However, a considerable effort is still needed to improve the credibility and usefulness of epidemiological models so that we are better prepared to respond to future epidemics. Acknowledgments We acknowledge support from DGAPA-PAPIIT-UNAM grants IV100220 (proyecto especial COVID-19) and IN115720. References Abdool Karim and de Oliveira, (2021) Abdool Karim, S. S. and de Oliveira, T. (2021).
New sars-cov-2 variants: Clinical, public health, and vaccine implications. New England Journal of Medicine. Acuña Zegarra et al., (2021) Acuña Zegarra, M. A., Lopez, M. N., Cibrian, M. S., Garcia, A. C., and Velasco-Hernandez, J. X. (2021). Co-circulation of sars-cov-2 and influenza under vaccination scenarios. medRxiv, pages 2020–12. Acuña-Zegarra et al., (2020) Acuña-Zegarra, M. A., Santana-Cibrian, M., and Velasco-Hernandez, J. X. (2020). Modeling behavioral change and covid-19 containment in mexico: A trade-off between lockdown and compliance. Mathematical Biosciences, page 108370. Aguiar et al., (2020) Aguiar, M., Ortuondo, E. M., Van-Dierdonck, J. B., Mar, J., and Stollenwerk, N. (2020). Modelling covid 19 in the basque country from introduction to control measure response. Scientific reports, 10(1):1–16. Aleta et al., (2020) Aleta, A., Martin-Corral, D., y Piontti, A. P., Ajelli, M., Litvinova, M., Chinazzi, M., Dean, N. E., Halloran, M. E., Longini Jr, I. M., Merler, S., et al. (2020). Modelling the impact of testing, contact tracing and household quarantine on second waves of covid-19. Nature Human Behaviour, 4(9):964–971. Althouse et al., (2020) Althouse, B. M., Wenger, E. A., Miller, J. C., Scarpino, S. V., Allard, A., Hébert-Dufresne, L., and Hu, H. (2020). Superspreading events in the transmission dynamics of sars-cov-2: Opportunities for interventions and control. PLoS biology, 18(11):e3000897. Angulo et al., (2021) Angulo, M. T., Castaños, F., Moreno-Morton, R., Velasco-Hernandez, J. X., and Moreno, J. A. (2021). A simple criterion to design optimal nonpharmaceutical interventions for epidemic outbreaks. Royal Society Interface. Bai et al., (2021) Bai, L., Zhao, Y., Dong, J., Liang, S., Guo, M., Liu, X., Wang, X., Huang, Z., Sun, X., Zhang, Z., et al. (2021). Coinfection with influenza a virus enhances sars-cov-2 infectivity. Cell research, pages 1–9. Bairagi et al., (2020) Bairagi, A. K., Masud, M., Munir, M. S., Nahid, A.-A., Abedin, S. 
Magnetization plateaus in spin chains: “Haldane gap” for half-integer spins Masaki Oshikawa${}^{1}$, Masanori Yamanaka${}^{3}$, and Ian Affleck${}^{1,2}$ Department of Physics and Astronomy${}^{1}$ and Canadian Institute for Advanced Research${}^{2}$, University of British Columbia, Vancouver, BC, V6T 1Z1, CANADA Department of Applied Physics${}^{3}$, University of Tokyo Hongo, Bunkyo-ku, Tokyo 113 JAPAN (Received by PRL: October 16, 1996) Abstract We discuss zero-temperature quantum spin chains in a uniform magnetic field, with axial symmetry. For integer or half-integer spin, $S$, the magnetization curve can have plateaus, and we argue that the magnetization per site $m$ is topologically quantized as $n(S-m)=\mbox{integer}$ at the plateaus, where $n$ is the period of the groundstate. We also discuss conditions for the presence of the plateau at those quantized values. For $S=3/2$ and $m=1/2$, we study several models and find two distinct types of massive phases at the plateau. One of them is argued to be a “Haldane gap phase” for half-integer $S$. PACS numbers: 75.10.Jm. Preprint: cond-mat/9610168. One-dimensional antiferromagnets are expected not to have long-range magnetic order in general. It was argued by Haldane [1], in 1983, that for integer, but not half-integer, spin $S$ there is a gap to the excited states. In the presence of a magnetic field, the $S=1/2$ Heisenberg antiferromagnetic (AF) chain remains gapless from zero field up to the saturation field, where the groundstate is fully polarized [2]. For integer $S$, the gap persists up to a critical field, equal to the gap, where Bose condensation of magnons occurs [3]. The $S=1$ Heisenberg AF chain is known to be gapless from the critical field up to the saturation field [4]. Recently Hida observed that an $S=1/2$ antiferromagnetic chain with period-$3$ exchange coupling shows a plateau in the magnetization curve at magnetization per site $m=1/6$ ($1/3$ of the full magnetization) [5]. 
Related works on bond-alternating chains have also been reported [6, 7, 8, 9], including experimental observations [11]. In this letter, we consider the zero-temperature behaviour of general quantum spin chains, including chains with periodic structures, in a uniform magnetic field pointing along the direction of the axial symmetry (the $z$-axis), i.e. the total $S^{z}$ is conserved. We argue that, in quantum spin chains, there is a phenomenon which is strikingly analogous to the Quantum Hall Effect: topological quantization of a physical quantity under a changing magnetic field [12] (see Fig. 1). We first consider an extension of the Lieb-Schultz-Mattis (LSM) theorem [13] to the case with an applied field. This indicates that translationally invariant spin chains in an applied field can be gapped without breaking translation symmetry only when the magnetization per spin, $m$, obeys $S-m=$ integer. We expect such gapped phases to correspond to plateaus at these quantized values of $m$. “Fractional quantization” can also occur, if accompanied by (explicit or spontaneous) breaking of the translational symmetry. The generalized LSM theorem does not, however, prove the presence of the plateau. Thus we construct a corresponding argument using Abelian bosonization, which is in complete agreement with the generalized LSM theorem and also gives a condition for the presence of the plateau. As the simplest examples, we study translationally invariant $S=3/2$ chains at $m=1/2$. We present numerical diagonalization and Density Matrix Renormalization Group (DMRG) [14] calculations, which demonstrate the existence of two distinct types of gapped phases for generalized models. They are related to the $S=1$ large-$D$ phase and the $S=1$ Haldane phase, respectively. On the other hand, our study shows that the standard $S=3/2$ Heisenberg model is gapless, with no plateau at $m=1/2$. 
We only give a brief summary of our numerical work in this letter; details and further results, including the effect of axial-symmetry breaking, will be presented in a longer paper [15]. The LSM theorem [13] proves the existence of at least one low-energy, $O(1/L)$ excited state for even length $L$ half-integer $S$ AF chains with periodic boundary conditions and general, translationally invariant Hamiltonians. It is expected that this implies either gapless excitations or spontaneously broken translational symmetry in the $L\to\infty$ limit. The failure of this proof for the integer-$S$ case is necessary for the existence of the Haldane phase, with no broken translational symmetry and a gap. We observe that the original version of this proof also works in a magnetic field, except for integer $S-m$. Thus only in this case is a massive phase without spontaneously broken translational symmetry possible. The proof consists of making a slow rotation on the groundstate, $|\psi\rangle$, assumed to be unique, and observing that the resulting low-energy state is orthogonal to the groundstate. The rotation operator is $U\equiv\exp{[-i\sum_{j=1}^{L}(2\pi j/L)S^{z}_{j}]}$. For any Hamiltonian $H$, including a magnetic field term, with short-range interactions which is invariant under rotation about the $z$-axis, it can be shown that $$\langle\psi|U^{\dagger}HU-H|\psi\rangle=O(\frac{1}{L}).$$ (1) This implies the existence of an excited state with excitation energy of $O(1/L)$, if we can show that $U|\psi\rangle$ is orthogonal to $|\psi\rangle$. To this end, we use the invariance of $H$ under translation by one site, $T$. This operation maps $U$ into: $$U\to TUT^{-1}=Ue^{i2\pi S^{z}_{1}-i(2\pi/L)\sum_{j=1}^{L}S^{z}_{j}}.$$ (2) Namely, the operation of $U$ changes the eigenvalue of $T$ by a factor $e^{i2\pi(S-m)}$, where $m=\sum_{j=1}^{L}S^{z}_{j}/L$. Thus $U|\psi\rangle$ must be orthogonal to $|\psi\rangle$ except when $(S-m)$ is an integer. 
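Both ingredients of the argument can be checked numerically on a small ring. The sketch below (our own illustration, not part of the original analysis) builds the $S=1/2$ Heisenberg chain with periodic boundary conditions for an arbitrarily chosen $L=8$, applies the twist operator $U$ defined above to the groundstate, and verifies that the twisted state is orthogonal to $|\psi\rangle$ (since $S-m=1/2$ is not an integer) while costing only a small energy:

```python
import numpy as np
from functools import reduce

L = 8  # even chain length, periodic boundary conditions
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])

def site_op(op, j):
    """Embed a single-site operator at site j of the L-site chain."""
    ops = [np.eye(2)] * L
    ops[j] = op
    return reduce(np.kron, ops)

Sz = [site_op(sz, j) for j in range(L)]
Sp = [site_op(sp, j) for j in range(L)]

# Heisenberg AF Hamiltonian: sum_j S_j . S_{j+1} with S_{L+1} = S_1
H = sum(Sz[j] @ Sz[(j + 1) % L]
        + 0.5 * (Sp[j] @ Sp[(j + 1) % L].T + Sp[j].T @ Sp[(j + 1) % L])
        for j in range(L))

E, V = np.linalg.eigh(H)
psi = V[:, 0]  # unique groundstate, total S^z = 0 (so m = 0)

# twist operator U = exp(-i sum_j (2 pi j / L) S^z_j), diagonal in this basis
phase = sum((2 * np.pi * (j + 1) / L) * np.diag(Sz[j]) for j in range(L))
U = np.diag(np.exp(-1j * phase))

overlap = abs(psi.conj() @ U @ psi)                 # vanishes: S - m = 1/2
dE = (psi.conj() @ U.conj().T @ H @ (U @ psi)).real - E[0]  # O(1/L) cost
print(overlap, dE)
```

The overlap vanishes because $U|\psi\rangle$ carries momentum shifted by $\pi$ relative to $|\psi\rangle$, exactly as Eq. (2) dictates, while the energy cost of the slow twist stays of order $1/L$.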
We note that this is consistent with previous results for translationally invariant $S=1/2$ and $1$ AF chains, where no gap is found at partial magnetization. However, for higher spin, gapped phases at partial magnetization are possible without breaking the translational symmetry, when $S-m$ is an integer. When $S-m$ is not an integer, there is a low-lying state with energy of $O(1/L)$. This means either a massless phase with a continuum of low-energy states or spontaneous symmetry breaking in the thermodynamic limit. Following the above proof, when $S-m=p/q$ where $p$ and $q$ are coprime, the states $U^{k}|\psi\rangle$ for $k=0,1,\ldots,q-1$ have different eigenvalues of $T$. Thus these $q$ states have low energy of $O(1/L)$. If these are related to a spontaneous breaking of the symmetry, the ground states in the thermodynamic limit should be $q$-fold degenerate. Since they have $q$ different eigenvalues of $T$, they can be related to a spontaneous breaking of the translation symmetry to a period of $q$ sites in the thermodynamic limit. It is natural to expect a gap and a plateau in this case. As in the case of the Quantum Hall Effect, “fractional quantization” is therefore possible, accompanying, in the present case, the spontaneous breaking of the translation symmetry (see Fig. 1). We may compare this to a hidden symmetry breaking in the Fractional Quantum Hall Effect [16]. Possibly there are low-energy states other than those constructed above. In such a case, we can again construct a set of $q$ low-energy states by operation of $U$. Thus in general the period of the ground state can be an integral multiple of $q$. A simple example of such a case is known for $S=1$ and $m=0$. While a gap without spontaneous symmetry breaking is possible, as in the Haldane phase, spontaneous dimerization occurs in the bilinear-biquadratic Heisenberg model for a range of parameters [21]. 
Our generalization of the LSM theorem is easily extended to Hamiltonians with spatial structures: bond-alternating chains [5, 8], spin-alternating chains [10], spin ladders, etc. For example, Hida’s model [5] is only invariant under a three-site translation $T^{3}$; a massive phase without spontaneous symmetry breaking is possible for $3(S-m)=\mbox{integer}$. Thus a quantized plateau is possible at $m=1/6$, as he observed. In general, the quantization condition is given by $S_{u}-m_{u}=\mbox{integer}$, where $S_{u}$ and $m_{u}$ are respectively the sums of $S$ and $m$ over all sites in the unit period of the groundstate. The period of the groundstate is determined by the explicit spatial structure of the Hamiltonian, and also by spontaneous symmetry breaking. The low-energy state $U|\psi\rangle$ appearing in the LSM theorem has the same total magnetization as the groundstate. It does not directly contradict the existence of a plateau, which is determined by the gap to states with other total magnetizations. However, we expect that, in general, a gapless phase has low-energy states in both the fixed and different magnetization sectors, as can be seen in the following Abelian bosonization approach. Schulz [17] explained the difference between integer and half-integer spin by Abelian bosonization. We show that his result can be understood more simply as a consequence of symmetries. At the same time, we generalize the discussion to the case with a non-vanishing magnetization. Following Schulz [17], we start from Abelian bosonization of $2S$ spin-$1/2$ chains and then couple them to form a spin-$S$ chain. Firstly, each spin-$1/2$ chain is fermionized by a Jordan-Wigner transformation. The $z$ component of each spin-$1/2$ is related to the fermion number as $\sigma^{z}_{n}=1-2\psi^{\dagger}_{n}\psi_{n}$. Then the low-energy excitations are treated by continuous fermion fields. Let us denote the lattice spacing as $a$ and the spatial location as $x=na$. 
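As a concrete reading of the quantization condition, one can enumerate the candidate plateau magnetizations $0\leq m\leq S$ allowed by $n(S-m)=\mbox{integer}$ for a groundstate of period $n$. The helper below is our own illustration, not taken from the paper:

```python
from fractions import Fraction

def plateau_values(S, n):
    """All m with 0 <= m <= S such that n*(S - m) is an integer.

    S: spin (as a Fraction), n: period of the groundstate (int)."""
    S = Fraction(S)
    vals, k = [], 0
    while S - Fraction(k, n) >= 0:
        vals.append(S - Fraction(k, n))
        k += 1
    return vals

# translationally invariant S=3/2 chain (n=1): only m=3/2 (saturation) and m=1/2
print(plateau_values(Fraction(3, 2), 1))
# Hida's period-3 S=1/2 chain: the observed m=1/6 plateau is among the allowed values
print(plateau_values(Fraction(1, 2), 3))
```

This reproduces the two cases discussed in the text: the $m=1/2$ plateau studied below for $S=3/2$, and the $m=1/6$ plateau of Hida’s model.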
The continuous fermion fields $\psi_{R}$ and $\psi_{L}$ are defined by $$\psi^{j}_{n}\sim e^{ik_{F}x}\psi^{j}_{R}(x)+e^{-ik_{F}x}\psi^{j}_{L}(x),$$ (3) where $j=1,\cdots,2S$ is the “flavor” index distinguishing the $2S$ spin-$1/2$’s. They are bosonized in the standard way: $\psi^{j}_{R}=e^{i\varphi^{j}_{R}/R}$ and $\psi^{j}_{L}=e^{-i\varphi^{j}_{L}/R}$, where $\varphi^{j}_{R}$ and $\varphi^{j}_{L}$ are chiral bosons and $R$ is the compactification radius of the boson. $R$ will be renormalized by interactions [18], and will eventually depend on the model and on the magnetization $m$. (For an isotropic model, $R$ is fixed by the symmetry at $m=0$, but the magnetic field breaks the symmetry and thus $R$ will depend on $m$.) We define the non-chiral bosonic field $\varphi^{j}=\varphi^{j}_{L}+\varphi^{j}_{R}$ and its dual $\tilde{\varphi}^{j}=\varphi^{j}_{L}-\varphi^{j}_{R}$. Interactions among the bosonic fields are also generated during the mapping from the original spin problem. In general, we expect that any interaction will be generated if not forbidden by a symmetry. Thus we analyze the symmetries of the system, following the treatment of spin-$1/2$ chains in Ref. [18]. The original problem has a $U(1)$ symmetry: rotational invariance about the $z$-axis. Rotation of each spin-$1/2$ is given by the phase transformation $\psi^{j}_{L,R}\rightarrow e^{i\theta}\psi^{j}_{L,R}$ of the corresponding fermion. In bosonic language, this corresponds to a shift of the dual field $\tilde{\varphi}^{j}\rightarrow\tilde{\varphi}^{j}+\mbox{const.}$ Since we have coupled $2S$ spin-$1/2$ chains into a spin-$S$ chain, only the simultaneous rotation of the $2S$ spin-$1/2$’s is a symmetry of the system. If we define a new bosonic field $\phi=\sum_{j}\varphi^{j}$ (and similarly $\tilde{\phi}=\sum_{j}\tilde{\varphi}^{j}$), the $U(1)$ symmetry is written as $\tilde{\phi}\rightarrow\tilde{\phi}+\mbox{const}$. Thus all interactions of the form $e^{\pm 2n\pi iR\tilde{\phi}}$ are prohibited by the symmetry. 
The remaining $2S-1$ fields, defined by linear combinations of the original $\tilde{\varphi}^{j}$ fields, are not protected by the symmetry. Thus all fields except $\phi$ are expected to become massive through interactions, as Schulz observed by an explicit calculation. The remaining field $\phi$ is also subject to interactions of the type $e^{\pm in\phi/R}$. Let us consider another symmetry of the system: one-site translation. By the definition (3), it corresponds to a transformation of the continuum fields $\psi^{j}_{R}\rightarrow e^{ik_{F}a}\psi^{j}_{R}$ and $\psi^{j}_{L}\rightarrow e^{-ik_{F}a}\psi^{j}_{L}$. Again, only the simultaneous translation of all flavors is a symmetry of the system. Thus the one-site translation $T$ is written as $\phi\rightarrow\phi+4S(k_{F}a)R$ in the bosonic language. Since all of the $2S$ flavors are equivalent, the magnetization should be equally distributed among them. Thus the Fermi momentum $k_{F}$ is determined as $k_{F}a=(S-m)\pi/(2S)$. As a consequence, the one-site translation $T$ is given by $$\phi\rightarrow\phi+2(S-m)\pi R.$$ (4) Thus the leading operator $\cos{(\phi/R)}$ is permitted only if $S-m$ is an integer. For $m$ satisfying the quantization condition, the leading operator $\cos{(\phi/R)}$ should be relevant in order to produce a gap. Thus $R$ must be larger than $R_{c}=1/\sqrt{8\pi}$ for the presence of the plateau. If $S-m=p/q$ where $p$ and $q$ are coprime, the operator $\cos{(q\phi/R)}$ is permitted. It can be relevant if $R\geq q/\sqrt{8\pi}$ (a severe condition for large $q$); if it is, a groundstate in the thermodynamic limit corresponds to a potential minimum of $\cos{(q\phi/R)}$. There are $q$ such groundstates, and they are mapped to each other by the translation operators $T^{k}$ ($k<q$). Thus the ground states spontaneously break the translation symmetry $q$-fold. 
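The step leading to Eq. (4) is a one-line substitution; writing it out with the quantities defined above,
$$\phi \;\longrightarrow\; \phi + 4S\,(k_{F}a)\,R
   \;=\; \phi + 4S\cdot\frac{(S-m)\pi}{2S}\cdot R
   \;=\; \phi + 2(S-m)\pi R .$$
Under this shift the vertex operators transform as
$$\cos\!\left(\frac{q\phi}{R}\right) \;\longrightarrow\; \cos\!\left(\frac{q\phi}{R} + 2\pi q(S-m)\right),$$
which is invariant under $T$ precisely when $q(S-m)$ is an integer; for $q=1$ this reproduces the condition $S-m\in\mathbb{Z}$ for the leading operator $\cos{(\phi/R)}$, and for $S-m=p/q$ it singles out $\cos{(q\phi/R)}$ as the lowest permitted operator.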
These results are in agreement with the generalized LSM theorem, and also give conditions for a finite plateau at the quantized values. Our bosonization argument is also readily generalized to models with spatial structures. Our picture is consistent with Okamoto’s analysis [6] of Hida’s plateau [5]. For $S=1$ AF chains, Tonegawa et al. [8] obtained an $m=1/2$ plateau as soon as they introduced a small bond-alternation. In our approach, the leading operator is expected to appear as soon as the translational symmetry is broken. Thus we expect a plateau for any finite amount of bond-alternation, if the radius exceeds the critical value, in agreement with Ref. [8]. This is also in agreement with an explicit bosonization calculation by Totsuka [9] for $S=1$ bond-alternating chains. Now let us discuss some examples of translationally invariant $S=3/2$ chains. It is interesting, from both an experimental and a conceptual point of view, to add an easy-plane crystal field term: $$H=\sum_{j}\vec{S}_{j}\cdot\vec{S}_{j+1}+D(S^{z}_{j})^{2}-hS^{z}_{j}.$$ (5) Clearly, if $D\gg 1$, all the spins are first fixed to $S^{z}=1/2$ with increasing field before any of the spins go into the $S^{z}=3/2$ state, corresponding to a gapped $m=1/2$ plateau. The presence of a finite gap and plateau is proved rigorously for a sufficiently large but finite $D$ [19], by applying the general theorem of Ref. [20]. This situation is reminiscent of that which occurs in the large-$D$ phase of a zero-field $S=1$ chain. Numerically, we found a finite $m=1/2$ plateau at least for $D\geq 2$. Another kind of trial groundstate for an $S=3/2$ chain, corresponding to an $m=1/2$ plateau, is shown in Fig. 2 in the valence bond notation [21]. Regarding each $S=3/2$ operator as a symmetrized product of three $S=1/2$’s, one $S=1/2$ is polarized by the applied field at each site while the other two form a valence-bond-solid (VBS) groundstate, as occurs for an $S=1$ chain in zero field. In Ref. 
[22], this partially magnetized VBS state was proposed and its relevance to the magnetization process was suggested. A generalization of this kind of VBS-type state and further analysis were later given in Ref. [23]. (See also [10].) Clearly this sort of VBS-type state exists for all $S$ and $m$ such that $S-m$ is an integer. We can construct a model that realizes the $S=3/2$ VBS-type state in Fig. 2 as a ground state: $$H=\sum_{j}P^{(j,j+1)}_{3}+\alpha\vec{S}_{j}\cdot\vec{S}_{j+1}-hS^{z}_{j},$$ (6) where $P^{(j,j+1)}_{3}$ is the projection operator onto the space with total spin $3$ for sites $j$ and $j+1$. At $\alpha=0$, any state constructed with one valence bond between neighboring sites is a groundstate. The ground state is thus infinitely degenerate due to the “free” spin-$1/2$ at each site. Applying an infinitesimal magnetic field lifts the degeneracy, and the ground state is the above-mentioned VBS-type state (Fig. 2). Thus the model with $\alpha=0$ has an $m=1/2$ plateau starting from zero magnetic field. Turning on the Heisenberg term $\alpha$ lifts the degeneracy at $h=0$, and a finite magnetic field is required to reach $m=1/2$. For small values of $\alpha$, however, we might still expect a finite $m=1/2$ plateau. We studied this model with periodic boundary conditions by numerical diagonalization for up to $12$ sites and found that the $m=1/2$ plateau exists at least for $\alpha\leq 0.06$. In contrast to the plateau at large positive $D$, which is related to the large-$D$ phase in $S=1$ chains, it is natural to relate this state to the $S=1$ Haldane phase. For $S=1$, the Haldane phase is known to be distinct from the large-$D$ phase; these two massive phases are separated by a critical point $D_{c}$, where the gap vanishes [17, 24]. The Haldane phase is characterized by the existence of a topological long-range order [25], and gapless edge excitations under open boundary conditions [26].
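As a sanity check of this construction, one can build $P^{(j,j+1)}_{3}$ numerically as the spectral projector onto total pair spin $3$ and verify that, at $\alpha=0$ and $h=0$, the Hamiltonian (6) on a small ring has a degenerate zero-energy ground space that contains a state with total $S^{z}=L/2$, i.e. $m=1/2$. This is our own illustrative code (not the authors'), and it only probes a tiny $L=4$ ring:

```python
import numpy as np

def spin_matrices(s):
    """S^z, S^+, S^- for spin s in the basis |s>, |s-1>, ..., |-s>."""
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)
    sz = np.diag(m)
    sp = np.zeros((d, d))
    sp[np.arange(d - 1), np.arange(1, d)] = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))
    return sz, sp, sp.T

def site_op(op, j, L, d):
    """Embed a single-site operator at site j of an L-site chain."""
    mats = [np.eye(d)] * L
    mats[j] = op
    out = mats[0]
    for mat in mats[1:]:
        out = np.kron(out, mat)
    return out

def projector_hamiltonian(L, s=1.5, stot=3):
    """H_0 = sum_j P^{(j,j+1)}_{stot} on a periodic ring.  Each projector is
    written as a polynomial in S_j . S_{j+1}, whose eigenvalues are
    x(s') = [s'(s'+1) - 2 s(s+1)]/2 for pair spin s'."""
    sz, sp, sm = spin_matrices(s)
    d = sz.shape[0]
    dim = d ** L
    x = {k: (k * (k + 1) - 2 * s * (s + 1)) / 2
         for k in range(int(round(2 * s)) + 1)}
    H = np.zeros((dim, dim))
    for j in range(L):
        k2 = (j + 1) % L  # periodic boundary
        dot = (site_op(sz, j, L, d) @ site_op(sz, k2, L, d)
               + 0.5 * (site_op(sp, j, L, d) @ site_op(sm, k2, L, d)
                        + site_op(sm, j, L, d) @ site_op(sp, k2, L, d)))
        P = np.eye(dim)
        for k, xk in x.items():
            if k != stot:
                P = P @ (dot - xk * np.eye(dim)) / (x[stot] - xk)
        H += P
    sz_tot = np.diag(sum(site_op(sz, j, L, d) for j in range(L)))  # diagonal
    return H, sz_tot

L = 4
H0, sz_tot = projector_hamiltonian(L)
evals = np.linalg.eigvalsh(H0)
```

The zero-energy space is spanned by states with at least one valence bond per link, so it is degenerate and contains the partially magnetized VBS state with one polarized spin-$1/2$ per site.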
These are understood as consequences of a hidden symmetry breaking [20]. One might suspect that the two types of $S=3/2$ massive phases at the $m=1/2$ plateaus discussed above correspond to distinct phases. If they are distinct, there should be a phase transition between them. In terms of Abelian bosonization, this phase transition may be understood as the vanishing of the coefficient of the allowed relevant operator $\cos{(\phi/R)}$, as in the case of $S=1$ [17]. We numerically measured the gap (width of the plateau) for the model: $$H=\sum_{j}\alpha\vec{S}_{j}\cdot\vec{S}_{j+1}+D(S^{z}_{j})^{2}+P^{(j,j+1)}_{3}-hS^{z}_{j},$$ (7) which interpolates between (5) and (6). For $\alpha=0.03$ (fixed), we find the plateau vanishes at $D\sim 4.5$, separating the “Haldane gap” type plateau and the “large-$D$” type plateau. Moreover, we compared the spectrum at $\alpha=0.03$ and $D=0$ between open and periodic boundary conditions, and found evidence for edge states. In the large-$D$ region, there are no such edge states. These indicate that the “Haldane phase” at the $m=1/2$ plateau, which accompanies the edge states, is distinct from the “large-$D$ phase”. We also numerically examined the standard $S=3/2$ Heisenberg AF chain with open boundary conditions, by DMRG up to $100$ sites. We did not find an $m=1/2$ plateau in this case, in agreement with Refs. [5, 6, 7]. We emphasize that this absence is not a priori obvious. As we have shown, in terms of the free boson theory, the plateau would be present if the compactification radius $R$ were greater than the critical value $R_{c}$ and the coefficient of the most relevant operator $\cos{(\phi/R)}$ were non-vanishing. We have determined the compactification radius from the open-boundary-condition spectrum obtained by DMRG as $R=0.95R_{c}<R_{c}$. We note that $R$ is rather close to the critical value, so possibly a plateau could be realized by a small perturbation of the standard Heisenberg Hamiltonian [15].
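Plateau widths of this kind can be computed, for small systems, by exact diagonalization: the width at $m=1/2$ is the difference of magnetization-step fields $[E_{0}(M+1)-E_{0}(M)]-[E_{0}(M)-E_{0}(M-1)]$, where $E_{0}(M)$ is the lowest energy in the sector of total $S^{z}=M$ and $M=L/2$. The sketch below does this for the Hamiltonian (5) on a tiny $L=4$ ring; it is our own illustrative code, far smaller than the systems used for the quoted numbers:

```python
import numpy as np

def spin_matrices(s):
    """S^z, S^+, S^- for spin s in the basis |s>, |s-1>, ..., |-s>."""
    d = int(round(2 * s)) + 1
    m = s - np.arange(d)
    sz = np.diag(m)
    sp = np.zeros((d, d))
    sp[np.arange(d - 1), np.arange(1, d)] = np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1))
    return sz, sp, sp.T

def site_op(op, j, L, d):
    """Embed a single-site operator at site j of an L-site chain."""
    mats = [np.eye(d)] * L
    mats[j] = op
    out = mats[0]
    for mat in mats[1:]:
        out = np.kron(out, mat)
    return out

def heisenberg_D(L, s, D):
    """H = sum_j [ S_j . S_{j+1} + D (S^z_j)^2 ], periodic boundary conditions."""
    sz, sp, sm = spin_matrices(s)
    d = sz.shape[0]
    H = np.zeros((d ** L, d ** L))
    for j in range(L):
        k = (j + 1) % L
        H += site_op(sz, j, L, d) @ site_op(sz, k, L, d)
        H += 0.5 * (site_op(sp, j, L, d) @ site_op(sm, k, L, d)
                    + site_op(sm, j, L, d) @ site_op(sp, k, L, d))
        H += D * site_op(sz @ sz, j, L, d)
    sz_tot = np.diag(sum(site_op(sz, j, L, d) for j in range(L)))  # diagonal
    return H, sz_tot

def plateau_width(L=4, s=1.5, D=5.0):
    """[E0(M+1) - E0(M)] - [E0(M) - E0(M-1)] with M = L/2, i.e. m = 1/2."""
    H, sz_tot = heisenberg_D(L, s, D)
    two_sz = np.rint(2 * sz_tot).astype(int)
    def e0(two_m):
        idx = np.where(two_sz == two_m)[0]
        return np.linalg.eigvalsh(H[np.ix_(idx, idx)])[0]
    return (e0(L + 2) - e0(L)) - (e0(L) - e0(L - 2))
```

On such a small ring the width at large $D$ should come out of order $2D$ (the crystal-field cost of promoting one site from $S^{z}=1/2$ to $3/2$), while at $D=0$ only a small finite-size second difference remains.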
On the other hand, while the radius is not completely well-defined at the massive $D=2$ plateau, a similar analysis gives the estimate $R\sim 1.2R_{c}>R_{c}$. This result is consistent with the presence of the plateau. The plateaus that we have found are closely related [15] to Mott insulating (or charge density wave) phases in models of interacting fermions or bosons [27, 28]. Similarly to those cases [29, 15], we have found that the singular part of the magnetization curve near a plateau is proportional to $\sqrt{|h-h_{c}|}$, where $h_{c}$ is the critical field at (either) edge of a plateau, at least for the examples we have studied. Our approach will also give new insights into models of interacting fermions or bosons [15]. It is a pleasure to thank Hal Tasaki for his collaboration in the early stage of this work and many stimulating discussions. We also thank Y. Narumi, S. Sachdev and T. Tonegawa for useful correspondence. The numerical work was partly based on the program packages KOBEPACK/1 by T. Tonegawa et al. and TITPACK ver. 2 by H. Nishimori. The computation in this work was done partially at the Supercomputer Center, ISSP, Univ. of Tokyo. This work is partly supported by NSERC. M.O. and M.Y. are supported by a UBC Killam Memorial Fellowship and by JSPS Research Fellowships for Young Scientists, respectively. Note added (January 1997): After the submission of the present letter, we received a preprint by Totsuka, which is a substantial enhancement of his presentation [9] at the Japanese Physical Society meeting, Fall 1996, and contains part of our general argument using bosonization. References [1] F. D. M. Haldane, Phys. Lett. 93A, 464 (1983). [2] R. B. Griffiths, Phys. Rev. 133, A768 (1964). [3] I. Affleck, Phys. Rev. B 43, 3215 (1991). [4] T. Sakai and M. Takahashi, Phys. Rev. B 43, 13383 (1991). [5] K. Hida, J. Phys. Soc. Jpn. 63, 2359 (1994). [6] K. Okamoto, Solid State Commun. 98, 245 (1996). [7] M. Roji and S. Miyashita, J. Phys. Soc. Jpn. 65, 1994 (1996).
[8] T. Tonegawa, T. Nakao, and M. Kaburagi, J. Phys. Soc. Jpn. 65, 3317 (1996). [9] K. Totsuka, JPS meeting Fall 1996, at Yamaguchi Univ. [10] S. K. Pati, S. Ramasesha, and D. Sen, preprint cond-mat/9610080; A. K. Kolezhuk, H.-J. Mikeska, and S. Yamamoto, preprint cond-mat/9610097. [11] Y. Narumi, K. Kindo, and M. Hagiwara, JPS meeting Fall 1996 at Yamaguchi Univ., and private communications. [12] R. B. Laughlin, Phys. Rev. B 23, 5632 (1981); D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, Phys. Rev. Lett. 49, 405 (1982). [13] E. H. Lieb, T. Schultz, and D. J. Mattis, Ann. Phys. (N.Y.) 16, 407 (1961); I. Affleck and E. H. Lieb, Lett. Math. Phys. 12, 57 (1986). [14] S. R. White, Phys. Rev. B 48, 10345 (1993). [15] M. Oshikawa, M. Yamanaka, and I. Affleck, in preparation. [16] R. Tao and Y.-S. Wu, Phys. Rev. B 30, 1097 (1984). [17] H. J. Schulz, Phys. Rev. B 34, 6372 (1986). [18] I. Affleck, in Fields, Strings and Critical Phenomena, Les Houches, Session XLIX, edited by E. Brezin and J. Zinn-Justin (North-Holland, Amsterdam, 1988). [19] H. Tasaki, private communications. [20] T. Kennedy and H. Tasaki, Commun. Math. Phys. 147, 431 (1992). [21] I. Affleck, T. Kennedy, E. Lieb, and H. Tasaki, Commun. Math. Phys. 115, 477 (1988). [22] M. Oshikawa, J. Phys. Condens. Matter 4, 7469 (1992). [23] H. Niggemann and J. Zittartz, Z. Phys. B 101, 289 (1996). [24] R. Botet, R. Jullien and M. Kolb, Phys. Rev. B 28, 3914 (1983). [25] M. den Nijs and K. Rommelse, Phys. Rev. B 40, 4709 (1989); S. M. Girvin and D. P. Arovas, Phys. Scr. T 27, 156 (1989); Y. Hatsugai and M. Kohmoto, Phys. Rev. B 44, 11789 (1991). [26] T. Kennedy, J. Phys. Condens. Matter 2, 5737 (1990). [27] T. Giamarchi, to be published in the proceedings of the SCES96 conference (cond-mat/9609114) and references therein. [28] P. Niyaz, R. T. Scalettar, C. Y. Fong and G. G. Batrouni, Phys. Rev. B 50, 362 (1994) and references therein. [29] H. J. Schulz, Phys. Rev. B 22, 5274 (1980).
Abstract The measurement of the polarized gluon distribution function $\Delta G/G$ using current-target correlations in the Breit frame of deep inelastic scattering is proposed. The approach is illustrated using a Monte Carlo simulation of polarized $ep$-collisions for HERA energies. ANL-HEP-PR-99-89 August 1999 Current-Target Correlations as a Probe of $\Delta G/G$ in Polarized Deep Inelastic Scattering I. V. Akushevich${}^{a}$ and S. V. Chekanov${}^{b}$ (on leave from the Institute of Physics, AS of Belarus, Skaryna av. 70, Minsk 220072, Belarus). ${}^{a}$ National Center of Particle and High Energy Physics, Bogdanovich str. 153, 220040 Minsk, Belarus. Email: [email protected] ${}^{b}$ Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439, USA. Email: [email protected] 1 Introduction A direct determination of the polarized gluon distribution is of importance for understanding the spin properties of the nucleon. Recent experimental measurements of spin-dependent structure functions [1, 2, 3] have shown that the valence quarks account for only a small fraction of the nucleon spin. One of the possible explanations for this observation, known as the “spin crisis”, is to assume a large contribution from the gluon spin. There exist many theoretical models which explain this phenomenon, but there is no experimental evidence to favor any one of them. A direct way to solve this puzzle is to measure the gluon density in polarized lepton-nucleon scattering. One of the suggested methods is based on the detection of correlated high-$p_{t}$ hadron pairs in polarized deep inelastic scattering (DIS) [4]. This measurement has already been performed by the HERMES Collaboration [5]. Another possibility, which is planned to be studied at the COMPASS experiment, is to analyze events with open/closed charm [6]. At the HERA $ep$ collider, after a possible upgrade to polarized beams, the polarized gluon density can also be measured from dijet events [7].
In this paper a possibility to measure $\Delta G/G$ from the multiplicity correlations in neutral-current DIS is discussed. The proposed method is based on the Breit frame [8]. In this frame, in the quark-parton model (QPM), the incident quark carries momentum $Q/2$ in the positive $Z$-direction (as usual, $Q^{2}=-q^{2}=-(k-k^{\prime})^{2}$, where $k$ and $k^{\prime}$ denote the 4-momenta of the initial and final-state lepton, respectively, and the Bjorken scaling variable $x$ is defined as $Q^{2}/(2P\cdot q)$, where $P$ is the 4-momentum of the proton), and the outgoing struck quark carries the same momentum in the negative $Z$-direction. The phase space of the event can be divided into two regions (see Fig. 1). All particles with negative $p_{Z}^{\mathrm{Breit}}$ components of momenta form the current region. In the QPM, all these particles are produced from hadronization of the struck quark. Particles with positive $p_{Z}^{\mathrm{Breit}}$ are assigned to the target region, which is associated with the proton remnants. For the QPM in the Breit frame, the phase-space separation between the particles from the struck quark and the proton remnants is maximal. Therefore, it is expected that there are no correlations between the current-region particles and particles in the target region of the Breit frame [9]. Such a separation does not exist when the leading-order QCD processes, known as Boson-Gluon Fusion (BGF) and QCD-Compton (QCDC) scattering, are involved [10]. The kinematics of these processes lead to the current-target anti-correlations predicted in [9] and experimentally measured by the ZEUS Collaboration [11]. The magnitude of these correlations at small $x$ is mainly determined by the BGF processes. 2 Current-Target Correlations. Unpolarized Case Recently, it was noticed that the BGF events in non-polarized DIS can be studied without involving jet algorithms [9].
To this end, one can measure a linear interdependence between the current- and target-region multiplicities in the Breit frame. The approach involves the measurement of particle multiplicities in large phase-space regions, rather than clustering separate particles at small resolution scales. Therefore, high-order QCD and hadronization effects are expected to be smaller than for dijet studies that use clustering algorithms with specific resolution scales. In addition, the method does not depend on the jet transverse energy $E_{T}$ used in jet reconstruction and is well suited for low-$Q^{2}$ regions where the jet algorithms are less reliable. The correlation between the current- and target-region multiplicities can be measured with the covariance $$\mathrm{cov}=\langle n_{\mathrm{c}}\,n_{\mathrm{t}}\rangle-\langle n_{\mathrm{c}}\rangle\langle n_{\mathrm{t}}\rangle,$$ (1) where $n_{\mathrm{c}}$ ($n_{\mathrm{t}}$) is the number of final-state particles in the current (target) region and $\langle\ldots\rangle$ is the averaging over all events. If $h$ particles from the hard QCD processes are emitted in the target region, one can rewrite (1) as $$\mathrm{cov}=\langle(\tilde{n}_{\mathrm{c}}-h)\,(\tilde{n}_{\mathrm{t}}+h)\rangle-\langle\tilde{n}_{\mathrm{c}}-h\rangle\langle\tilde{n}_{\mathrm{t}}+h\rangle.$$ (2) Here $\tilde{n}_{\mathrm{c}}$ is the total number of particles coming from zero- and first-order QCD processes ($h\leq\tilde{n}_{\mathrm{c}}$) and $\tilde{n}_{\mathrm{t}}$ is the multiplicity of the proton remnants, not counting the particles due to the hard QCD processes. From (2) one obtains $$\mathrm{cov}=\langle\tilde{n}_{\mathrm{c}}\,h\rangle-\langle\tilde{n}_{\mathrm{c}}\rangle\langle h\rangle-\langle h^{2}\rangle+\langle h\rangle^{2}.$$ (3) In this expression, the contribution from the remnant multiplicity $\tilde{n}_{\mathrm{t}}$ cancels, since we consider the case when $\tilde{n}_{\mathrm{t}}$ is independent of $\tilde{n}_{\mathrm{c}}$ and $h$.
This key assumption means that the only important effect leading to a correlation between $n_{\mathrm{c}}$ and $n_{\mathrm{t}}$ is the hard QCD radiation. The validity of this assumption has been tested in [9] for a Monte Carlo model based on the parton shower, describing higher than first-order QCD effects, and the LUND string model for hadronization. At small $x$, the BGF is the dominant first-order QCD process. Let us define the BGF rate as $$R_{\mathrm{BGF}}=\frac{N_{\mathrm{BGF}}}{N_{\mathrm{ev}}},$$ (4) where $N_{\mathrm{BGF}}$ is the number of BGF events and $N_{\mathrm{ev}}$ is the total number of events, in the limit $N_{\mathrm{ev}}\to\infty$. According to the assumption discussed above, $h=0$ for the QPM. For the BGF, however, one or two quarks are emitted in the target hemisphere, so that their parton radiation gives $h>0$. The averaging in (1-3) is performed over all possible DIS events, even though $\mathrm{cov}\neq 0$ only for the BGF events. Therefore, one can convert to averages over the relevant BGF events using the relation $$\langle f(h)\rangle=R_{\mathrm{BGF}}\>\langle f(h)\rangle_{\mathrm{BGF}},$$ (5) where $f(h)$ is any function of $h$ such that $f(h=0)=0$ and $\langle\ldots\rangle_{\mathrm{BGF}}$ is the average over the BGF events. Using this relation, one has $$\mathrm{cov}=\left(\langle\tilde{n}_{\mathrm{c}}\,h\rangle_{\mathrm{BGF}}-\langle\tilde{n}_{\mathrm{c}}\rangle\langle h\rangle_{\mathrm{BGF}}-\langle h^{2}\rangle_{\mathrm{BGF}}+R_{\mathrm{BGF}}\langle h\rangle^{2}_{\mathrm{BGF}}\right)\>R_{\mathrm{BGF}}.$$ (6) The last term can be neglected, since its contribution is small.
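The bookkeeping behind Eqs. (1)-(6) can be checked with a toy Monte Carlo that implements exactly the assumptions of the derivation: independent current and remnant multiplicities, with particle migration ($h>0$) occurring only in a fraction $R_{\mathrm{BGF}}$ of events. The model below uses our own illustrative parameters and is a statistical sketch, not a physics simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_sample(n_ev, r_bgf=0.2, mu_c=10.0, mu_t=8.0, p_mig=0.3):
    """Return (n_c, n_t, nc0, h, is_bgf) for n_ev toy events."""
    nc0 = rng.poisson(mu_c, n_ev)                # \tilde n_c
    nt0 = rng.poisson(mu_t, n_ev)                # \tilde n_t, independent of nc0
    is_bgf = rng.random(n_ev) < r_bgf
    # in "BGF" events each current particle migrates with probability p_mig
    h = np.where(is_bgf, rng.binomial(nc0, p_mig), 0)
    return nc0 - h, nt0 + h, nc0, h, is_bgf

n_c, n_t, nc0, h, is_bgf = toy_sample(500_000)

# Eq. (1): covariance of the observed multiplicities
cov = np.mean(n_c * n_t) - np.mean(n_c) * np.mean(n_t)

# Eq. (6): the same covariance from BGF-event averages alone
R = np.mean(is_bgf)
cov6 = R * (np.mean(nc0[is_bgf] * h[is_bgf])
            - np.mean(nc0) * np.mean(h[is_bgf])
            - np.mean(h[is_bgf] ** 2)
            + R * np.mean(h[is_bgf]) ** 2)
```

Both numbers come out negative (an anti-correlation) and agree within statistics, illustrating that the migration term alone drives the covariance.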
Therefore, in the linear approximation, the covariance can be expressed as $$\mathrm{cov}\simeq-A(Q^{2},x)\>R_{\mathrm{BGF}}$$ (7) with $$A(Q^{2},x)=\langle\tilde{n}_{\mathrm{c}}\rangle\langle h\rangle_{\mathrm{BGF}}+\langle h^{2}\rangle_{\mathrm{BGF}}-\langle\tilde{n}_{\mathrm{c}}\,h\rangle_{\mathrm{BGF}}.$$ (8) A more detailed form of this expression has been obtained in [9]. The function $A(Q^{2},x)$ depends on: 1) the number of final-state hadrons in the jets initiated by the quarks in the BGF processes; 2) the kinematics of the BGF jets in the Breit frame, which depend on $Q^{2}$ and $x$. Using the LEPTO Monte Carlo [12] with the tuning described in [13] and the GRV94 HO [14] parameterization of the parton distribution functions, we have found that for $5<Q^{2}<50$ GeV${}^{2}$, the fraction of BGF events with both quarks moving to the target region increases from $52\%$ at $\langle x\rangle\simeq 0.2\cdot 10^{-1}$ to $61\%$ at $\langle x\rangle=0.5\cdot 10^{-3}$ ($100\%$ corresponds to all possible jet configurations of the BGF events in the Breit frame). Therefore, $A(Q^{2},x)\simeq A(Q^{2})$ in (7) is a good approximation. Note that the $x$-dependence of the $\mathrm{cov}$ due to the BGF kinematics is rather small compared to the $x$-dependence of the BGF rate itself: $R_{\mathrm{BGF}}$ increases from $7\%$ at $\langle x\rangle\simeq 0.2\cdot 10^{-1}$ to $21\%$ at $\langle x\rangle=0.5\cdot 10^{-3}$. Before going to the polarized case, let us recall the approximations made in (7). 1. We neglect QCD Compton scattering, since we consider low-$Q^{2}$ regions. Note that even if a small fraction of the QCDC is present, some QCDC events cannot contribute to the correlations, since singularities in the QCDC cross section favor the event topology with two jets in the current region, which does not produce the correlations (see, for example, Ref. [15]). 2.
Effects from high-order QCD and hadronization are already assumed to contribute to the value of $A(Q^{2},x)$, the exact form of which is beyond the scope of the present study. However, the assumption made to derive (7) was that the high-order QCD processes and hadronization do not significantly change the LO dijet kinematics in the Breit frame. This assumption leads to the factorization of $A(Q^{2},x)$ from the BGF rate in the linear approximation. Using a numerical simulation [9], it was shown that the effects quoted above do not strongly affect the relation (7) with an $x$-independent $A(Q^{2})$ in the range $5<Q^{2}<50$ GeV${}^{2}$ and $0.5\cdot 10^{-3}<x<0.2\cdot 10^{-1}$. According to the LUND model implemented in JETSET, the hadronization does not produce large correlations during the formation and independent breaking of the strings stretched between the current-region partons and the remnant. 3 Polarized Case The ratio of the spin-dependent gluon density $\Delta G(x)$ to the spin-averaged gluon density $G(x)$ at a fixed $Q^{2}$ is proportional to the asymmetry of the BGF cross sections, $$\langle a_{LL}\rangle\>\frac{\Delta G(x)}{G(x)}=\frac{\sigma_{\mathrm{BGF}}^{\uparrow\downarrow}-\sigma_{\mathrm{BGF}}^{\uparrow\uparrow}}{\sigma_{\mathrm{BGF}}^{\uparrow\downarrow}+\sigma_{\mathrm{BGF}}^{\uparrow\uparrow}}=\frac{\sigma_{tot}^{\uparrow\downarrow}\>R_{\mathrm{BGF}}^{\uparrow\downarrow}-\sigma_{tot}^{\uparrow\uparrow}\>R_{\mathrm{BGF}}^{\uparrow\uparrow}}{\sigma_{tot}^{\uparrow\downarrow}\>R_{\mathrm{BGF}}^{\uparrow\downarrow}+\sigma_{tot}^{\uparrow\uparrow}\>R_{\mathrm{BGF}}^{\uparrow\uparrow}},$$ (9) where $\langle a_{LL}\rangle$ is the value of the BGF asymmetry at the partonic level [16], and $\sigma_{\mathrm{BGF}}^{\uparrow\downarrow}$ ($R_{\mathrm{BGF}}^{\uparrow\downarrow}$) and $\sigma_{\mathrm{BGF}}^{\uparrow\uparrow}$ ($R_{\mathrm{BGF}}^{\uparrow\uparrow}$) are the BGF cross sections (BGF rates) in the case of anti-parallel ($\uparrow\downarrow$)
and parallel ($\uparrow\uparrow$) polarizations of the incoming lepton and nucleon. $\sigma_{tot}^{\uparrow\downarrow}$ and $\sigma_{tot}^{\uparrow\uparrow}$ are the total cross sections, which can be obtained by counting the number of events in a given kinematic bin for the different spin configurations, normalized to an integrated luminosity. The expression (9) can further be rewritten as $$\langle a_{LL}\rangle\>\frac{\Delta G(x)}{G(x)}\simeq\Delta\rho+A_{||},\qquad\Delta\rho=\frac{R_{\mathrm{BGF}}^{\uparrow\downarrow}-R_{\mathrm{BGF}}^{\uparrow\uparrow}}{R_{\mathrm{BGF}}^{\uparrow\downarrow}+R_{\mathrm{BGF}}^{\uparrow\uparrow}},$$ (10) $$A_{||}=\frac{\sigma_{tot}^{\uparrow\downarrow}-\sigma_{tot}^{\uparrow\uparrow}}{\sigma_{tot}^{\uparrow\downarrow}+\sigma_{tot}^{\uparrow\uparrow}},$$ (11) where $A_{||}$ is the inclusive polarized asymmetry, which is expected to be small compared to the asymmetry $\Delta\rho$ of the BGF rates. The relationship between (9) and (10) can easily be seen after rewriting $\Delta\rho+A_{||}$ as $$\frac{\sigma_{tot}^{\uparrow\downarrow}\>R_{\mathrm{BGF}}^{\uparrow\downarrow}-\sigma_{tot}^{\uparrow\uparrow}\>R_{\mathrm{BGF}}^{\uparrow\uparrow}}{(\sigma_{tot}^{\uparrow\downarrow}\>R_{\mathrm{BGF}}^{\uparrow\downarrow}+\sigma_{tot}^{\uparrow\uparrow}\>R_{\mathrm{BGF}}^{\uparrow\uparrow})(1-a)},\qquad a=\frac{(R_{\mathrm{BGF}}^{\uparrow\downarrow}-R_{\mathrm{BGF}}^{\uparrow\uparrow})(\sigma_{tot}^{\uparrow\downarrow}-\sigma_{tot}^{\uparrow\uparrow})}{2(\sigma_{tot}^{\uparrow\downarrow}\>R_{\mathrm{BGF}}^{\uparrow\downarrow}+\sigma_{tot}^{\uparrow\uparrow}\>R_{\mathrm{BGF}}^{\uparrow\uparrow})},$$ which is approximately equal to the right-hand side of (9), by neglecting the small contribution from $a$.
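The algebra behind this rewriting is easy to verify numerically. With illustrative, made-up values for the rates and cross sections (not measured numbers):

```python
# Hypothetical illustrative values, not measured numbers.
R_ap, R_pp = 0.21, 0.19   # BGF rates: anti-parallel, parallel spins
s_ap, s_pp = 1.02, 0.98   # total cross sections (arbitrary units)

exact = (s_ap * R_ap - s_pp * R_pp) / (s_ap * R_ap + s_pp * R_pp)  # Eq. (9)
drho = (R_ap - R_pp) / (R_ap + R_pp)                               # Eq. (10)
a_par = (s_ap - s_pp) / (s_ap + s_pp)                              # Eq. (11)
a = ((R_ap - R_pp) * (s_ap - s_pp)
     / (2 * (s_ap * R_ap + s_pp * R_pp)))

# (drho + a_par) * (1 - a) equals the exact ratio identically, so
# drho + a_par alone approximates it whenever a is small.
```

Since $a$ is second order in the asymmetries, the correction is tiny for realistic values.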
According to (7), $$\mathrm{cov}^{\uparrow\downarrow}\simeq-A(Q^{2})\>R_{\mathrm{BGF}}^{\uparrow\downarrow},\qquad\mathrm{cov}^{\uparrow\uparrow}\simeq-A(Q^{2})\>R_{\mathrm{BGF}}^{\uparrow\uparrow},$$ (12) where $A(Q^{2})$ is considered to be independent of the polarization, since this function is mainly determined by the number of final-state hadrons in jets, i.e. by multiple-gluon radiation and hadronization. From (12) it is easy to see that the theoretical $\Delta\rho$ can be determined via the covariances: $$\Delta\rho\simeq\Delta\mathrm{cov}=\frac{\mathrm{cov}^{\uparrow\downarrow}-\mathrm{cov}^{\uparrow\uparrow}}{\mathrm{cov}^{\uparrow\downarrow}+\mathrm{cov}^{\uparrow\uparrow}}.$$ (13) Thus the asymmetry $\Delta G(x)/G(x)$ can be obtained experimentally from $\Delta\mathrm{cov}$ and the directly measurable $A_{||}$. Below we numerically estimate the BGF asymmetry $\Delta\rho$ (10). We calculate this quantity directly from the BGF rates and then reconstruct it by measuring $\Delta\mathrm{cov}$ from the final-state hadrons of polarized DIS. 4 Numerical Studies In order to study the asymmetry using the current-target correlations, we use the PEPSI Monte Carlo generator [17] for polarized leptoproduction. This model is based on LEPTO 6.5 for DIS, together with JETSET 7.4 describing the fragmentation. Two DIS samples with opposite spin configurations were generated in the region $5<Q^{2}<50$ GeV${}^{2}$. The electron and proton beam energies were taken to be 27.5 and 920 GeV, respectively. For simplicity, both beam polarizations were assumed to be $100\%$. The total number of generated DIS events is 1M for each polarization sample in the given $Q^{2}$ range. The covariances for each polarization were determined from charged final-state hadrons according to (1). Hadrons with lifetime $c\tau>1$ cm are declared to be stable. Fig.
2a shows the behavior of $\Delta\mathrm{cov}$ calculated from final-state charged hadrons, compared to the theoretical prediction for $\Delta\rho$, also generated with PEPSI using information on the generated type of the LO process. The GS-A [18] polarized parton distribution was used. From this figure one can conclude that the experimentally measurable asymmetry $\Delta\mathrm{cov}$ does reproduce the theoretically expected behavior of $\Delta\rho$. This is especially evident at small $x$, where the contribution from the QCDC can be neglected. It is also seen that the typical statistics expected at HERA are sufficient to reliably determine the sign and the magnitude of the asymmetry. For illustration, Fig. 2b shows the same as Fig. 2a, but after switching the gluon polarization term off, i.e. $\Delta G=0$. The quark polarization distributions were unchanged. In this case, both $\Delta\rho$ and $\Delta\mathrm{cov}$ tend towards zero. The difference between $\Delta\rho$ and zero is expected to come from a positive value of $A_{||}$ (11). A small discrepancy between $\Delta\mathrm{cov}$ and $\Delta\rho$ may also be noted. This might mean that for this, in fact, unrealistic case, the values of $R_{\mathrm{BGF}}^{\uparrow\downarrow}$ and $R_{\mathrm{BGF}}^{\uparrow\uparrow}$ are so close to each other that the approximate nature of the expressions (12) has a noticeable effect on the relationship between $\Delta\mathrm{cov}$ and $\Delta\rho$. Dependence on cut-offs. The LEPTO Monte Carlo used by PEPSI contains cut-offs to prevent divergences in the LO matrix elements. Therefore, the magnitude of the BGF rate in PEPSI is ambiguously defined. This may produce a systematic bias in the relation (13).
Recall that, in the BGF cross section, the singularities in the two-parton emission are given by $1/z(1-z)$ [19], where $z=(p^{\prime}\cdot p)/(p^{\prime}\cdot q)$ ($p$ is the momentum of the incoming parton and $p^{\prime}$ is that of the final-state parton, $q^{2}=-Q^{2}$). The LEPTO cut-offs are based on the so-called “mixed scheme”, in which the parameter $z_{min}$, restricting the values of the variable $z$, plays an important role in the determination of the probability for a BGF event to occur. The default value of this cut-off is set to 0.04. If one decreases this cut-off, the relative contribution of the BGF rate increases with respect to the QCDC. In this case, the relation (7) is expected to be a good estimate. However, for a large $z_{min}$, the BGF rate is small and $\Delta\mathrm{cov}$ might poorly reflect the behavior of $\Delta\rho$. Therefore, for our systematic checks, the value of $z_{min}$ was increased. We have verified that the relationship between $\Delta\mathrm{cov}$ and $\Delta\rho$ still holds up to $z_{min}=0.2$, within the given statistical uncertainties. Note that while for the default value $z_{min}=0.04$ the BGF rate is about $0.6$ in the smallest $x$-bin, the rate of this process drops to $0.3$ when $z_{min}=0.2$ is considered. We have also found that $\Delta\rho$ changes only slightly when the cut-off is varied. This can be explained by the normalisation $(R_{\mathrm{BGF}}^{\uparrow\downarrow}+R_{\mathrm{BGF}}^{\uparrow\uparrow})$ used in (10). Note that, from the experimental point of view, the similar normalisation $(\mathrm{cov}^{\uparrow\downarrow}+\mathrm{cov}^{\uparrow\uparrow})$ in (13) would help to measure $\Delta\mathrm{cov}$ reliably even if the detector track acceptance for the target region is small (see [11] for details). Dependence on the structure function.
An important test of our method is to investigate the asymmetry measured with $\Delta\mathrm{cov}$ for different parameterizations of the parton densities. Various types of polarized parton distributions included in the PEPSI package were analyzed. The method was found to be sensitive to the input structure functions, and $\Delta\mathrm{cov}$ reproduces the trends of $\Delta\rho$. 5 Conclusions In this paper we have proposed a new method to measure the asymmetry in polarized lepton-nucleon scattering. The method is based on the measurement of the current-target correlations in the Breit frame. The advantage of the method is that all inclusive DIS events with a well-reconstructed Breit frame can be used to determine the asymmetry, without specific constraints to select useful events for this measurement. In this respect, it is important to emphasize that our approach is well suited for rather low $Q^{2}$ and $E_{T}$, i.e. for the regions where dijet reconstruction suffers from misclusterings and large hadronization corrections. Thus the suggested method complements and extends the study of $\Delta G/G$ to kinematic regions of low $Q^{2}$ ($E_{T}$) where the method suggested in [7] is less reliable. We also expect different systematics for these two methods: while the method discussed in [7] is subject to the same systematic uncertainties as standard dijet studies, systematic effects for our method would most probably come from a low detector acceptance in the target region of the Breit frame. Acknowledgments We thank N. Akopov, M. Amarian and E. Kuraev for helpful discussions. References [1] B. Adeva et al. [Spin Muon Collaboration], Phys. Rev. D58, 112001 (1998). [2] K. Abe et al. [E143 Collaboration], Phys. Rev. D58, 112003 (1998). [3] K. Ackerstaff et al. [HERMES Collaboration], Phys. Lett. B404, 383 (1997); A. Airapetian et al. [HERMES Collaboration], Phys. Lett. B442, 484 (1998). [4] A. Bravar, D. von Harrach, and A. Kotzinian, Proc. of the 5th Int.
Workshop “DIS97”, Ed. J. Repond and D. Krakauer (Chicago, 1997) p. 885. [5] M. Amarian [for the HERMES Collaboration], “Spin Asymmetries in Photoproduction of High-$p_{t}$ Hadron Pairs”, talk given at the workshop “DIS99”, Zeuthen, 1999. [6] D. von Harrach [for the COMPASS Collaboration], Nucl. Phys. A629, 245C (1998). [7] A. De Roeck et al., Eur. Phys. J. C6, 121 (1999). [8] R. P. Feynman, Photon-Hadron Interactions, Benjamin, NY, 1972. [9] S. V. Chekanov, J. Phys. G25, 59 (1999); S. V. Chekanov, presented at the XXVIII International Symposium on Multiparticle Dynamics, 1998 (Delphi, Greece), hep-ph/9810477. [10] K. H. Streng, T. F. Walsh and P. M. Zerwas, Z. Phys. C2, 237 (1979). [11] J. Breitweg et al. [ZEUS Collaboration], DESY-99-063 (submitted to Eur. Phys. J. C). [12] G. Ingelman, A. Edin, J. Rathsman, Comp. Phys. Comm. 101, 108 (1997). [13] N. H. Brook et al., Proc. of the Workshop 1995/1996 Future Physics at HERA (DESY, Hamburg, 1996), Ed. G. Ingelman et al., p. 613. [14] M. Gluck, E. Reya and A. Vogt, Phys. Lett. B306, 391 (1993). [15] S. V. Chekanov, Proc. of the workshop “Monte Carlo Generators for HERA Physics”, 1998-1999, DESY, Hamburg (to be published). [16] A. Bravar, K. Kurek and R. Windmolders, Comp. Phys. Comm. 105, 42 (1997). [17] L. Mankiewicz, A. Schafer and M. Veltri, Comp. Phys. Comm. 71, 305 (1992). [18] T. Gehrmann and W. J. Stirling, Phys. Rev. D53, 6100 (1996). [19] J. G. Körner, E. Mirkes and G. A. Schuler, Int. J. Mod. Phys. A4, 1781 (1989).
Partitions, rooks, and symmetric functions in noncommuting variables Mahir Bilen Can, Department of Mathematics, Tulane University, New Orleans, LA 70118, USA, [email protected], and Bruce E. Sagan (work partially done while a Program Officer at NSF; the views expressed are not necessarily those of the NSF), Department of Mathematics, Michigan State University, East Lansing, MI 48824-1027, USA, [email protected] (December 8, 2020) This paper is dedicated to Doron Zeilberger on the occasion of his 60th birthday. His enthusiasm for combinatorics has been an inspiration to us all. Key Words: noncommuting variables, rook, set partition, symmetric function AMS subject classification (2000): Primary 05A18; Secondary 05E05. Abstract Let $\Pi_{n}$ denote the set of all set partitions of $\{1,2,\ldots,n\}$. We consider two subsets of $\Pi_{n}$, one connected to rook theory and one associated with symmetric functions in noncommuting variables. Let ${\cal E}_{n}\subseteq\Pi_{n}$ be the subset of all partitions corresponding to an extendable rook (placement) on the upper-triangular board ${\cal T}_{n-1}$. Given $\pi\in\Pi_{m}$ and $\sigma\in\Pi_{n}$, define their slash product to be $\pi|\sigma=\pi\cup(\sigma+m)\in\Pi_{m+n}$, where $\sigma+m$ is the partition obtained by adding $m$ to every element of every block of $\sigma$. Call $\tau$ atomic if it cannot be written as a nontrivial slash product, and let ${\cal A}_{n}\subseteq\Pi_{n}$ denote the subset of atomic partitions. Atomic partitions were first defined by Bergeron, Hohlweg, Rosas, and Zabrocki during their study of $NCSym$, the symmetric functions in noncommuting variables. We show that, despite their very different definitions, ${\cal E}_{n}={\cal A}_{n}$ for all $n\geq 0$. Furthermore, we put an algebra structure on the formal vector space generated by all rook placements on upper-triangular boards which makes it isomorphic to $NCSym$. We end with some remarks and an open problem.
1 Extendable rooks and atomic partitions For a nonnegative integer $n$, let $[n]=\{1,2,\ldots,n\}$. Let $\Pi_{n}$ denote the set of all set partitions $\pi$ of $[n]$, i.e., $\pi=\{B_{1},B_{2},\ldots,B_{k}\}$ with $\uplus_{i}B_{i}=[n]$ (disjoint union). In this case we will write $\pi\vdash[n]$. The $B_{i}$ are called blocks. We will often drop set parentheses and commas and just put slashes between blocks for readability’s sake. Also, we will always write $\pi$ in standard form, which means that $$\min B_{1}<\min B_{2}<\ldots<\min B_{k}$$ (1) and the elements in each block are listed in increasing order. For example, $\pi=136|2459|78\vdash[9]$. The trivial partition is the unique element of $\Pi_{0}$, while all other partitions are nontrivial. The purpose of this note is to show that two subsets of $\Pi_{n}$, one connected with rook theory and the other associated to the Hopf algebra $NCSym$ of symmetric functions in noncommuting variables, are actually equal although they have very different definitions. After proving this result in the current section, we will devote the next to putting an algebra structure on certain rook placements which is isomorphic to $NCSym$. The final section contains some comments and open questions. Let us first introduce the necessary rook theory. A rook (placement) is an $n\times n$ matrix, $R$, of $0$’s and $1$’s with at most one 1 in every row and column. So a permutation matrix, $P$, is just a rook of full rank. A board is ${\cal B}\subseteq[n]\times[n]$. We say that $R$ is a rook on ${\cal B}$ if $R_{i,j}=1$ implies $(i,j)\in{\cal B}$. In this case we write, by abuse of notation, $R\subseteq{\cal B}$. A rook $R\subseteq{\cal B}$ is extendable in ${\cal B}$ if there is a permutation matrix $P$ such that $P_{i,j}=R_{i,j}$ for $(i,j)\in{\cal B}$. For example, consider the upper-triangular board ${\cal T}_{n}=\{(i,j)\ :\ i\leq j\}$. The $R\subseteq{\cal T}_{2}$ are displayed in Figure 1.
Only the third and fifth rooks in Figure 1 are extendable, corresponding to the transposition and identity permutation matrices, respectively. Extendability is an important concept in rook theory because of its relation to the much-studied hit numbers of a board [5, page 163 and ff.]. There is a well-known bijection between $\pi\in\Pi_{n}$ and the rooks $R\subseteq{\cal T}_{n-1}$ [8, page 75]. Given $R$, define a partition $\pi_{R}$ by putting $i$ and $j$ in the same block of $\pi_{R}$ whenever $R_{i,j-1}=1$. For each $R\subseteq{\cal T}_{2}$, the corresponding $\pi_{R}\in\Pi_{3}$ is shown in Figure 1. Conversely, given $\pi$ we define a rook $R_{\pi}$ by letting $(R_{\pi})_{i,j}=1$ exactly when $i$ and $j+1$ are adjacent elements in a block of $\pi$ in standard form. It is easy to see that the maps $R\mapsto\pi_{R}$ and $\pi\mapsto R_{\pi}$ are inverses. If a matrix has a certain property then we will also say that the corresponding partition does, and vice-versa. Our first subset of $\Pi_{n}$ will be the extendable partitions denoted by $${\cal E}_{n}=\{\pi\in\Pi_{n}\ :\ \mbox{$R_{\pi}$ is extendable in ${\cal T}_{n-1}$}\}.$$ So, from Figure 1, ${\cal E}_{3}=\{13|2,123\}$. To define our second subset of $\Pi_{n}$, it is convenient to introduce an operation on partitions. For a set of integers $B=\{b_{1},\ldots,b_{j}\}$ we let $B+m=\{b_{1}+m,\ldots,b_{j}+m\}$. Similarly, for a partition $\pi=\{B_{1},\ldots,B_{k}\}$ we use the notation $\pi+m=\{B_{1}+m,\ldots,B_{k}+m\}$. If $\pi\in\Pi_{m}$ and $\sigma\in\Pi_{n}$ then define their slash product to be the partition in $\Pi_{m+n}$ given by $$\pi|\sigma=\pi\cup(\sigma+m).$$ Call a partition atomic if it can not be written as a slash product of two nontrivial partitions and let $${\cal A}_{n}=\{\pi\in\Pi_{n}\ :\ \mbox{$\pi$ is atomic}\}.$$ Atomic partitions were defined by Bergeron, Hohlweg, Rosas, and Zabrocki [2] because of their connection with symmetric functions in noncommuting variables.
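These definitions are easy to test by machine. The sketch below (plain Python; the tuple-of-tuples encoding of a standard-form partition and all function names are our own conventions, not the paper's) implements the map $\pi\mapsto R_{\pi}$, a brute-force extendability test, and the atomicity test, and confirms that the two conditions select the same partitions for small $n$:

```python
from itertools import permutations

def set_partitions(n):
    """All set partitions of [n] as tuples of tuples in standard form."""
    if n == 0:
        yield ()
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):          # put n into an existing block...
            yield p[:i] + (p[i] + (n,),) + p[i + 1:]
        yield p + ((n,),)                # ...or into a new singleton block

def rook_of(pi):
    """The rook R_pi on T_{n-1}: a 1 in (i, j) iff i and j+1 are adjacent in a block."""
    n = sum(len(b) for b in pi)
    R = [[0] * (n - 1) for _ in range(n - 1)]
    for block in pi:
        for a, b in zip(block, block[1:]):
            R[a - 1][b - 2] = 1          # 0-indexed position for (i, j) = (a, b-1)
    return R

def extendable(R):
    """Brute force: does some permutation matrix agree with R on the upper triangle?"""
    m = len(R)
    return any(all((perm[i] == j) == bool(R[i][j])
                   for i in range(m) for j in range(i, m))
               for perm in permutations(range(m)))

def atomic(tau):
    """tau is atomic iff no cut point m splits it as a slash product."""
    n = sum(len(b) for b in tau)
    return not any(all(max(b) <= m or min(b) > m for b in tau)
                   for m in range(1, n))

for n in range(6):                       # the two subsets coincide for n = 0, ..., 5
    for pi in set_partitions(n):
        assert extendable(rook_of(pi)) == atomic(pi)
```

Running it shows, for instance, that among the partitions of $[3]$ exactly $13|2$ and $123$ are extendable, and exactly the same two are atomic.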
We will have more to say about this in Section 2. Since ${\cal E}_{n}$ is defined in terms of rook placements, it will be convenient to have a rook interpretation of ${\cal A}_{n}$. Given any two matrices $R$ and $S$, define their extended direct sum to be $$R\hat{\oplus}S=R\oplus(0)\oplus S$$ where $\oplus$ is ordinary matrix direct sum and $(0)$ is the $1\times 1$ zero matrix. To illustrate, $$\left(\begin{array}[]{ccc}a&b&c\\ d&e&f\end{array}\right)\hat{\oplus}\left(\begin{array}[]{cc}w&x\\ y&z\end{array}\right)=\left(\begin{array}[]{cccccc}a&b&c&0&0&0\\ d&e&f&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&w&x\\ 0&0&0&0&y&z\end{array}\right).$$ It is clear from the definitions that $\tau=\pi|\sigma$ if and only if $R_{\tau}=R_{\pi}\hat{\oplus}R_{\sigma}$. We now have everything we need to prove our first result. Theorem 1.1. For all $n\geq 0$ we have ${\cal E}_{n}={\cal A}_{n}$. Proof. Suppose we have $\tau\in{\cal E}_{n}$. Assume, towards a contradiction, that $\tau$ is not atomic so that $\tau=\pi|\sigma$. On the matrix level we have $R_{\tau}=R_{\pi}\hat{\oplus}R_{\sigma}$ where $R_{\pi}$ is $m\times m$ for some $m$. We are given that $\tau$ is extendable, so let $P$ be a permutation matrix extending $R_{\tau}$. Since $P$ and $R_{\tau}$ agree above and including the diagonal, the first $m+1$ rows of $P$ must be zero from column $m+1$ on. But $P$ is a permutation matrix and so each of these $m+1$ rows must have a one in a different column, contradicting the fact that only $m$ columns are available. Now assume $\tau\in{\cal A}_{n}$. We will construct an extension $P$ of $R_{\tau}$. Let $i_{1},\ldots,i_{r}$ be the indices of the zero rows of $R_{\tau}$ and similarly for $j_{1},\ldots,j_{r}$ and the columns. If $i_{k}>j_{k}$ for all $k\in[r]$, then we can construct $P$ by supplementing $R_{\tau}$ with ones in positions $(i_{1},j_{1}),\ldots,(i_{r},j_{r})$. So suppose, towards a contradiction, that there is some $k$ with $i_{k}\leq j_{k}$.
Now $R_{\tau}$ must contain $j_{k}-k$ ones in the columns to the left of column $j_{k}$. If $i_{k}<j_{k}$, then there are fewer than $j_{k}-k$ rows which could contain these ones since $R_{\tau}$ is upper triangular. This is a contradiction. If $i_{k}=j_{k}$, then the $j_{k}-k$ ones in the columns left of $j_{k}$ must lie in the first $i_{k}-k=j_{k}-k$ rows. Furthermore, these ones together with the zero rows force the columns to the right of $j_{k}$ to be zero up to and including row $i_{k}=j_{k}$. It follows that $R_{\tau}=R_{\pi}\hat{\oplus}R_{\sigma}$ for some $\pi,\sigma$ with $R_{\pi}$ being $(i_{k}-1)\times(i_{k}-1)$. This contradicts the fact that $\tau$ is atomic. ∎ Having two descriptions of this set may make it easy to prove assertions about it from one definition which would be difficult to demonstrate if the other were used. Here is an example. Corollary 1.2. Let $R\subseteq{\cal T}_{n}$. If $R_{1,n}=1$ then $R$ is extendable in ${\cal T}_{n}$. Proof. If $R_{1,n}=1$ then we can not have $R=R_{\sigma}\hat{\oplus}R_{\tau}$ for nontrivial $\sigma,\tau$. So $R$ is atomic and, by the previous theorem, $R$ is extendable. ∎ 2 An algebra on rook placements and $NCSym$ The algebra of symmetric functions in noncommuting variables, $NCSym$, was first studied by Wolf [10] who proved a version of the Fundamental Theorem of Symmetric Functions in this context. The algebra was rediscovered by Gebhard and Sagan [4] who used it as a tool to make progress on Stanley’s (${\bf 3}+{\bf 1}$)-free Conjecture for chromatic symmetric functions [7]. Rosas and Sagan [6] were the first to make a systematic study of the vector space properties of $NCSym$. Bergeron, Reutenauer, Rosas, and Zabrocki [3] introduced a Hopf algebra structure on $NCSym$ and described its invariants and covariants. Let $X=\{x_{1},x_{2},\ldots\}$ be a countably infinite set of variables which do not commute. 
Consider the corresponding ring of formal power series over the rationals ${\mathbb{Q}}\langle{\langle{X}\rangle}\rangle$. Let ${\mathfrak{S}}_{m}$ be the symmetric group on $[m]$. Then any $g\in{\mathfrak{S}}_{m}$ acts on a monomial $x=x_{i_{1}}x_{i_{2}}\cdots x_{i_{n}}$ by $$g(x)=x_{g^{-1}(i_{1})}x_{g^{-1}(i_{2})}\cdots x_{g^{-1}(i_{n})}$$ where $g(i)=i$ for $i>m$. Extend this action linearly to ${\mathbb{Q}}\langle{\langle{X}\rangle}\rangle$. The symmetric functions in noncommuting variables, $NCSym\subset{\mathbb{Q}}\langle{\langle{X}\rangle}\rangle$, are all power series which are of bounded degree and invariant under the action of ${\mathfrak{S}}_{m}$ for all $m\geq 0$. The vector space bases of $NCSym$ are indexed by set partitions. We will be particularly interested in a basis which is the analogue of the power sum basis for ordinary symmetric functions. Given a monomial $x=x_{i_{1}}x_{i_{2}}\cdots x_{i_{n}}$, there is an associated set partition $\pi_{x}$ where $j$ and $k$ are in the same block of $\pi_{x}$ if and only if $i_{j}=i_{k}$ in $x$, i.e., the indices in the $j$th and $k$th positions are the same. For example, if $x=x_{3}x_{5}x_{2}x_{3}x_{3}x_{2}$ then $\pi_{x}=145|2|36$. The power sum symmetric functions in noncommuting variables are defined by $$p_{\pi}=\sum_{x\ :\ \pi_{x}\geq\pi}x,$$ where $\pi_{x}\geq\pi$ is the partial order in the lattice of partitions, so $\pi_{x}$ is obtained by merging blocks of $\pi$. Equivalently, $p_{\pi}$ is the sum of all monomials where the indices in the $j$th and $k$th places are equal if $j$ and $k$ are in the same block of $\pi$, but there may be other equalities as well. To illustrate, $$p_{13|2}=x_{1}x_{2}x_{1}+x_{2}x_{1}x_{2}+\cdots+x_{1}^{3}+x_{2}^{3}+\cdots.$$ Note that, directly from the definitions, $$p_{\pi|\sigma}=p_{\pi}p_{\sigma}.$$ (2) Using this property, Bergeron, Hohlweg, Rosas, and Zabrocki [2] proved the following result which will be useful for our purposes. Proposition 2.1 ([2]).
As an algebra, $NCSym$ is freely generated by the $p_{\pi}$ with $\pi$ atomic. ∎ Let $${\cal R}=\{R\subseteq{\cal T}_{n}\ :\ n\geq-1\},$$ where there is a single rook on ${\cal T}_{-1}$ called the unit rook and denoted $R=1$ (not to be confused with the empty rook on ${\cal T}_{0}$). We extend the bijection between set partitions and rooks on upper triangular boards by letting the unit rook correspond to the empty partition. Consider the vector space ${\mathbb{Q}}{\cal R}$ of all formal linear combinations of rooks in ${\cal R}$. By both extending $\hat{\oplus}$ linearly and letting the unit rook act as an identity, the operation of extended direct sum can be considered as a product on this space. It is easy to verify that this turns ${\mathbb{Q}}{\cal R}$ into an algebra. Proposition 2.2. As an algebra, ${\mathbb{Q}}{\cal R}$ is freely generated by the $R_{\pi}$ with $\pi$ atomic. Proof. A simple induction on $n$ shows that any $\tau\in\Pi_{n}$ can be uniquely factored as $\tau=\pi_{1}|\pi_{2}|\cdots|\pi_{t}$ with the $\pi_{i}$ atomic. From the remark just before Theorem 1.1, it follows that each $R_{\tau}$ can be uniquely written as a product of atomic $R_{\pi}$’s. Since the set of all $R_{\tau}$ forms a vector space basis, the atomic $R_{\pi}$ form a free generating set. ∎ Comparing Propositions 2.1 and 2.2 as well as the remark before Theorem 1.1 and equation (2), we immediately get the desired isomorphism. Theorem 2.3. The map $p_{\pi}\mapsto R_{\pi}$ is an algebra isomorphism of $NCSym$ with ${\mathbb{Q}}{\cal R}$. ∎ 3 Remarks and an open question 3.1 Unsplitable partitions Bergeron, Reutenauer, Rosas, and Zabrocki [3] considered another free generating set for $NCSym$ which we will now describe. A restricted growth function of length $n$ is a sequence of positive integers $r=a_{1}a_{2}\ldots a_{n}$ such that 1. $a_{1}=1$, and 2. $a_{i}\leq 1+\max\{a_{1},\ldots,a_{i-1}\}$ for $2\leq i\leq n$.
Let $RG_{n}$ denote the set of restricted growth functions of length $n$. There is a well-known bijection between $\Pi_{n}$ and $RG_{n}$ [8, page 34] as follows. Given $\pi\in\Pi_{n}$ we define $r_{\pi}$ by $a_{i}=j$ if and only if $i\in B_{j}$ in $\pi$. For example, if $\pi=124|36|5$ then $r_{\pi}=112132$. It is easy to see that having $\pi$ in standard form makes the map well defined. And the reader should have no trouble constructing the inverse. Define the split product of $\pi\in\Pi_{m}$ and $\sigma\in\Pi_{n}$ to be $\tau=\pi\circ\sigma\in\Pi_{m+n}$ where $\tau$ is the unique partition such that $r_{\tau}=r_{\pi}r_{\sigma}$ (concatenation). To illustrate, if $\pi$ is as in the previous paragraph and $\sigma=13|2$ then $r_{\pi}r_{\sigma}=112132121$ and so $\pi\circ\sigma=12479|368|5$. This is not Bergeron et al.’s original definition, but it is equivalent. Now define $\tau$ to be unsplitable if it can not be written as a split product of two nontrivial partitions. (Bergeron et al. used the term “nonsplitable” which is not a typical English word.) Let ${\cal US}_{n}\subseteq\Pi_{n}$ be the subset of unsplitable partitions. So ${\cal US}_{3}=\{1|2|3,1|23\}$. Perhaps the simplest basis for $NCSym$ is the one gotten by symmetrizing a monomial. Define the monomial symmetric functions in noncommuting variables to be $$m_{\pi}=\sum_{x\ :\ \pi_{x}=\pi}x.$$ So now indices in a term of $m_{\pi}$ are equal precisely when their positions are in the same block of $\pi$. For example, $$m_{13|2}=x_{1}x_{2}x_{1}+x_{2}x_{1}x_{2}+\cdots.$$ The following is a more explicit version of Wolf’s original result [10]. Proposition 3.1 ([3]). As an algebra, $NCSym$ is freely generated by the $m_{\pi}$ with $\pi$ unsplitable. ∎ Comparing Propositions 2.1 and 3.1 we see that $|{\cal A}_{n}|=|{\cal US}_{n}|$ for all $n\geq 0$ where $|\cdot|$ denotes cardinality. (Although they are not the same set as can be seen by our computations when $n=3$.)
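The RGF bijection and the split product can be made concrete in the same spirit. In the sketch below (plain Python; the encodings and names are ours, not the paper's) unsplitability is tested via the observation that every prefix of a restricted growth function is again one, so only the suffixes of $r_{\pi}$ need to be checked; the final loop verifies $|{\cal A}_{n}|=|{\cal US}_{n}|$ for $n\leq 5$ even though the sets themselves differ:

```python
def set_partitions(n):
    """All set partitions of [n] as tuples of tuples in standard form."""
    if n == 0:
        yield ()
        return
    for p in set_partitions(n - 1):
        for i in range(len(p)):
            yield p[:i] + (p[i] + (n,),) + p[i + 1:]
        yield p + ((n,),)

def rgf_of(pi):
    """Restricted growth function r_pi: a_i = j iff i lies in block B_j."""
    n = sum(len(b) for b in pi)
    r = [0] * n
    for j, block in enumerate(sorted(pi, key=min), start=1):
        for i in block:
            r[i - 1] = j
    return tuple(r)

def is_rgf(s):
    """Conditions 1 and 2 in the definition of a restricted growth function."""
    return all(s[i] <= 1 + max(s[:i], default=0) for i in range(len(s)))

def unsplitable(pi):
    """r_pi is not a concatenation of two nontrivial RGFs; prefixes of an
    RGF are automatically RGFs, so only the suffixes have to be tested."""
    r = rgf_of(pi)
    return not any(is_rgf(r[m:]) for m in range(1, len(r)))

def atomic(tau):
    """No cut point m splits tau as a slash product (Section 1)."""
    n = sum(len(b) for b in tau)
    return not any(all(max(b) <= m or min(b) > m for b in tau)
                   for m in range(1, n))

# The example from the text: pi = 124|36|5 has r_pi = 112132.
assert rgf_of(((1, 2, 4), (3, 6), (5,))) == (1, 1, 2, 1, 3, 2)

# Same cardinalities, different sets.
for n in range(6):
    A = {p for p in set_partitions(n) if atomic(p)}
    U = {p for p in set_partitions(n) if unsplitable(p)}
    assert len(A) == len(U)
```

For $[3]$ the unsplitable partitions are exactly $1|23$ and $1|2|3$, while the atomic ones are $123$ and $13|2$, illustrating the parenthetical remark above.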
It would be interesting to find a bijective proof of this result. 3.2 Hopf structure Thiem [9] found a connection between $NCSym$ and unipotent upper-triangular zero-one matrices using supercharacter theory. This work has very recently been extended using matrices over any field and a colored version of $NCSym$ during a workshop at the American Institute of Mathematics [1]. This approach gives an isomorphism even at the Hopf algebra level. References [1] Bergeron, N. personal communication. [2] Bergeron, N., Hohlweg, C., Rosas, M., and Zabrocki, M. Grothendieck bialgebras, partition lattices, and symmetric functions in noncommutative variables. Electron. J. Combin. 13, 1 (2006), Research Paper 75, 19 pp. (electronic). [3] Bergeron, N., Reutenauer, C., Rosas, M., and Zabrocki, M. Invariants and coinvariants of the symmetric groups in noncommuting variables. Canad. J. Math. 60, 2 (2008), 266–296. [4] Gebhard, D. D., and Sagan, B. E. A chromatic symmetric function in noncommuting variables. J. Algebraic Combin. 13, 3 (2001), 227–255. [5] Riordan, J. An introduction to combinatorial analysis. Dover Publications Inc., Mineola, NY, 2002. Reprint of the 1958 original [Wiley, New York; MR0096594 (20 #3077)]. [6] Rosas, M. H., and Sagan, B. E. Symmetric functions in noncommuting variables. Trans. Amer. Math. Soc. 358, 1 (2006), 215–232 (electronic). [7] Stanley, R. P. A symmetric function generalization of the chromatic polynomial of a graph. Adv. Math. 111, 1 (1995), 166–194. [8] Stanley, R. P. Enumerative Combinatorics. Vol. 1, vol. 49 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1997. With a foreword by Gian-Carlo Rota, Corrected reprint of the 1986 original. [9] Thiem, N. Branching rules in the ring of superclass functions of unipotent upper-triangular matrices. J. Algebraic Combin. 31, 2 (2010), 267–298. [10] Wolf, M. C. Symmetric functions of non-commutative elements. Duke Math. J. 2, 4 (1936), 626–637.
Nonlocality effects on spin-one pairing patterns in two-flavor color superconducting quark matter and compact star applications D. N. Aguilera D. B. Blaschke Institut für Physik, Universität Rostock, Universitätsplatz 3, D-18051 Rostock, Germany Bogoliubov Laboratory of Theoretical Physics, JINR Dubna, Joliot-Curie street 6, 141980 Dubna, Russia Gesellschaft für Schwerionenforschung (GSI), Planckstr. 1, 64291 Darmstadt, Germany Abstract We study the influence of nonlocality in the interaction on two spin-one pairing patterns of two-flavor quark matter: the anisotropic blue color pairing besides the usual two-flavor color superconducting matter (2SCb), in which red and green colors are paired, and the color-spin locking phase (CSL). The effect of nonlocality on the gaps is rather large and the pairings exhibit a strong dependence on the form factor of the interaction, especially in the low density region. The application of these small spin-one condensates for compact stars is analyzed: the early onset of quark matter in the nonlocal models may help to stabilize hybrid star configurations. While the anisotropic blue quark pairing does not survive a big asymmetry in flavor space as imposed by the charge neutrality condition, the CSL phase as a flavor independent pairing can be realized as neutral matter in compact star cores. However, smooth form factors and the mismatch between the flavor chemical potentials in neutral matter make the effective gaps of the order of magnitude $\simeq 10$ keV, and a more systematic analysis is needed to decide whether such small gaps could be consistent with the cooling phenomenology. PACS number(s): 04.40.Dg, 12.38.Mh, 26.60.+c, 97.60.Jd 1 Introduction The investigation of color superconducting phases in cold dense quark matter has attracted much interest in recent years since those phases could be relevant for the physics of compact star cores [1].
Nevertheless, it became clear that large spin-0 condensates, like the usual two-flavor superconductor (2SC), although having large pairing gaps ($\sim$ 100 MeV) and therefore a direct influence on the equation of state (EoS), may be disfavored by the charge neutrality condition unless unusually strong diquark coupling constants are considered [2, 3, 4]. Alternatively, spin-1 condensates are being investigated [5, 6, 7]. Due to their smallness, their influence on the EoS is negligible but they could strongly affect the transport properties in quark matter and therefore have important consequences for observable phenomena like compact star cooling, see [8]. A recent investigation of neutrino emission and cooling rates of spin-1 color superconductors constructed for conserved total angular momentum allows for color-spin locking, planar, polar, and A phases [9]. However, none of these phases fulfills the requirement of cooling phenomenology that no ungapped quark mode should occur on which the direct Urca process could operate and lead to very fast cooling in disagreement with modern observational data [12]. In the present work, we consider spin-1 pairing patterns different from the above mentioned ones, like the anisotropic third color (say blue) quark pairing besides the usual 2SC phase (2SCb) [10] and the s-wave color-spin locking phase (CSL) [11], which have been introduced within the NJL model whereby small gaps in the region of some fractions of MeV have been obtained. Such small gaps could help to suppress efficiently the direct Urca process in quark matter and thus to control the otherwise too rapid cooling of hybrid stars [12, 13]. One important feature is that the form of the regularization, i.e. via a sharp cut-off or a form factor function, is expected to have a strong impact on the resulting gaps due to the sensitive momentum dependence of the integrand in the gap equation [10].
Especially the behavior of quark matter in the density region of the suspected deconfinement transition plays a crucial role for determining the stability of compact star configurations. Models with a late onset of quark matter could eventually lead to unstable hybrid star configurations. For example, in [11] it has been shown within the local NJL model how a possible pairing pattern for compact star matter could be described which fulfills the constraints from compact star cooling phenomenology. These require that all quark species be paired and the smallest pairing gap be of the order of 10 keV to 1 MeV with a decreasing density dependence. A caveat of the NJL model quark matter is, however, that a stable hybrid star can be realized only marginally, see [14]. The advantage of nonlocal models is that they can describe the regularization of the quark interaction via form factor functions and therefore represent it in a smoother way, especially for low densities [15]. For the case of the 2SC phase it has already been shown that the effect of nonlocality in the low density region is rather large and the pairing exhibits a strong dependence on the form factor of the interaction [2, 16]. Moreover, the early onset of quark matter for the dynamical chiral quark model, in contrast to the NJL model, might help to stabilize hybrid star configurations. As it has been shown in [17] within a nonlocal generalization of the NJL model [15], stable hybrid stars with large quark matter cores can be obtained. In order to describe the properties of these stars consistently, including their cooling phenomenology, the description of diquark pairing gaps as well as emissivities and transport properties for the above motivated CSL phase should be given also within a nonlocal quark model.
As a first step in this direction we will provide in the present paper the spin-1 pairing gaps for a nonlocal, instantaneous chiral quark model under neutron star constraints for later application in the cooling phenomenology. We give here a first discussion of the influence of nonlocality of the interaction in momentum space when compared to the NJL model case and discuss the possible role of spin-one condensates for compact star applications. The paper is organized as follows: in Section 2 we briefly review how the NJL model is modified when nonlocality is introduced using a three-dimensional nonlocal chiral quark model; in Sections 3 and 4 we present the nonlocal version of the anisotropic blue color pairing (2SCb) besides the usual two-flavor color superconducting (2SC) phase and of the color-spin locking phase (CSL), respectively. In Section 5 we present preliminary results for neutral matter in compact stars for the Gaussian form factor and discuss whether requirements for hybrid star cooling phenomenology could be met. Finally, in Section 6, we draw our conclusions. 2 Nonlocal chiral quark model We investigate a nonlocal chiral model for two-flavor quark matter in which the quark interaction is represented in a separable way by introducing form factor functions $g(p)$ in the bilinears of the current-current interaction terms in the Lagrangian [2, 15, 16, 17]. It is assumed that this four-fermion interaction is instantaneous and therefore the form factors depend only on the modulus of the three-momentum $p=|\vec{p}|$. In the mean field approximation the thermodynamical potential can be evaluated and is given by $$\Omega(T,\mu)=-T\sum_{n}\int\frac{d^{3}p}{(2\pi)^{3}}\frac{1}{2}{\rm Tr}\ln({\frac{1}{T}S^{-1}(i\omega_{n},\vec{p}\,)})+V~{},$$ (1) where the sum is over fermionic Matsubara frequencies $\omega_{n}=(2n+1)\pi T$ and $V$ is the quadratic contribution of the condensates considered.
The specific form of $V$ in dependence on the order parameters $\phi$ for chiral symmetry breaking and $\Delta$ for color superconductivity in the corresponding diquark pairing channels will be given below in Section 3. In our nonlocal extension the inverse of the fermion propagator in Nambu-Gorkov space is modified in comparison to the NJL model case by momentum dependent form factor functions $g(p)$ as follows, $$S^{-1}(p)=\left(\begin{array}[]{cc}\not\!p+\hat{\mu}\gamma^{0}-\hat{M}(p)&g(p)\hat{\Delta}\\ -{g(p)\hat{\Delta}}^{\dagger}&\not\!p-\hat{\mu}\gamma^{0}-\hat{M}(p)\end{array}\right)$$ (2) where $\hat{\mu}$ is the chemical potential matrix and the elements of $\hat{M}(p)=\mathrm{diag}\{M_{f}(p)\}$ are the dynamical masses of the quarks given by $$M_{f}(p)=m_{f}+g(p)\phi_{f}~{}.$$ (3) The matrix $\hat{\Delta}$ represents the order parameters for diquark pairing which will be made explicit in Sects. 3 and 4, respectively. In (2) and (3) we have introduced the same form factors $g(p)$ to represent the nonlocality of the interaction in the meson ($q\bar{q}$) and diquark ($qq$) channels. In our calculations we use the Gaussian (G), Lorentzian (L) and cutoff (NJL) form factors defined as $$g_{\rm G}(p)=\exp(-p^{2}/\Lambda_{\rm G}^{2})~{},$$ (4) $$g_{\rm L}(p)=[1+(p^{2}/\Lambda_{\rm L}^{2})^{2}]^{-1},$$ (5) $$g_{\rm NJL}(p)=\theta(1-p/\Lambda_{\rm NJL})~{}.$$ (6) The parameters for the above form factor models used in this work are presented in Tab. 1. They have been fixed by the pion mass, pion decay constant and the constituent quark mass $M=M(0)$ at $T=\mu=0$. In order to estimate the effect of the nonlocality on the results relative to the NJL model, we used $g_{\rm G}$, $g_{\rm L}$ and $g_{\rm NJL}$ with parameters fixed such that $M=380$ MeV, see Tab. 1. For details of the parameterization, see [22].
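The three regularizations (4)-(6) are one-liners in code. In the sketch below (Python/NumPy) the scale $\Lambda=600$ MeV and the split $M(0)=m_{f}+\phi_{f}=5+375$ MeV are illustrative placeholders, not the fitted values of Tab. 1, which the text does not reproduce:

```python
import numpy as np

def g_gaussian(p, Lam):
    """Gaussian form factor, Eq. (4)."""
    return np.exp(-p**2 / Lam**2)

def g_lorentzian(p, Lam):
    """Lorentzian form factor, Eq. (5)."""
    return 1.0 / (1.0 + (p**2 / Lam**2)**2)

def g_njl(p, Lam):
    """Sharp three-momentum cutoff (NJL), Eq. (6)."""
    return np.where(p < Lam, 1.0, 0.0)

def dynamical_mass(p, m_f, phi_f, g, Lam):
    """Dynamical quark mass M_f(p) = m_f + g(p) phi_f, Eq. (3)."""
    return m_f + g(p, Lam) * phi_f

# Illustrative numbers (MeV), chosen so that M(0) = 380 MeV as quoted in
# the text; the smooth form factors suppress the interaction at high
# momenta instead of cutting it off sharply.
p = np.linspace(0.0, 1200.0, 7)
M_gauss = dynamical_mass(p, 5.0, 375.0, g_gaussian, 600.0)
```

Plotting the three $g(p)$ side by side makes the qualitative statement of the text visible: the Gaussian falls off fastest at large $p$, the Lorentzian more slowly, and the NJL cutoff drops abruptly at $\Lambda$.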
The stationary points of the thermodynamical potential (1) are found from the condition of a vanishing variation $$\delta\Omega=0$$ (7) with respect to variations of the order parameters. Eq. (7) defines a set of gap equations. Among the solutions of these equations the thermodynamically stable state corresponds to the set of order parameter values for which $\Omega$ has an absolute minimum. 3 Anisotropic blue quark pairing for the nonlocal model First we consider the 2SCb phase in which two of the three colors (e.g. red $r$ and green $g$) pair in the standard spin-0 isospin singlet condensate (2SC phase) and the remaining third color (which is then blue, $b$) pairs in a spin-1 condensate (symmetric in Dirac space, symmetric in color, antisymmetric in flavor) [10]. The matrix $\hat{\Delta}$ in the inverse quark propagator (2) for the 2SCb phase is then given by $$\hat{\Delta}^{2SCb}=\Delta(\gamma_{5}\tau_{2}\lambda_{2})(\delta_{c,r}+\delta_{c,g})+\Delta^{\prime}(\sigma^{03}~{}\tau_{2}~{}\hat{P}_{3}^{(c)})\delta_{c,b}~{},$$ (8) where $\tau_{2}$ is an antisymmetric Pauli matrix in the flavor space and $\lambda_{2}$ an antisymmetric Gell-Mann matrix in the color space; $\sigma^{03}=\frac{i}{2}[\gamma^{0},\gamma^{3}]$ and $\hat{P}_{3}^{(3)}=\frac{1}{3}-\frac{\lambda_{8}}{\sqrt{3}}$ is the projector on the third color. If $\Delta^{\prime}\neq 0$, this can be understood as a nonzero third component of a vector in color space which breaks the $\mathcal{O}(3)$ rotational symmetry spontaneously. The blue quark pairing is therefore anisotropic. We consider first symmetric matter and thus we take the quark chemical potentials as $\mu_{u}=\mu_{d}=\mu$.
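As a quick consistency check (Python/NumPy; only the diagonal Gell-Mann matrix $\lambda_{8}$ is needed), the combination $\frac{1}{3}-\frac{\lambda_{8}}{\sqrt{3}}$ defined after Eq. (8) is indeed the projector that singles out the third (blue) color:

```python
import numpy as np

# Gell-Mann lambda_8 in the standard normalization.
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

# Projector onto the third color, as defined after Eq. (8).
P3 = np.eye(3) / 3.0 - lam8 / np.sqrt(3.0)

assert np.allclose(P3, np.diag([0.0, 0.0, 1.0]))  # keeps only the blue component
assert np.allclose(P3 @ P3, P3)                   # idempotent, as a projector must be
```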
The thermodynamical potential is given by $$\Omega^{2SCb}(T,\mu)=\frac{\phi^{2}}{4G_{1}}+\frac{|\Delta|^{2}}{4G_{2}}+\frac{|\Delta^{\prime}|^{2}}{16G_{3}}-4\sum_{i=1}^{3}\int\frac{d^{3}p}{(2\pi)^{3}}\left[\frac{E_{i}^{-}+E_{i}^{+}}{2}+T\ln{(1+e^{-E_{i}^{-}/T})}+T\ln{(1+e^{-E_{i}^{+}/T})}\right]~{},$$ (9) where the coupling constants $G_{1},G_{2},G_{3}$ follow the relation given by the instanton induced interaction [18] $$G_{1}:G_{2}:G_{3}=1:3/4:3/16~{}.$$ (10) The dispersion law for the paired quarks ($r\,,\,g$) is given by $$E_{1,2}^{\mp}(\vec{p})=E^{\mp}(\vec{p})=\sqrt{(\epsilon\mp\mu)^{2}+g^{2}(p)|\Delta|^{2}}~{},$$ (11) where $\epsilon=\sqrt{\vec{p}\,^{2}+M^{2}}$ is the free particle dispersion relation and $M=M_{u}=M_{d}$. For the anisotropic pairing of the blue quarks the dispersion relation can be written as $$E_{3}^{\mp}(\vec{p})=\sqrt{(\epsilon_{\rm eff}\mp\mu_{\rm eff})^{2}+g^{2}(p)|\Delta^{\prime}_{\rm eff}|^{2}}~{},$$ (12) where the effective variables depend on the angle $\theta$, with $\cos\theta=p_{3}/|\vec{p}|$, and are defined as $$\epsilon_{\rm eff}^{2}=\vec{p}\,^{2}+M_{\rm eff}^{2}~{},$$ (13) $$M_{\rm eff}=M\frac{\mu}{\mu_{\rm eff}}~{},$$ (14) $$\mu_{\rm eff}^{2}=\mu^{2}+g^{2}(p)|\Delta^{\prime}|^{2}\sin^{2}{\theta}~{},$$ (15) $$|\Delta^{\prime}_{\rm eff}|^{2}=|\Delta^{\prime}|^{2}(\cos^{2}\theta+\frac{M^{2}}{\mu_{\rm eff}^{2}}\sin^{2}{\theta}).$$ (16) The dispersion relation $E_{3}^{\pm}$ is therefore an anisotropic function of $\vec{p}$, and the calculation of (9) should be performed as an integral over the modulus of
$|\vec{p}|$ and over the angle $\theta$. $E_{3}^{\pm}$ has a minimum at $\theta=\pi/2$ and vanishes if $M=0$ or $\Delta^{\prime}=0$. As it has been pointed out in [10], the gap equations for $\Delta$ and $\Delta^{\prime}$ are only indirectly coupled by their dependence on $M$. Therefore, since the equation for $\Delta^{\prime}$ nearly decouples if $M$ is small, we can illustrate the anisotropic contributions to the thermodynamical potential $\Omega$ by fixing the variables $(\mu,T)$ and the order parameters $(\phi,\Delta)$ and varying the angle $\theta$. For this purpose, we consider $$d\Omega^{2SCb}=d(\cos\theta)\Omega^{2SCb}\,|_{\cos\theta}~{},$$ (17) and in Fig. 1 we plot $\Omega^{2SCb}\,|_{\cos\theta}$ as a function of the gap $\Delta^{\prime}$. As $\theta$ increases from $0$ to $\pi/2$ the position of the minimum of $\Omega^{2SCb}\,|_{\cos\theta}$ moves to lower values of the gap $\Delta^{\prime}$. The value of $\Delta^{\prime}$ that minimizes the thermodynamical potential is found once the integration over the angle $\theta$ is performed. 3.1 Gap equation solutions We search for the stationary points of $\Omega^{2SCb}$ (7) with respect to the order parameters by solving the gap equations $$\frac{\delta\Omega^{2SCb}}{\delta\phi}=\frac{\delta\Omega^{2SCb}}{\delta\Delta}=\frac{\delta\Omega^{2SCb}}{\delta\Delta^{\prime}}=0$$ (18) using the dispersion relations (11) and (12). The results that are shown in this work are obtained for $T=0$. In Fig. 2 we show the chiral gap $\phi$, the 2SC diquark gap $\Delta$ and the spin-1 pairing gap of the blue quarks $\Delta^{\prime}$ as functions of the quark chemical potential $\mu$. The gaps $\Delta^{\prime}$ are strongly density-dependent rising functions and typically of the order of magnitude of keV, e.g., for a fixed $\mu$ at least two orders of magnitude smaller than the corresponding 2SC gaps. These small gaps are very sensitive to the form of the regularization and to the parameterization used.
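The effective variables (13)-(16) translate directly into code. The sketch below (Python/NumPy; the sample values of $\mu$, $M$, $|\Delta^{\prime}|$ and the Gaussian scale are illustrative only, not the paper's parameters) evaluates the anisotropic dispersion (12) and checks two limits stated in the text: at $\theta=0$ it reduces to the isotropic form, and for $\Delta^{\prime}=0$ the pairing gap disappears:

```python
import numpy as np

def E3_minus(p, theta, mu, M, Dprime, g):
    """Blue-quark quasiparticle energy E_3^-(p, theta), Eqs. (12)-(16).

    g is a form factor g(p); Dprime is |Delta'|; all quantities in MeV."""
    gp = g(p)
    mu_eff = np.sqrt(mu**2 + gp**2 * Dprime**2 * np.sin(theta)**2)    # Eq. (15)
    M_eff = M * mu / mu_eff                                           # Eq. (14)
    eps_eff = np.sqrt(p**2 + M_eff**2)                                # Eq. (13)
    D_eff_sq = Dprime**2 * (np.cos(theta)**2
                            + (M**2 / mu_eff**2) * np.sin(theta)**2)  # Eq. (16)
    return np.sqrt((eps_eff - mu_eff)**2 + gp**2 * D_eff_sq)          # Eq. (12)

# Illustrative parameters and a Gaussian form factor as in Eq. (4).
g = lambda p: np.exp(-p**2 / 600.0**2)
mu, M, Dprime = 400.0, 50.0, 0.5

# At theta = 0 the dispersion is the usual isotropic one.
p = 395.0
iso = np.sqrt((np.sqrt(p**2 + M**2) - mu)**2 + g(p)**2 * Dprime**2)
assert abs(E3_minus(p, 0.0, mu, M, Dprime, g) - iso) < 1e-9

# With Dprime = 0 the gap disappears and E_3^- reduces to |eps - mu|.
assert abs(E3_minus(p, 0.7, mu, M, 0.0, g)
           - abs(np.sqrt(p**2 + M**2) - mu)) < 1e-9
```

The angular integral of the text then amounts to averaging such evaluations over $\cos\theta$.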
Obviously, also the onset and the slope of the superconducting phases depend strongly on the parameters used. The effect of the nonlocality on the results is shown in the lower panel of Fig. 2: the smoothness of the Gaussian form factor reduces the $\Delta^{\prime}$ gaps dramatically. For this case, the blue quark pairing gaps are about two orders of magnitude lower than in the corresponding NJL model (both with fixed $M=380$ MeV). The dependence of the results on the parameters used is also rather strong and nonlinear, as is shown in Fig. 3 using, as an example, the Gaussian form factor. When the coupling constant $G_{3}$ is doubled (dash-dotted line) the resulting gaps increase between two and three orders of magnitude, depending on the chemical potential. Since these small gaps are practically negligible in comparison to usual 2SC gaps, they would have no influence on the equation of state. On the other hand, it is well known that even small pairing energies could play an important role in the calculation of transport properties of quark matter for temperatures below the order of magnitude of the gap parameter. Nevertheless, it is unlikely that the blue quark pairing could survive the compact star constraints: charge neutrality requires that the Fermi seas of the up and down quarks should differ by about 50-100 MeV, and this is much larger than the gaps that we obtain for these condensates ($\sim$ keV) in the symmetric case. In the following section we present the nonlocality effects on a flavor symmetric spin-1 pairing channel, which is a good candidate to survive the large mismatch between up and down quark Fermi seas in charge neutral quark matter. 4 Color-spin locking (CSL) phase for a nonlocal chiral model In the s-wave CSL phase introduced in Ref. [11], which differs from the CSL phase in [7], each condensate is a component of the antisymmetric anti-triplet in the color space and is locked with a vector component in the spin space.
In the present paper, we study a nonlocal generalization of the CSL pairing pattern of Ref. [11] and consider the matrix $\hat{\Delta}$ in (2) for the CSL channel as $$\hat{\Delta}^{CSL}=\Delta_{f}(\gamma_{3}\lambda_{2}+\gamma_{1}\lambda_{7}+\gamma_{2}\lambda_{5})\,.$$ (19) The thermodynamical potential can be decomposed into single-flavor components $$\Omega^{CSL}(T,\{\mu_{f}\})=\sum_{f\in\{u,d\}}\Omega^{CSL}_{f}(T,\mu_{f})\,,$$ (20) where the contribution of each flavor is $$\Omega^{CSL}_{f}(T,\mu_{f})=\frac{\phi_{f}^{2}}{8G_{1}}+3\frac{|\Delta_{f}|^{2}}{8H_{v}}-\sum_{k=1}^{6}\int\frac{d^{3}p}{(2\pi)^{3}}\left(E_{f,k}+2T\ln(1+e^{-E_{f,k}/T})\right)\,.$$ (21) The ratio of the two coupling constants, $$G_{1}:H_{v}=1:\frac{3}{8}\,,$$ (22) is obtained via Fierz transformations of the color currents for a one-gluon exchange interaction. To derive the dispersion relations $E_{f,k}$ we follow [11] and extend the expressions to the nonlocal model by introducing the form factors that modify the quark interaction.
We obtain that $E_{f;1,2}$ can be brought into the standard form $$E_{f;1,2}^{2}=(\varepsilon_{f,{\rm eff}}\mp\mu_{f,{\rm eff}})^{2}+|\Delta_{f,{\rm eff}}|^{2}g^{2}(p)$$ (23) if the effective variables are defined as $$\varepsilon_{f,{\rm eff}}^{2}=\vec{p}\,^{2}+M_{f,\rm eff}^{2}\,,$$ (24) $$M_{f,\rm eff}=\frac{\mu_{f}}{\mu_{f,{\rm eff}}}M_{f}(p)\,,$$ (25) $$\mu_{f,{\rm eff}}^{2}=\mu_{f}^{2}+|\Delta_{f}|^{2}g^{2}(p)\,,$$ (26) $$\Delta_{f,\rm eff}=\frac{M_{f}(p)}{\mu_{f,{\rm eff}}}|\Delta_{f}|\,.$$ (27) For $E_{f;k}$, $k=3,\ldots,6$, the particle and antiparticle branches split: $$E_{f;3,5}^{2}=(\varepsilon_{f}-\mu_{f})^{2}+a_{f;3,5}|\Delta_{f}|^{2}g^{2}(p)\,,$$ (28) $$E_{f;4,6}^{2}=(\varepsilon_{f}+\mu_{f})^{2}+a_{f;4,6}|\Delta_{f}|^{2}g^{2}(p)\,,$$ (29) where the momentum-dependent coefficients $a_{f;k}$, $k=3,\ldots,6$, are given by $$a_{f;3,5}=\frac{1}{2}\left[5-\frac{\vec{p}\,^{2}}{\varepsilon_{f}\mu_{f}}\pm\sqrt{\left(1-\frac{\vec{p}\,^{2}}{\varepsilon_{f}\mu_{f}}\right)^{2}+8\frac{M_{f}^{2}(p)}{\varepsilon_{f}^{2}}}\right]\,,$$ (30) $$a_{f;4,6}=\frac{1}{2}\left[5+\frac{\vec{p}\,^{2}}{\varepsilon_{f}\mu_{f}}\pm\sqrt{\left(1+\frac{\vec{p}\,^{2}}{\varepsilon_{f}\mu_{f}}\right)^{2}+8\frac{M_{f}^{2}(p)}{\varepsilon_{f}^{2}}}\right]\,,$$ (31) and $$\varepsilon_{f}^{2}=\vec{p}\,^{2}+M_{f}^{2}(p)\,.$$ (32) We solve the gap equations $$\frac{\delta\Omega^{CSL}_{f}}{\delta\phi_{f}}=\frac{\delta\Omega^{CSL}_{f}}{\delta\Delta_{f}}=0$$ (33) and present the
results of the global minimum of $\Omega^{CSL}_{f}$ in the next subsection. 4.1 Gap equation solutions for each flavor The mass gaps and the CSL gaps are shown in Fig. 4 for different form factors of the quark interaction. The CSL gaps are strongly $\mu_{f}$-dependent rising functions in the domain that is relevant for compact star applications. There is a systematic reduction of the CSL gaps as the form factors become smoother (from NJL to Gaussian), and the condensates in the nonlocal extension are at least one order of magnitude smaller than in the NJL case. Note that especially the low-density region is qualitatively determined by the form of the interaction. Since the nonlocality also affects the chiral gap, the breakdown of which is a prerequisite for the occurrence of color superconducting phases, we observe for the nonlocal models an earlier onset of superconducting quark matter ($\mu_{f,{\rm crit}}\leq 350$ MeV) than in the NJL model cases. As has been pointed out in [17], the position of the onset is crucial for stabilizing hybrid star configurations. In general, NJL models present a later onset than nonlocal ones [16] and might disfavor the occurrence of stable hybrid star configurations with a quark matter core [19]. In Fig. 5 we plot the effective CSL gaps, setting the explicit form factor dependence to $g(p)=1$ in (27), in order to compare their order of magnitude with that of sharp cut-off models. We obtain that the Gaussian result is an increasing function of the chemical potential, from approximately 15 keV to 35 keV, while the Lorentzian result is nearly constant, of the order of 80-90 keV. Both exhibit gaps that are much smaller than the corresponding NJL ones, which are in the range $\Delta_{f,{\rm eff}}\simeq$ 300-200 keV. 5 Results for matter in compact stars Since the CSL pairing is symmetric in flavor, we can easily construct electrically neutral quark matter in $\beta$-equilibrium for compact star applications.
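For reference before turning to stellar matter, the effective dispersion relations of Eqs. (23)-(27), which enter the comparison in Fig. 5, can be transcribed directly. This is a sketch; passing the form factor value $g=1$ mimics the sharp cut-off limit inside the cut-off.

```python
import math

def csl_effective_dispersion(p, M, mu, Delta, g=1.0):
    """E_{f;1,2} from Eq. (23) with the effective variables of Eqs. (24)-(27).
    p, M, mu, Delta in MeV; g is the form factor value g(p)."""
    mu_eff = math.sqrt(mu ** 2 + (Delta * g) ** 2)          # Eq. (26)
    M_eff = (mu / mu_eff) * M                               # Eq. (25)
    eps_eff = math.sqrt(p ** 2 + M_eff ** 2)                # Eq. (24)
    Delta_eff = (M / mu_eff) * Delta                        # Eq. (27)
    E1 = math.sqrt((eps_eff - mu_eff) ** 2 + (Delta_eff * g) ** 2)  # '-' branch
    E2 = math.sqrt((eps_eff + mu_eff) ** 2 + (Delta_eff * g) ** 2)  # '+' branch
    return E1, E2
```

For $\Delta_{f}=0$ the expressions collapse to the free dispersion $|\varepsilon_{f}\mp\mu_{f}|$, which is a quick consistency check of Eqs. (24)-(27).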
We consider stellar matter in the quark core of compact stars consisting of $\{u,d\}$ quarks and leptons $\{e,\nu_{e},\bar{\nu}_{e},\mu,\nu_{\mu},\bar{\nu}_{\mu}\}$. The particle densities $n_{j}$ are conjugate to the corresponding chemical potentials $\mu_{j}$ according to $$n_{j}=-\frac{\partial\Omega}{\partial\mu_{j}}\bigg|_{\phi_{0},\Delta_{0};T}\,,$$ (34) where the index $j$ denotes the particle species. We consider matter in $\beta$-equilibrium with electrons only; since we assume that neutrinos leave the star without being trapped ($\mu_{\bar{\nu}_{e}}=-\mu_{\nu_{e}}=0$, $\mu_{\bar{\nu}_{\mu}}=-\mu_{\nu_{\mu}}=0$), the equilibrium condition reads $$\mu_{e}=\mu_{d}-\mu_{u}\,.$$ (35) We are interested in neutral matter; therefore we impose that the total electric charge vanishes, $$\frac{2}{3}n_{u}-\frac{1}{3}n_{d}-n_{e}=0\,.$$ (36) The CSL condensates are color neutral, such that no color chemical potentials are needed and no further constraints need to be obeyed. We calculate the CSL gaps for each flavor as functions of the quark chemical potential $\mu=(\mu_{u}+2\mu_{d})/3$, $$\Delta_{f}=\Delta_{f}(\mu_{f}(\mu))\,,\qquad\Delta_{f,{\rm eff}}=\Delta_{f,{\rm eff}}(\mu_{f}(\mu))\,,$$ (37) where the functional relation $\mu_{f}(\mu)$ is taken from the $\beta$-equilibrated and neutral normal quark matter equation of state. The results for the CSL gaps and the effective gaps as functions of the chemical potential $\mu$ are shown in Fig. 6 and Fig. 7, respectively. From the parameterizations listed in Table 1, we choose the Gaussian set because we consider it the most promising for stable hybrid star configurations due to the early onset of the chiral and superconducting phase transitions in quark matter.
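The neutrality construction of Eqs. (35)-(36) can be sketched for a toy gas of free massless quarks and electrons. The densities below are the free-gas expressions, not the full nonlocal model; they merely illustrate the $\sim 50$-$100$ MeV splitting of the Fermi seas quoted earlier.

```python
import math

def neutral_matter(mu, tol=1e-10):
    """Solve Eqs. (35)-(36) for free massless quarks and electrons.
    Toy densities: n_q = mu_q^3 / pi^2 (spin x color degeneracy 6),
                   n_e = mu_e^3 / (3 pi^2) (degeneracy 2)."""
    def charge(mu_e):
        # From mu = (mu_u + 2 mu_d)/3 and mu_e = mu_d - mu_u (Eq. 35):
        mu_u = mu - 2.0 * mu_e / 3.0
        mu_d = mu + mu_e / 3.0
        n_u, n_d = mu_u ** 3 / math.pi ** 2, mu_d ** 3 / math.pi ** 2
        n_e = mu_e ** 3 / (3.0 * math.pi ** 2)
        return 2.0 / 3.0 * n_u - 1.0 / 3.0 * n_d - n_e    # Eq. (36)
    lo, hi = 0.0, mu          # charge(0) > 0, charge(mu) < 0: bracket the root
    while hi - lo > tol * mu:
        mid = 0.5 * (lo + hi)
        if charge(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    mu_e = 0.5 * (lo + hi)
    return mu - 2.0 * mu_e / 3.0, mu + mu_e / 3.0, mu_e   # mu_u, mu_d, mu_e
```

At $\mu=400$ MeV this free-gas sketch already produces a Fermi-sea mismatch $\mu_{d}-\mu_{u}=\mu_{e}$ of several tens of MeV, far above the keV-scale 2SCb gaps.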
From Figs. 6 and 7 we see that the two branches of the gap functions corresponding to the up and down quarks are split apart by the charge neutrality condition (thick lines). The smallest gap, $\Delta_{u}$, runs from $\approx 100$ keV near the onset to $\approx 500$ keV at $\mu=500$ MeV, while for the $d$ quarks $\Delta_{d}$ increases from $\approx 380$ keV to $1.4$ MeV in the same range. On the other hand, the effective gaps $\Delta_{f,{\rm eff}}$ are of the order of $\simeq 10$ keV, showing an approximately linear behaviour with $\mu$. It remains to be investigated whether such small effective gaps could effectively suppress the direct Urca process in quark matter, which is a requirement of compact star cooling phenomenology. In this respect, we found here that two facts produce a strong reduction of the CSL energy gaps. First, when we include smooth form factors in the effective interactions, the values of the gaps decrease dramatically relative to the NJL case (an order of magnitude, from $\simeq 100$ keV for NJL to $\simeq 10$ keV for Gaussian). Second, when neutrality constraints are considered, $\Delta_{u}$ and the effective gaps $\Delta_{f,{\rm eff}}$ are reduced further. However, as we have shown, there is a strong dependence of our results on the parameterization, and a more systematic investigation of the smoothness of the form factor could help to decide whether these phases are suitable for compact star applications. Moreover, this study should be seen as a preparatory step for subsequent investigations in which, for example, a covariant generalization of the formalism for the inclusion of nonlocality effects [20, 21] should be considered. 6 Conclusion We have studied the effect of nonlocality on spin-1 condensates in two-flavor quark matter: the 2SC+spin-1 pairing of the blue quarks (2SCb) and the color-spin locking (CSL) phase. We found that the size of these small gaps is very sensitive to the form of the regularization.
The nonlocality has a strong impact on the low-density region and we obtain an earlier onset for the superconducting phases. This might be crucial for stabilizing quark matter cores in hybrid stars. On the other hand, due to the flavor asymmetry, we find that the 2SCb pairing phase can be ruled out for compact star applications. The CSL phase, in contrast, is flavor independent and therefore inert against the constraint of electric neutrality. For electrically neutral quark matter in beta equilibrium we obtain effective CSL gaps of the order of $10$ keV, which might help to suppress the direct Urca process, in accordance with recent results from compact star cooling phenomenology. Nevertheless, since our results depend strongly on the parameters used and on the form of the regularization, more systematic studies are needed in order to decide whether the CSL phase can be applied in the description of compact stars. Acknowledgements We are grateful to J. Berdermann, M. Buballa, H. Grigorian, G. Röpke, N. Scoccola and D.N. Voskresensky for their comments and interest in our work. We acknowledge discussions with D. Rischke, I. Shovkovy and Q. Wang during the meetings of the Virtual Institute VH-VI-041 of the Helmholtz Association on Dense hadronic matter and QCD phase transition, and with the participants of the International Workshop on “The new physics of compact stars” at ECT* Trento, Italy. D.N.A. acknowledges support from the Landesgraduiertenförderung, Mecklenburg-Vorpommern, and from the DAAD-Antorchas collaboration project under grant number DE/04/27956. D.N.A. thanks the Tandar Laboratory, CNEA, Buenos Aires, where part of this work was concluded, for its hospitality. References [1] D. Blaschke, N. K. Glendenning and A. Sedrakian (Eds.): “Physics of neutron star interiors”, Springer, Lecture Notes in Physics, Vol. 578 (2001); D. Blaschke and D.
Sedrakian (Eds.): “Superdense QCD matter and compact stars”, Springer, NATO Science Series, Vol. 197 (2005). [2] D. N. Aguilera, D. Blaschke and H. Grigorian, Nucl. Phys. A 757 (2005) 527. [3] D. Blaschke, S. Fredriksson, H. Grigorian, A. M. Oztas and F. Sandin, Phys. Rev. D 72 (2005) 065020. [4] S. B. Ruster, V. Werth, M. Buballa, I. A. Shovkovy and D. H. Rischke, Phys. Rev. D 72 (2005) 034004. [5] T. Schafer, Phys. Rev. D 62 (2000) 094007. [6] M. G. Alford, J. A. Bowers, J. M. Cheyne and G. A. Cowan, Phys. Rev. D 67 (2003) 054018. [7] A. Schmitt, Phys. Rev. D 71 (2005) 054016. [8] D. Blaschke, D. N. Voskresensky and H. Grigorian, arXiv:hep-ph/0510368. [9] A. Schmitt, I. A. Shovkovy and Q. Wang, arXiv:hep-ph/0510347. [10] M. Buballa, J. Hosek and M. Oertel, Phys. Rev. Lett. 90 (2003) 182002. [11] D. N. Aguilera, D. Blaschke, M. Buballa and V. L. Yudichev, Phys. Rev. D 72 (2005) 034008. [12] H. Grigorian, D. Blaschke and D. Voskresensky, Phys. Rev. C 71 (2005) 045801. [13] S. Popov, H. Grigorian and D. Blaschke, Phys. Rev. C 74 (2006), in press; arXiv:nucl-th/0512098. [14] M. Baldo, M. Buballa, F. Burgio, F. Neumann, M. Oertel and H. J. Schulze, Phys. Lett. B 562 (2003) 153. [15] S. M. Schmidt, D. Blaschke and Y. L. Kalinovsky, Phys. Rev. C 50 (1994) 435. [16] D. Blaschke, S. Fredriksson, H. Grigorian and A. M. Oztas, Nucl. Phys. A 736 (2004) 203. [17] H. Grigorian, D. Blaschke and D. N. Aguilera, Phys. Rev. C 69 (2004) 065802. [18] M. Buballa, Phys. Rept. 407 (2005) 205. [19] M. Buballa, F. Neumann, M. Oertel and I. Shovkovy, Phys. Lett. B 595 (2004) 36. [20] R. S. Duhau, A. G. Grunfeld and N. N. Scoccola, Phys. Rev. D 70 (2004) 074026. [21] D. Blaschke, H. Grigorian, A. Khalatyan and D. N. Voskresensky, Nucl. Phys. Proc. Suppl. 141 (2005) 137; C. Gocke, D. Blaschke, A. Khalatyan and H. Grigorian, arXiv:hep-ph/0104183. [22] H. Grigorian, arXiv:hep-ph/0602238.
Specific heat of the mixed spin-1/2 and spin-$S$ Ising model with a rope ladder structure (presented at the CSMAG’07 Conference, Košice, 9-12 July 2007) J. KIŠŠOVÁ and J. STREČKA Department of Theoretical Physics and Astrophysics, Faculty of Science, P. J. Šafárik University, Park Angelinum 9, 040 01 Košice, Slovak Republic Abstract The mixed spin-1/2 and spin-$S$ ($S>1/2$) Ising model on a rope ladder is examined by combining two exact analytical methods. By the decoration-iteration mapping transformation, this mixed-spin system is first transformed to a simple spin-1/2 Ising model on the two-leg ladder, which is then exactly solved by the standard transfer-matrix method. The thermal variations of the zero-field specific heat are discussed in particular. \PACS 05.50.+q, 75.40.Cx 1 Introduction One-dimensional Ising models are of fundamental scientific interest partly because of their exact solubility [1] and partly because they represent the simplest lattice-statistical models, which have found rich applications in seemingly diverse research areas [2]. In the present work, we provide the exact solution for the mixed spin-1/2 and spin-$S$ ($S>1/2$) Ising model on a rope ladder, which represents a rather novel magnetic structure emerging in several insulating bimetallic coordination compounds [3]. 2 Model and its exact solution Consider the spin-1/2 Ising ladder of $2N$ sites with each of its horizontal and vertical bonds occupied by a decorating spin $S$ ($S>1/2$), as schematically shown in Fig. 1. For further convenience, the Hamiltonian of the model under investigation can be written as a sum of site Hamiltonians $${\cal H}=\sum_{k=1}^{3N}{\cal H}_{k},\qquad{\cal H}_{k}=-JS_{k}(\sigma_{k1}+\sigma_{k2})-DS_{k}^{2},$$ (1) where each site Hamiltonian ${\cal H}_{k}$ involves all the interaction terms of one decorating spin.
Above, $\sigma_{k\gamma}=\pm 1$ ($\gamma=1,2$) denote the two vertex spins of the rope ladder that are nearest neighbours to the decorating spin $S_{k}=-S,-S+1,\ldots,+S$. The parameter $J$ labels the pairwise exchange interaction between the nearest-neighbour spin pairs, while the parameter $D$ measures the strength of the single-ion anisotropy acting on the decorating spins. The crucial step of our procedure lies in the calculation of the partition function, which can first be partially factorized into the expression $${\cal Z}=\sum_{\left\{\sigma\right\}}\prod_{k=1}^{3N}\sum^{+S}_{n=-S}\exp\left(\beta Dn^{2}\right)\cosh\left[\beta Jn\left(\sigma_{k1}+\sigma_{k2}\right)\right],$$ (2) where $\beta=1/(k_{\rm B}T)$, $k_{\rm B}$ is Boltzmann’s constant, $T$ is the absolute temperature and the symbol $\sum_{\left\{\sigma\right\}}$ denotes the summation over all possible configurations of the vertex spins. Next, it is advisable to employ the generalized decoration-iteration mapping transformation [4] $$\sum^{+S}_{n=-S}\exp\left(\beta Dn^{2}\right)\cosh\left[\beta Jn\left(\sigma_{k1}+\sigma_{k2}\right)\right]=A\exp\left(\beta R\sigma_{k1}\sigma_{k2}\right),$$ (3) which physically corresponds to removing all the interaction parameters associated with the decorating spins and replacing them by a new effective interaction $R$ between the remaining vertex spins.
Since there are four possible spin states for each pair of vertex spins $\sigma_{k1}$ and $\sigma_{k2}$, and only two of them provide independent equations from the transformation formula (3), the mapping parameters $A$ and $R$ have to meet the following conditions: $$A^{2}=\left(\sum_{n=-S}^{+S}\exp(\beta Dn^{2})\cosh(2\beta Jn)\right)\left(\sum_{n=-S}^{+S}\exp(\beta Dn^{2})\right),$$ (4) $$\beta R=\frac{1}{2}\ln\left(\sum_{n=-S}^{+S}\exp(\beta Dn^{2})\cosh(2\beta Jn)\right)-\frac{1}{2}\ln\left(\sum_{n=-S}^{+S}\exp(\beta Dn^{2})\right).$$ (5) By substituting the transformation (3) into Eq. (2) one gains the relation $${\cal Z}(\beta,J,D)=A^{3N}{\cal Z}_{0}(\beta,R),$$ (6) which connects the partition function ${\cal Z}$ of the mixed-spin Ising model on the rope ladder with the partition function ${\cal Z}_{0}$ of the simple spin-1/2 Ising model on the two-leg ladder with the effective nearest-neighbour interaction $R$. It is worthwhile to remark that the partition function of the latter model can rather easily be calculated by applying the transfer-matrix method [1], which yields in the thermodynamic limit the exact result $${\cal Z}_{0}=\left\{\cosh(3\beta R)+\cosh(\beta R)+\sqrt{\left[\sinh(3\beta R)-\sinh(\beta R)\right]^{2}+4}\right\}^{N}.$$ (7) Bearing this in mind, the exact solution for the partition function of the mixed-spin Ising model on the rope ladder is formally completed: it is achieved by a mere substitution of the partition function (7) into the relationship (6), where both mapping parameters $A$ and $R$ are taken from Eqs. (4) and (5), respectively. 3 Results and discussion Now, let us focus on the thermal variations of the zero-field specific heat with the aim to shed light on how this dependence changes with the single-ion anisotropy.
Before discussing the most interesting results, it should be mentioned that the zero-field specific heat can readily be obtained from the exact result for the partition function (6) with the help of basic relations of statistical thermodynamics. Even though the final expression is too cumbersome to be written down here explicitly, it is nevertheless worth noting that it is independent of the sign of the exchange interaction $J$. In this respect, the temperature dependences of the zero-field specific heat discussed below hold regardless of whether a ferromagnetic ($J>0$) or a ferrimagnetic ($J<0$) spin system is considered. For illustration, Figs. 2a) and 2b) depict thermal variations of the zero-field specific heat for the rope ladder with the decorating spins $S=1$ and $S=3/2$. As one can see, the same general trends can be observed in both figures: the round maximum diminishes and shifts towards lower temperatures upon decreasing the single-ion anisotropy. However, the most interesting temperature dependences can be found at sufficiently negative single-ion anisotropies, close to $D/J=-2.0$ for $S=1$ and $D/J=-1.0$ for $S=3/2$, respectively. In the vicinity of both these boundary values, double-peak specific heat curves (the low-temperature peaks are shown in the insets for clarity) appear due to the competition between two different spin configurations sufficiently close in energy. Actually, it can easily be verified that the general condition ensuring an energetic equivalence between two different spin states of the decorating spins reads $D_{S\leftrightarrow S-1}=-2J/(2S-1)$. It should also be mentioned that zero-field specific heat curves with a single maximum are recovered upon further decrease of the single-ion anisotropy. In conclusion, it seems valuable to extend our calculation to the more general case of a non-zero magnetic field.
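The chain from Eqs. (4)-(5) through (6)-(7) to the specific heat can be reproduced numerically. The sketch below evaluates $c/k_{\rm B}=\beta^{2}\,\partial^{2}\ln z/\partial\beta^{2}$ per unit cell with a central second difference; the parameter values in the usage are illustrative.

```python
import math

def mapping_params(beta, J, D, S):
    """Mapping parameters A and beta*R of Eqs. (4)-(5); the spin sum runs
    over n = -S, -S+1, ..., +S for the decorating spin S."""
    ns = [-S + k for k in range(int(round(2 * S)) + 1)]
    s_plus = sum(math.exp(beta * D * n * n) * math.cosh(2 * beta * J * n) for n in ns)
    s_zero = sum(math.exp(beta * D * n * n) for n in ns)
    return math.sqrt(s_plus * s_zero), 0.5 * math.log(s_plus / s_zero)

def lambda_max(betaR):
    """Per-site partition function of the spin-1/2 two-leg ladder, Eq. (7)."""
    return (math.cosh(3 * betaR) + math.cosh(betaR)
            + math.sqrt((math.sinh(3 * betaR) - math.sinh(betaR)) ** 2 + 4.0))

def ln_z(beta, J, D, S):
    """ln Z / N from Eq. (6): Z = A^{3N} Z_0."""
    A, betaR = mapping_params(beta, J, D, S)
    return 3.0 * math.log(A) + math.log(lambda_max(betaR))

def specific_heat(T, J, D, S, h=1e-4):
    """Zero-field specific heat per unit cell, c/k_B = beta^2 d^2 ln z / d beta^2,
    obtained by a central second difference in beta."""
    b = 1.0 / T
    d2 = (ln_z(b + h, J, D, S) - 2.0 * ln_z(b, J, D, S) + ln_z(b - h, J, D, S)) / h ** 2
    return b * b * d2
```

Because $J$ enters Eqs. (4)-(5) only through $\cosh(2\beta Jn)$, this numerical specific heat is manifestly the same for $J$ and $-J$, matching the sign-independence noted above.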
Acknowledgments This work was supported by the Slovak Research and Development Agency under the contract No. LPP-0107-06 and the grant No. VEGA 1/2009/05. References [1] C. J. Thompson, Mathematical Statistical Mechanics, Princeton University Press, New Jersey, 1979. [2] C. J. Thompson, Classical Equilibrium Statistical Mechanics, Oxford University Press, New York, 1988. [3] M. Ohba, N. Maruono, H. Ōkawa et al., J. Am. Chem. Soc. 116, 11566 (1994); M. Ohba, N. Fukita, H. Ōkawa, J. Chem. Soc., Dalton Trans., 1733 (1997); M. L. Kahn, C. Mathonière and O. Kahn, Inorg. Chem. 38, 3692 (1999); H. Ōkawa, M. Ohba, Bull. Chem. Soc. Jpn. 75, 1191 (2002); M. S. El Fallah, J. Ribas, X. Solans, M. Font-Bardia, New J. Chem. 27, 895 (2003); Y. S. You, D. Kim, Y. Do, S. J. Oh, Ch. S. Hong, Inorg. Chem. 43, 6899 (2004). [4] M. E. Fisher, Phys. Rev. 113, 969 (1959).
Trade-off shapes diversity in eco-evolutionary dynamics Farnoush Farahpour Bioinformatics and Computational Biophysics, University of Duisburg-Essen, Germany    Mohammadkarim Saeedghalati Bioinformatics and Computational Biophysics, University of Duisburg-Essen, Germany    Verena Brauer Biofilm Center, University of Duisburg-Essen, Germany    Daniel Hoffmann Bioinformatics and Computational Biophysics, University of Duisburg-Essen, Germany Center for Medical Biotechnology, University of Duisburg-Essen, Germany Center for Computational Sciences and Simulation, University of Duisburg-Essen, Germany Center for Water and Environmental Research, University of Duisburg-Essen, Germany Abstract We introduce an Interaction and Trade-off based Eco-Evolutionary Model (ITEEM), in which species are competing for common resources in a well-mixed system, and their evolution in interaction trait space is subject to a life-history trade-off between replication rate and competitive ability. We demonstrate that the strength of the trade-off has a fundamental impact on eco-evolutionary dynamics, as it imposes four phases of diversity, including a sharp phase transition. Despite its minimalism, ITEEM produces without further ad hoc features a remarkable range of observed patterns of eco-evolutionary dynamics. Most notably we find self-organization towards structured communities with high and sustainable diversity, in which competing species form interaction cycles similar to rock-paper-scissors games. Our intuition separates the time scales of fast ecological and slow evolutionary dynamics, perhaps because we experience the former but not the latter. However, there is increasing experimental evidence that this intuition is wrong Messer et al. (2016); Hendry (2016); Carrol et al. (2007). This insight is challenging both ecological and evolutionary theory, but has also sparked efforts towards unified eco-evolutionary theories Carrol et al. (2007); Fussmann et al. 
(2007); Schoener (2011); Ferriere and Legendre (2012); Moya-Laraño et al. (2014); Hendry (2016). Here we contribute a new, minimalist model to these efforts, the Interaction and Trade-off based Eco-Evolutionary Model (ITEEM). The first key idea underlying ITEEM is that interactions between organisms, mainly competitive interactions, are central to ecology and to evolution Allesina and Levine (2011); Barraclough (2015); Weber et al. (2017); Coyte et al. (2015). This insight has inspired work on interaction network topology Knebel et al. (2013); Tang et al. (2014); Melo and Marroig (2015); Laird and Schamp (2015); Coyte et al. (2015), and on how these networks evolve and shape diversity Ginzburg et al. (1988); Solé (2002); Tokita and Yasutomi (2003); Drossel et al. (2001); Loeuille and Loreau (2009). The second key component of the model is a trade-off between interaction traits and replication rate: better competitors replicate less. Such trade-offs, probably rooted in differences of energy allocation between life-history traits, have been observed across biology Stearns (1989); Kneitel and Chase (2004); Agrawal et al. (2010); Maharjan et al. (2013); Ferenci (2016), and they were found to be important for emergence and stability of diversity Rees (1993); Bonsall (2004); de Mazancourt and Dieckmann (2004); Ferenci (2016). We show here that ITEEM dynamics closely resembles observed eco-evolutionary dynamics, such as sympatric speciation Drossel and McKane (2000); Coyne (2007); Bolnick and Fitzpatrick (2007); Herron and Doebeli (2013), emergence of two or more levels of differentiation similar to phylogenetic structures Barraclough et al. (2003), large and complex biodiversity over long times Herron and Doebeli (2013); Kvitek and Sherlock (2013), evolutionary collapses and extinctions Kärenlampi (2014); Solé (2002), and emergence of cycles in interaction networks that facilitate species diversification and coexistence Mathiesen et al. 
(2011); Bagrow and Brockmann (2013); Allesina and Levine (2011); Laird and Schamp (2015). Interestingly, the model shows a unimodal (“humpback”) behavior of diversity as a function of trade-off, with a critical trade-off at which biodiversity undergoes a phase transition, a behavior observed in nature Smith (2007); Vallina et al. (2014); Nathan et al. (2016). ITEEM has $N_{s}$ sites of undefined spatial arrangement (well-mixed system), each providing resources for one organism. We start an eco-evolutionary simulation with individuals of a single strain occupying a fraction of the $N_{s}$ sites. Note that in the following we use the term strain for a set of individuals with identical traits. In contrast, a species is a cluster of strains with some diversity (the cluster algorithm is described in Supplemental Material SM-1 SM ). At every generation or time step $t$, we attempt $N_{ind}(t)$ (the number of individuals) replications of randomly selected individuals. Each selected individual of a strain $\alpha$ can replicate with rate $r_{\alpha}$, with its offspring randomly mutated with rate $\mu$ to a new strain $\alpha^{\prime}$. An individual vanishes when it has reached its lifespan, drawn at birth from a Poisson distribution with overall fixed mean lifespan $\lambda$. Each newborn individual is assigned to a randomly selected site. If the site is empty, the new individual occupies it. If the site is already occupied, the new individual competes with the current holder in a life-or-death struggle; in that case, the surviving individual is determined probabilistically by the “interaction” $I_{\alpha\beta}$, defined for each pair of strains $\alpha$, $\beta$. $I_{\alpha\beta}$ is the survival probability of an $\alpha$ individual in a competitive encounter with a $\beta$ individual, with $I_{\alpha\beta}\in[0,1]$ and $I_{\alpha\beta}+I_{\beta\alpha}=1$.
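The site-update rule just described can be condensed into a one-line competitive encounter (a sketch; the arguments are strain indices into the interaction matrix):

```python
import random

def encounter(holder, newcomer, I, rng=random):
    """Resolve the competition for one occupied site: the newcomer survives
    with probability I[newcomer][holder]; otherwise the holder keeps the site."""
    return newcomer if rng.random() < I[newcomer][holder] else holder
```

For the extreme entries $I_{\alpha\beta}=1$ or $0$ the outcome is deterministic, which makes the rule easy to check.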
All interactions $I_{\alpha\beta}$ form an interaction matrix $\mathbf{I}(t)$ that encodes the outcomes of all possible competitive encounters. If strain $\alpha$ goes extinct, the $\alpha$th row and column of $\mathbf{I}$ are deleted. Conversely, if a mutation of $\alpha$ generates a new strain $\alpha^{\prime}$, $\mathbf{I}$ grows by one row and column: $$I_{\alpha^{\prime}\beta}=I_{\alpha\beta}+\eta_{\alpha^{\prime}\beta}\,,\qquad I_{\beta\alpha^{\prime}}=1-I_{\alpha^{\prime}\beta}\,,\qquad I_{\alpha^{\prime}\alpha^{\prime}}=\frac{1}{2}\,,$$ (1) where $\alpha^{\prime}$ inherits interactions from $\alpha$, but with a small random modification $\eta_{\alpha^{\prime}\beta}$, drawn from a zero-centered normal distribution of fixed width $m$. Row $\alpha$ of $\mathbf{I}$ can be considered the “interaction trait” $\mathbf{T}_{\alpha}=\left(I_{\alpha 1},I_{\alpha 2},\ldots,I_{\alpha N_{sp}(t)}\right)$ of strain $\alpha$, with $N_{sp}(t)$ the number of strains at time $t$. Evolutionary variation of mutants in ITEEM can represent any phenotypic variation that influences the direct interaction of species and their relative competitive abilities Thompson (1998); Thorpe et al. (2011); Bergstrom and Kerr (2015); Thompson (1999). To implement trade-offs between fecundity and competitive ability, we introduce a relation between the replication rate $r_{\alpha}$ (for fecundity) and the competitive ability $C$, defined as the average interaction $$C(\mathbf{T}_{\alpha})=\frac{1}{N_{sp}(t)-1}\sum_{\beta\neq\alpha}I_{\alpha\beta},$$ (2) and we let this relation vary with the trade-off parameter $s$: $$r(\mathbf{T}_{\alpha})=\left(1-C(\mathbf{T}_{\alpha})^{1/s}\right)^{s}.$$ (3) With Eq. 3, better competitive ability leads to lower fecundity and vice versa. Of course, other functional forms are conceivable.
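The bookkeeping of Eq. (1) when a mutant $\alpha^{\prime}$ appears amounts to appending one row and one column. A minimal sketch follows; clipping to $[0,1]$ is an assumption to keep entries valid probabilities, since the text does not state how boundary values are handled.

```python
import random

def mutate(I, alpha, m=0.05, rng=random):
    """Grow the interaction matrix I (list of lists) by a mutant of strain
    alpha, following Eq. (1): the mutant inherits row alpha plus Gaussian
    noise of width m; the antisymmetry I[b][a'] = 1 - I[a'][b] and the
    self-interaction I[a'][a'] = 1/2 are enforced."""
    n = len(I)
    new_row = [min(1.0, max(0.0, I[alpha][b] + rng.gauss(0.0, m)))  # clipped (assumption)
               for b in range(n)]
    for b in range(n):
        I[b].append(1.0 - new_row[b])   # column for the mutant
    I.append(new_row + [0.5])           # row for the mutant
    return I
```

Deleting an extinct strain is the mirror operation: remove its row and pop its column index from every remaining row.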
To systematically study effects of trade-off on dynamics we varied $s=-\log_{2}(1-\delta)$ with trade-off strength $\delta$ covering $[0,1]$ in equidistant steps (SM-2). The larger $\delta$, the stronger the trade-off. $\delta=0$ makes $r=1$ and thus independent of $C$. We compare ITEEM results to the corresponding results of a neutral model Hubbell (2001), where we have formally evolving vectors $\mathbf{T}_{\alpha}$, but fixed and uniform replication rates and interactions. Accordingly, the neutral model has no trade-off. ITEEM belongs to the well-established class of generalized Lotka-Volterra models in the sense that the mean-field version of our stochastic, agent-based model leads to competitive Lotka-Volterra equations (SM-3). Generation of diversity Our first question was whether ITEEM is able to generate and sustain diversity. Since we have a well-mixed system with initially only one strain, a positive answer implies sympatric diversification, i.e. the evolution of new strains and species without geographic isolation or resource partitioning. In fact, we observe in ITEEM evolution of new, distinct species, and emergence of sustainable high diversity (Fig. 1a). Remarkably, the emerging diversity has a clear hierarchical cluster structure (Fig. 1b): at the highest level we see well-separated clusters in trait space similar to biological species. Within these clusters there are sub-clusters of individual strains (SM-4) Barraclough et al. (2003). Both levels of diversity can be quantitatively identified as levels in the distribution of branch lengths in minimum spanning trees in trait space (SM-5). This hierarchical diversity is reminiscent of the phylogenetic structures in biology Barraclough et al. (2003). Overall, the model shows evolutionary divergence from one strain to several species consisting of a total of hundreds of co-existing strains over millions of generations (Fig. 1c, and SM-6.1). 
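Eq. (3) together with $s=-\log_{2}(1-\delta)$ gives the trade-off curve directly; for $\delta=0.5$ one gets $s=1$ and a linear trade-off $r=1-C$:

```python
import math

def replication_rate(C, delta):
    """Life-history trade-off of Eq. (3) with s = -log2(1 - delta):
    better competitive ability C means lower replication rate r."""
    if delta <= 0.0:
        return 1.0                        # delta = 0: r = 1, independent of C
    s = -math.log2(1.0 - delta)
    return (1.0 - C ** (1.0 / s)) ** s
```

The curve always runs from $r=1$ at $C=0$ to $r=0$ at $C=1$, and the trade-off strength $\delta$ only bends it between these endpoints.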
Depending on the trade-off parameter $\delta$, this high diversity is often sustainable over hundreds of thousands of generations. Collapses to low diversity occur rarely and are usually followed by a recovery of diversity (Fig. 1d, and SM-6.1). The observed divergence contradicts the long-held view of sequential fixation in asexual populations Muller (1932). Instead, we frequently see concurrent speciation, with the emergence of two or more species in quick succession (Fig. 1a), in agreement with recent results from long-term bacterial cultures Herron and Doebeli (2013); Maddamsetti et al. (2015); Kvitek and Sherlock (2013). Our model allows us to study speciation in detail, e.g. in terms of interaction network dynamics. The interaction matrix $\mathbf{I}$ defines a complete graph, and we determined direction and strength of the interaction edge between two strains $\alpha,\beta$ as the sign and size of $I_{\alpha\beta}-I_{\beta\alpha}$. Accordingly, for the interaction network of species (i.e. clusters of strains) we computed directed edges between any two species by averaging over the inter-cluster edges between the strains in these clusters (Fig. 1e). Three or more directed edges can form cycles of strains in which each strain competes successfully against one cycle neighbor but loses against the other neighbor, a configuration corresponding to rock-paper-scissors games Szolnoki et al. (2014). Such intransitive interactions have been observed in nature Sinervo and Lively (1996); Lankau and Strauss (2007); Bergstrom and Kerr (2015), and it has been shown that they stabilize a system driven by competitive interactions Mitarai et al. (2012); Allesina and Levine (2011); Mathiesen et al. (2011). In fact, we find that the increase of diversity, as measured by e.g. richness, entropy, or functional diversity (SM-6), coincides with the growth of the average cycle strength (Figs 1d, g and SM-7).
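Intransitive (rock-paper-scissors) triples can be read off the interaction matrix from the signs of $I_{\alpha\beta}-I_{\beta\alpha}$. A minimal sketch follows; the cycle-strength measure used in the figures additionally weights edges by their size, which is omitted here.

```python
from itertools import combinations

def rps_cycles(I):
    """List intransitive triples (a, b, c) with a beats b, b beats c, c beats a,
    where 'a beats b' means I[a][b] > I[b][a]."""
    def beats(a, b):
        return I[a][b] > I[b][a]
    cycles = []
    for a, b, c in combinations(range(len(I)), 3):
        if beats(a, b) and beats(b, c) and beats(c, a):
            cycles.append((a, b, c))
        elif beats(b, a) and beats(c, b) and beats(a, c):
            cycles.append((a, c, b))
    return cycles
```

A strictly transitive (hierarchical) matrix yields no triples at all, which is why the number and strength of such cycles serves as an indicator of structured, diversity-stabilizing communities.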
Impact of trade-off and lifespan on diversity

The eco-evolutionary dynamics described above depend on lifespan and on the trade-off between replication and competitive ability. To show this, we study properties of the interaction matrix and of trait diversity. Fig. 2 relates the average interaction rate $\left\langle I\right\rangle$ and the average cycle strength $\rho$ to the trade-off parameter $\delta$ at fixed lifespan $\lambda$. Fig. 2b summarizes the behavior of diversity as a function of $\delta$ and $\lambda$. Overall, we see in this phase diagram a weak dependency on $\lambda$ and a strong impact of $\delta$, with four distinct phases (I-IV) from low to high $\delta$. Without trade-off, strains do not have to sacrifice replication rate for better competitive abilities. We obtain a low-diversity population dominated by Darwinian demons, i.e. species with both high competitive ability and high replication rate. The quick predominance of such strategies impedes the formation of a diverse network. Increasing $\delta$ in phase I ($\delta\lesssim 0.2$) slightly increases $\left\langle I\right\rangle$ and $\rho$ (Fig. 2a): the biotic selection pressure exerted by inter-species interactions starts to generate diverse communities (left inset in Fig. 2b, SM-6). However, the weak trade-off still favors investing in higher competitive ability. When increasing $\delta$ further (phase II), the trade-off starts to force strains to choose between a higher replication rate $r$ and better competitive ability $C$. Neither extreme generates viable species: sacrificing $r$ completely for maximum $C$ stalls species dynamics, whereas maximum $r$ leads to inferior $C$. Thus strains seek middle-ground values in both $r$ and $C$. The nature of $C$ as a mean of interactions (Eq. 2) allows for many combinations of interaction traits with approximately the same mean. Thus, in a middle range of $r$ and $C$, many strategies with the same overall fitness are possible, which is a precondition for diversity.
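The degeneracy argument at the end of this paragraph is easy to make concrete. In the sketch below, the exact averaging of Eq. 2 is an assumption (Eq. 2 itself is not reproduced in this section); the point illustrated is only that many trait vectors share one mean and hence one $C$.

```python
# Sketch of competitive ability C as the mean of a strain's interaction
# traits, per the text's remark on "the nature of C as mean of interactions
# (Eq. 2)". ASSUMPTION: plain arithmetic mean; Eq. 2 may differ in detail.
def competitive_ability(interaction_traits):
    """Mean of the interaction traits of one strain."""
    return sum(interaction_traits) / len(interaction_traits)

# Many different trait combinations share the same mean, hence the same C:
# this degeneracy is what allows many strategies of equal overall fitness.
print(competitive_ability([0.2, 0.8]) == competitive_ability([0.5, 0.5]))  # -> True
```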
From this multitude of strategies, sets of trait combinations emerge in which strains with different combinations keep each other in check, e.g. in the form of the competitive rock-paper-scissors-like cycles between species described above. An equivalent interpretation is the emergence of diverse sets of non-overlapping compartments or trait-space niches (Fig. 1b,f). Diversity in this phase II is the highest and most stable (middle inset in Fig. 2b, SM-6). As $\delta$ approaches $0.7$, $\left\langle I\right\rangle$ and $\rho$ plummet (Fig. 2a) to interaction rates comparable to the noise level $m$ and to a cycle strength typical for the neutral model (horizontal gray ribbon in Fig. 2a), respectively. The sharp drop of $\left\langle I\right\rangle$ and $\rho$ at $\delta\approx 0.7$ is reminiscent of a phase transition. As expected for a phase transition, its steepness increases with system size (SM-8). For $\delta\gtrsim 0.7$ interaction rates never grow and no structure emerges; diversity remains low and close to that of a neutral system. The sharp transition at $\delta\approx 0.7$, which is visible in practically all diversity measures (between phases II and III in Fig. 2b, SM-6), is a transition from a system dominated by biotic selection pressure to a neutral system. In the high-trade-off phase III, any small change in $C$ changes $r$ drastically. For instance, given a strain $S$ with $r$ and $C$, a closely related mutant $S^{\prime}$ with $C^{\prime}\lessapprox C$ will have $r^{\prime}\gg r$ (because of the large trade-off), and therefore will invade $S$ quickly. Thus, diversity in phase III remains stable and low, characterized by a group of similar strains with no effective interaction and hence no diversification into distinct species (right inset in Fig. 2b, SM-6). In this high-trade-off regime, lifespan comes into play: here, decreasing $\lambda$ can make lives too short for replication. These hostile conditions minimize diversity and favor extinction (phase IV).
Trade-off, resource availability, and diversity

There is a well-known but not well understood unimodal relationship between biomass productivity and diversity (“humpback curve”, Smith (2007); Vallina et al. (2014)): diversity peaks at intermediate productivity. This behavior is reminiscent of horizontal sections through the phase diagram in Fig. 2b, though here the driving parameter is not productivity but trade-off. However, we can make the following argument for a monotonic relation between productivity and trade-off. First we note that biomass productivity is a function of available resources: the larger the available resources, the higher the productivity. This allows us to argue in terms of available resources. If a species has a high replication rate in an environment with scarce resources, its individuals will not be very competitive, since for each of the numerous offspring individuals there is little material or energy available. On the other hand, if a species under these resource-limited conditions has competitively constructed individuals, it cannot produce many of them. This corresponds to a strong trade-off between replication and competitive ability for scarce resources. At the opposite, rich end of the resource scale, species are not confronted with such hard choices between replication rate and competitive ability, i.e. we have a weak trade-off. Taken together, the trade-off axis should roughly correspond to the inverted resource axis: strong trade-off for poor resources (or low productivity), weak trade-off for rich resources (or high productivity); a detailed analytical derivation will be presented elsewhere. The fact that ITEEM produces this humpback curve that is frequently observed in planktonic systems Vallina et al. (2014) suggests trade-off as the underlying mechanism of this productivity-diversity relation.

Frequency-dependent selection

Observation of eco-evolutionary trajectories as in Fig.
1 suggested the hypothesis that speciation events in ITEEM simulations do not occur with a constant rate and independently of each other, but that one speciation makes a following speciation more likely. We therefore tested whether the distribution of time between speciation or extinction events is compatible with a constant rate Poisson process (SM-9). At long inter-event times we see the same decaying distribution for the Poisson process and for the ITEEM data. However, for shorter times there are significant deviations from a Poisson process for speciation and extinction events: at inter-event times of around $10\text{\textsuperscript{4}}$ the number of events decreases for a Poisson process but increases in ITEEM simulations. This confirms the above hypothesis that new species increase the probability for generation of further species, and additionally that loss of a species makes further losses more likely. This result is similar to the frequency-dependent selection observed in microbial systems where new species open new niches for further species, or the loss of species causes the loss of dependent species Herron and Doebeli (2013); Maddamsetti et al. (2015). Effect of mutation rate on diversity Simulations with different mutation rates ($\mu=10^{-4},5\times 10^{-4},10^{-3},5\times 10^{-3}$) show that in ITEEM diversity grows faster and to a higher level with increasing mutation rate, but without changing the overall structure of the phase diagram (SM-10). One interesting tendency is that for higher mutation rates, the lifespan becomes more important at the interface of regions III and IV (high trade-offs), leading to an expansion of region III at the expense of hostile region IV: long lifespans in combination with high mutation rate establish low but viable diversity at strong trade-offs. 
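The constant-rate null test described above can be sketched as follows. For a Poisson process the inter-event times are exponentially distributed, so an excess of short inter-event times signals clustered, frequency-dependent speciation. The data here are synthetic (a stand-in for the recorded speciation generations), and the rate value is an arbitrary illustration, not taken from the paper.

```python
# Sketch of the constant-rate (Poisson) null model for inter-event times.
# Synthetic data only: event_times stands in for recorded speciation events.
import random

random.seed(0)

def inter_event_times(event_times):
    """Gaps between consecutive events."""
    return [t1 - t0 for t0, t1 in zip(event_times, event_times[1:])]

rate = 1e-4          # events per generation (illustrative value)
t, events = 0.0, []
for _ in range(2000):
    t += random.expovariate(rate)  # exponential gaps = Poisson process
    events.append(t)

gaps = inter_event_times(events)
mean_gap = sum(gaps) / len(gaps)
# For a Poisson process the mean inter-event time is 1/rate:
print(abs(mean_gap * rate - 1.0) < 0.2)  # -> True
```

A full test would compare the empirical gap histogram of ITEEM events against this exponential reference, e.g. with a Kolmogorov-Smirnov statistic, as the deviation at short gaps is the signature of interest.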
Comparison of ITEEM with neutral model

The neutral model introduced in the Model section has no meaningful interaction traits, and consequently no meaningful competitive ability or trade-off with replication rate. Instead, it evolves solely by random drift in phenotype space. Similarly to ITEEM, the neutral model generates a clumpy structure in trait space (SM-11), though here the species clusters are much closer together and thus the functional diversity is much lower. This can be demonstrated quantitatively by the sizes of the minimum spanning trees of populations in trait space, which are much smaller and much less dynamic for the neutral model than for ITEEM at moderate trade-off (SM-11). For high trade-offs (region III, Fig. 2b), diversity and the number of strong cycles in ITEEM are comparable to the neutral model (Fig. 2a). Interaction-based eco-evolutionary models received some attention in the past Ginzburg et al. (1988); Solé (2002); Tokita and Yasutomi (2003); Shtilerman et al. (2015) but then were almost forgotten, despite remarkable results. We think that these works have pointed to a possible solution of a hard problem: the complexity of evolving ecosystems is immense, and it is therefore difficult to find a representation suitable for the development of a statistical mechanics that enables qualitative and quantitative analysis Weber et al. (2017). Modeling in terms of interaction traits, rather than detailed descriptions of genotypes or phenotypes, coarse-grains these complex systems in a natural, biologically meaningful way. Despite these advantages, interaction-based models so far have not shown some key features of real systems, e.g. the emergence of large, stable and complex diversity, or mass extinctions with subsequent recovery of diversity Tokita and Yasutomi (2003); Kärenlampi (2014).
Therefore, interaction-based models were supplemented by ad hoc features, such as special types of mutations Tokita and Yasutomi (2003), induced extinctions Vandewalle and Ausloos (1995), or enforcement of partially connected interaction graphs Kärenlampi (2014). Trade-offs between replication and competitive ability have now been experimentally established as essential to living systems Stearns (1989); Agrawal et al. (2010). Our results with ITEEM show that trade-offs fundamentally impact eco-evolutionary dynamics, in agreement with other eco-evolutionary models with trade-off Huisman et al. (2001); Bonsall (2004); de Mazancourt and Dieckmann (2004); Beardmore et al. (2011). Remarkably, we observe with ITEEM sustained high diversity in a well-mixed homogeneous system, without violating the competitive exclusion principle. This is possible because moderate life-history trade-offs force evolving species to adopt different strategies or, in other words, lead to the emergence of well-separated niches in interaction space. The current model has important limitations. For instance, the trade-off formulation was chosen to reflect reasonable properties in a minimalistic way; it should be revised or refined as more experimental data become available. Secondly, we have assumed a single, limiting resource in a well-mixed system in order to investigate the mechanisms behind diversification in competitive communities and the possibility of niche differentiation without resource partitioning or geographic isolation. However, in nature there will in general be several limiting resources and abiotic factors. It is possible to include these as additional rows and columns in the interaction matrix $\mathbf{I}$.
Despite its simplifications, ITEEM reproduces in a single framework several phenomena of eco-evolutionary dynamics that previously were addressed with a range of distinct models or not at all, namely sympatric and concurrent speciation with the emergence of new niches in the community, recovery after mass extinctions, large and sustained functional diversity with hierarchical organization, and a unimodal diversity distribution as a function of the trade-off between replication and competition. The model allows detailed analysis of mechanisms and could guide experimental tests.

Acknowledgements. We thank S. Moghimi-Araghi for helpful suggestions on the trade-off function.

References

Messer et al. (2016) P. W. Messer, S. P. Ellner, and N. G. Hairston, Trends Genet. 32, 408 (2016). Hendry (2016) A. P. Hendry, Eco-evolutionary Dynamics (Princeton University Press, Princeton, 2016). Carrol et al. (2007) S. P. Carrol, A. P. Hendry, D. N. Reznick, and C. W. Fox, Funct. Ecol. 21, 387 (2007). Fussmann et al. (2007) G. F. Fussmann, M. Loreau, and P. A. Abrams, Funct. Ecol. 21, 465 (2007). Schoener (2011) T. W. Schoener, Science 331, 426 (2011). Ferriere and Legendre (2012) R. Ferriere and S. Legendre, Philos. Trans. R. Soc. B Biol. Sci. 368, 20120081 (2012). Moya-Laraño et al. (2014) J. Moya-Laraño, J. Rowntree, and G. Woodward, Advances in Ecological Research: Eco-evolutionary dynamics, vol. 50 (Academic Press, 2014). Allesina and Levine (2011) S. Allesina and J. M. Levine, Proc. Natl. Acad. Sci. 108, 5638 (2011). Barraclough (2015) T. G. Barraclough, Annu. Rev. Ecol. Evol. Syst. 46, 25 (2015). Weber et al. (2017) M. G. Weber, C. E. Wagner, R. J. Best, L. J. Harmon, and B. Matthews, Trends Ecol. Evol. 32, 291 (2017). Coyte et al. (2015) K. Z. Coyte, J. Schluter, and K. R. Foster, Science 350, 663 (2015). Knebel et al. (2013) J. Knebel, T. Krüger, M. F. Weber, and E. Frey, Phys. Rev. Lett. 110, 168106 (2013). Tang et al. (2014) S. Tang, S. Pawar, and S.
Allesina, Ecol. Lett. 17, 1094 (2014). Melo and Marroig (2015) D. Melo and G. Marroig, Proc. Natl. Acad. Sci. 112, 470 (2015). Laird and Schamp (2015) R. A. Laird and B. S. Schamp, J. Theor. Biol. 365, 149 (2015). Ginzburg et al. (1988) L. R. Ginzburg, H. R. Akçakaya, and J. Kim, J. Theor. Biol. 133, 513 (1988). Solé (2002) R. V. Solé, in Biol. Evol. Stat. Phys. (Springer Berlin Heidelberg, Berlin, Heidelberg, 2002), pp. 312–337. Tokita and Yasutomi (2003) K. Tokita and A. Yasutomi, Theor. Popul. Biol. 63, 131 (2003). Drossel et al. (2001) B. Drossel, P. G. Higgs, and A. J. McKane, J. Theor. Biol. 208, 91 (2001). Loeuille and Loreau (2009) N. Loeuille and M. Loreau, in Community Ecol. (Oxford University Press, 2009), vol. 102, pp. 163–178. Stearns (1989) S. C. Stearns, Funct. Ecol. 3, 259 (1989). Kneitel and Chase (2004) J. M. Kneitel and J. M. Chase, Ecol. Lett. 7, 69 (2004). Agrawal et al. (2010) A. A. Agrawal, J. K. Conner, and S. Rasmann, in Evolution since Darwin: the first 150 years, edited by M. A. Bell, D. J. Futuyama, W. F. Eanes, and J. S. Levinton (Sinauer Associates, Inc., Sunderland, MA, 2010), chap. 10, pp. 243–268. Maharjan et al. (2013) R. Maharjan, S. Nilsson, J. Sung, K. Haynes, R. E. Beardmore, L. D. Hurst, T. Ferenci, and I. Gudelj, Ecol. Lett. 16, 1267 (2013). Ferenci (2016) T. Ferenci, Trends Microbiol. 24, 209 (2016). Rees (1993) M. Rees, Nature 366, 150 (1993). Bonsall (2004) M. B. Bonsall, Science 306, 111 (2004). de Mazancourt and Dieckmann (2004) C. de Mazancourt and U. Dieckmann, Am. Nat. 164, 765 (2004). Drossel and McKane (2000) B. Drossel and A. J. McKane, J. Theor. Biol. 204, 467 (2000). Coyne (2007) J. A. Coyne, Curr. Biol. 17, R787 (2007). Bolnick and Fitzpatrick (2007) D. I. Bolnick and B. M. Fitzpatrick, Annu. Rev. Ecol. Evol. Syst. 38, 459 (2007). Herron and Doebeli (2013) M. D. Herron and M. Doebeli, PLoS Biol. 11, e1001490 (2013). Barraclough et al. (2003) T. G. Barraclough, C. W. Birky, and A. Burt, Evolution
57, 2166 (2003). Kvitek and Sherlock (2013) D. J. Kvitek and G. Sherlock, PLoS Genet. 9, e1003972 (2013). Kärenlampi (2014) P. P. Kärenlampi, Eur. Phys. J. E 37, 56 (2014). Mathiesen et al. (2011) J. Mathiesen, N. Mitarai, K. Sneppen, and A. Trusina, Phys. Rev. Lett. 107, 188101 (2011). Bagrow and Brockmann (2013) J. P. Bagrow and D. Brockmann, Phys. Rev. X 3, 021016 (2013). Smith (2007) V. H. Smith, FEMS Microbiol. Ecol. 62, 181 (2007). Vallina et al. (2014) S. M. Vallina, M. J. Follows, S. Dutkiewicz, J. M. Montoya, P. Cermeno, and M. Loreau, Nat. Commun. 5, 4299 (2014). Nathan et al. (2016) J. Nathan, Y. Osem, M. Shachak, and E. Meron, J. Ecol. 104, 419 (2016). (41) See supplemental material (SM) at [url will be inserted by publisher] for more information, definitions and results. Thompson (1998) J. N. Thompson, Trends Ecol. Evol. 13, 329 (1998). Thorpe et al. (2011) A. S. Thorpe, E. T. Aschehoug, D. Z. Atwater, and R. M. Callaway, J. Ecol. 99, 729 (2011). Bergstrom and Kerr (2015) C. T. Bergstrom and B. Kerr, Nature 521, 431 (2015). Thompson (1999) J. N. Thompson, Science 284, 2116 (1999). Hubbell (2001) S. P. Hubbell, The Unified Neutral Theory of Biodiversity and Biogeography (Princeton University Press, Princeton, 2001). Muller (1932) H. J. Muller, Am. Nat. 66, 118 (1932). Maddamsetti et al. (2015) R. Maddamsetti, R. E. Lenski, and J. E. Barrick, Genetics 200, 619 (2015). Szolnoki et al. (2014) A. Szolnoki, M. Mobilia, L.-L. Jiang, B. Szczesny, A. M. Rucklidge, and M. Perc, J. R. Soc. Interface 11, 20140735 (2014). Sinervo and Lively (1996) B. Sinervo and C. M. Lively, Nature 380, 240 (1996). Lankau and Strauss (2007) R. A. Lankau and S. Y. Strauss, Science 317, 1561 (2007). Mitarai et al. (2012) N. Mitarai, J. Mathiesen, and K. Sneppen, Phys. Rev. E 86, 011929 (2012). Shtilerman et al. (2015) E. Shtilerman, D. A. Kessler, and N. M. Shnerb, J. Theor. Biol. 383, 138 (2015). Vandewalle and Ausloos (1995) N. Vandewalle and M.
Ausloos, Europhys. Lett. 32, 613 (1995). Huisman et al. (2001) J. Huisman, A. M. Johansson, E. O. Folmer, and F. J. Weissing, Ecol. Lett. 4, 408 (2001). Beardmore et al. (2011) R. E. Beardmore, I. Gudelj, D. A. Lipson, and L. D. Hurst, Nature 472, 342 (2011).
Poisson structures on almost complex Lie algebroids

Paul Popescu

Abstract

In this paper we extend the almost complex Poisson structures from almost complex manifolds to almost complex Lie algebroids. Examples of such structures are also given, and the almost complex Poisson morphisms of almost complex Lie algebroids are studied. 2010 Mathematics Subject Classification: 32Q60, 17B66, 53D17. Key Words: almost complex structure, Lie algebroid, Poisson structure

1 Introduction and preliminaries

1.1 Introduction

Poisson manifolds are smooth manifolds equipped with a Poisson bracket on their ring of smooth functions. The importance of Poisson structures, both in real and in complex geometry, is without question. Shortly after their introduction in the real case [23, 45], Poisson structures were studied in the context of different categories in differential geometry, such as the (almost) complex analytic category [5, 9, 15, 17, 21, 22, 24, 33, 34, 37, 46], the foliated category [4, 40, 41, 42] and the Banach category [7, 16, 31, 32]. In [24] Lichnerowicz introduced the general concept of a complex Poisson structure, as a $2$–vector $\pi^{2,0}$ of bidegree $(2,0)$ on a complex manifold $M$ such that $$[\pi^{2,0},\pi^{2,0}]=0\,,\,\,[\pi^{2,0},\overline{\pi^{2,0}}]=0.$$ Associated with this tensor, a bracket $\{\cdot,\cdot\}$ on the algebra of complex differentiable functions is defined, and when $\pi^{2,0}$ is holomorphic this bracket restricts to the algebra of holomorphic functions on the complex manifold, see also [37]. These structures, as well as holomorphic Poisson structures, play an important role in mathematics and mathematical physics, for example in the study of complex Hamiltonian systems, see for instance [3, 37]. The local structure of holomorphic Poisson manifolds was given in [37], just as in the real case [45].
Other generalizations, such as almost complex Poisson structures and $\partial$-symplectic and $\partial$-Poisson structures, were studied in [9] and [33], respectively; also, significant studies concerning holomorphic Poisson manifolds in relation to corresponding notions from the real case are given in the recent papers [21, 38]. Lie algebroids [25, 26] are generalizations of Lie algebras and integrable distributions. For some algebraic extensions see, for example, [2]. In fact, a Lie algebroid is an anchored vector bundle with a Lie bracket on its module of sections. The cotangent bundle of a Poisson manifold has a natural structure of a Lie algebroid, and there are many other connections between Poisson structures and Lie algebroids; for instance, for every Lie algebroid structure on an anchored vector bundle there is a specific linear Poisson structure on the corresponding dual vector bundle, and conversely, see [43, 44]. Almost complex Lie algebroids over almost complex manifolds were introduced in [6] as a natural extension of the notion of an almost complex manifold to that of an almost complex Lie algebroid. This generalizes the definition of an almost complex Poisson manifold given in [9], where some examples are also given. In [18] some problems concerning the geometry of almost complex Lie algebroids over smooth manifolds are studied in relation to corresponding notions from the geometry of almost complex manifolds. Also, we notice that Poisson structures on real Lie algebroids have been introduced and studied, see for instance [1, 10, 11, 12, 20, 30, 35]. The aim of this paper is to continue the study of Poisson structures, giving a natural generalization of almost complex Poisson structures from almost complex manifolds [9] to almost complex Lie algebroids. The paper is organized as follows.
In the preliminary section, we briefly recall some basic facts about Lie algebroids, as well as the Schouten-Nijenhuis bracket and Poisson structures on Lie algebroids. In the second section we briefly present almost complex Lie algebroids over smooth manifolds and also recall the Newlander-Nirenberg theorem and almost complex morphisms for almost complex Lie algebroids. Next we introduce the notion of an almost complex Poisson structure on almost complex Lie algebroids and describe some examples of such structures. Also, the complex Lichnerowicz-Poisson cohomology of almost complex Poisson Lie algebroids is introduced. We consider morphisms of almost complex Lie algebroids that preserve the almost complex Poisson structures and induce almost complex Poisson structures on Lie subalgebroids. Also, we characterize the morphisms of almost complex Poisson Lie algebroids in terms of their graphs, which are $J_{E}$–coisotropic Lie subalgebroids of the associated almost complex Poisson Lie algebroid given by the direct product structure. Compatibility conditions between complex structures and Poisson structures arise naturally in the real form of holomorphic Poisson manifolds and Lie algebroids; see, for example, [21]. These compatibility conditions are particular cases of Poisson-Nijenhuis structures as studied, for example, in [12, 20]. In the last section we give examples of integrable almost complex structures and almost complex Poisson structures that are not compatible and that arise naturally on Lie algebroids on each sphere $S^{2n-1}$, $n\geq 1$. These examples motivate the study of the almost complex Poisson Lie algebroids considered in this paper, where the almost complex and the Poisson structures are not subject to compatibility conditions. The main methods used here are similar and closely related to those used in [9] for the case of almost complex Poisson manifolds.

1.2 Preliminaries on Lie algebroids

Definition 1.1.
We say that $p:E\rightarrow M$ is an anchored vector bundle if there exists a vector bundle morphism $\rho:E\rightarrow TM$. The morphism $\rho$ will be called the anchor map. Definition 1.2. Let $(E,p,M)$ and $(E^{\prime},p^{\prime},M)$ be two anchored vector bundles over the same base $M$ with the anchors $\rho:E\rightarrow TM$ and $\rho^{\prime}:E^{\prime}\rightarrow TM$. A morphism of anchored vector bundles over $M$, or an $M$–morphism of anchored vector bundles, between $(E,\rho)$ and $(E^{\prime},\rho^{\prime})$ is a morphism of vector bundles $\varphi:(E,p,M)\rightarrow(E^{\prime},p^{\prime},M)$ such that $\rho^{\prime}\circ\varphi=\rho$. The anchored vector bundles over the same base $M$ form a category. The objects are the pairs $(E,\rho_{E})$ with $\rho_{E}$ the anchor of $E$, and a morphism $\phi:(E,\rho_{E})\rightarrow(F,\rho_{F})$ is a vector bundle morphism $\phi:E\rightarrow F$ which verifies the condition $\rho_{F}\circ\phi=\rho_{E}$. Let $p:E\rightarrow M$ be an anchored vector bundle with the anchor $\rho:E\rightarrow TM$ and the induced morphism $\rho_{E}:\Gamma(E)\rightarrow\mathcal{X}(M)$. Assume there is a bracket $[\cdot,\cdot]_{E}$ on $\Gamma(E)$ that provides a structure of a real Lie algebra on $\Gamma(E)$. Definition 1.3. The triple $(E,\rho_{E},[\cdot,\cdot]_{E})$ is called a Lie algebroid if i) $\rho_{E}:(\Gamma(E),[\cdot,\cdot]_{E})\rightarrow(\mathcal{X}(M),[\cdot,\cdot])$ is a Lie algebra homomorphism, i.e. $$\rho_{E}([s_{1},s_{2}]_{E})=[\rho_{E}(s_{1}),\rho_{E}(s_{2})];$$ ii) $[s_{1},fs_{2}]_{E}=f[s_{1},s_{2}]_{E}+\rho_{E}(s_{1})(f)s_{2}$, for every $s_{1},s_{2}\in\Gamma(E)$ and $f\in C^{\infty}(M)$. A Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ is said to be transitive if $\rho_{E}$ is surjective. There exists a canonical cohomology theory associated to a Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ over a smooth manifold $M$.
The space $C^{\infty}(M)$ is a $\Gamma(E)$-module relative to the representation $$\Gamma(E)\times C^{\infty}(M)\rightarrow C^{\infty}(M),\,\,(s,f)\mapsto\rho_{E}(s)f.$$ Following the well-known Chevalley-Eilenberg cohomology theory [8], we can introduce a cohomology complex associated to the Lie algebroid as follows. A $p$-linear mapping $$\omega:\Gamma(E)\times\ldots\times\Gamma(E)\rightarrow C^{\infty}(M)$$ is called a $C^{\infty}(M)$-valued $p$-cochain. Let $\mathcal{C}^{p}(E)$ denote the vector space of these cochains. The operator $d_{E}:\mathcal{C}^{p}(E)\rightarrow\mathcal{C}^{p+1}(E)$ given by $$(d_{E}\omega)(s_{0},\ldots,s_{p})=\sum_{i=0}^{p}(-1)^{i}\rho_{E}(s_{i})(\omega(s_{0},\ldots,\widehat{s_{i}},\ldots,s_{p}))+\sum_{0\leq i<j\leq p}(-1)^{i+j}\omega([s_{i},s_{j}]_{E},s_{0},\ldots,\widehat{s_{i}},\ldots,\widehat{s_{j}},\ldots,s_{p}),$$ (1.1) for $\omega\in\mathcal{C}^{p}(E)$ and $s_{0},\ldots,s_{p}\in\Gamma(E)$, defines a coboundary operator, since $d_{E}\circ d_{E}=0$. Hence, $(\mathcal{C}^{p}(E),d_{E})$, $p\geq 1$, is a differential complex and the corresponding cohomology spaces are called the cohomology groups of $\Gamma(E)$ with coefficients in $C^{\infty}(M)$. Lemma 1.1. If $\omega\in\mathcal{C}^{p}(E)$ is skew-symmetric and $C^{\infty}(M)$-linear, then $d_{E}\omega$ is also skew-symmetric and $C^{\infty}(M)$-linear. From now on, the subspace of skew-symmetric and $C^{\infty}(M)$-linear cochains of the space $\mathcal{C}^{p}(E)$ will be denoted by $\Omega^{p}(E)$ and its elements will be called $p$–forms on $E$ (or $p$–section forms on $E$). As in [11], the Lie algebroid cohomology $H^{p}(E)$ of $(E,\rho_{E},[\cdot,\cdot]_{E})$ is the cohomology of the subcomplex $(\Omega^{p}(E),d_{E})$, $p\geq 1$. It is a natural generalization of the Lie algebra case, as studied in [8]. Definition 1.4. Let $(E,\rho_{E},[\cdot,\cdot]_{E})$ and $(E^{\prime},\rho_{E^{\prime}},[\cdot,\cdot]_{E^{\prime}})$ be two Lie algebroids over $M$.
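For low degrees, the general formula (1.1) reduces to familiar expressions; the following is a direct specialization of (1.1), spelled out as a quick check of the coboundary operator.

```latex
% Specialization of (1.1) to p = 0 and p = 1.
% For f \in \mathcal{C}^{0}(E) = C^{\infty}(M):
(d_{E}f)(s) = \rho_{E}(s)f .
% For \omega \in \mathcal{C}^{1}(E):
(d_{E}\omega)(s_{0},s_{1})
  = \rho_{E}(s_{0})\bigl(\omega(s_{1})\bigr)
  - \rho_{E}(s_{1})\bigl(\omega(s_{0})\bigr)
  - \omega\bigl([s_{0},s_{1}]_{E}\bigr).
```

For $E=TM$ with $\rho_{E}=\mathrm{id}$, this is exactly the usual exterior derivative of a function and of a $1$-form.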
A morphism of Lie algebroids over $M$ is a morphism $\varphi:(E,\rho_{E})\rightarrow(E^{\prime},\rho_{E^{\prime}})$ of anchored vector bundles with the property that: $$d_{E}\circ\varphi^{*}=\varphi^{*}\circ d_{E^{\prime}},$$ (1.2) where $\varphi^{*}:\Omega^{p}(E^{\prime})\rightarrow\Omega^{p}(E)$ is defined by $$(\varphi^{*}\omega^{\prime})(s_{1},\ldots,s_{p})=\omega^{\prime}(\varphi(s_{1}),\ldots,\varphi(s_{p})),\,\omega^{\prime}\in\Omega^{p}(E^{\prime}),\,s_{1},\ldots,s_{p}\in\Gamma(E).$$ We also say that $\varphi$ is an $M$–morphism of Lie algebroids. Remark 1.1. Alternatively, we say that $\varphi:(E,\rho_{E})\rightarrow(E^{\prime},\rho_{E^{\prime}})$ is an $M$–morphism of Lie algebroids if $$\varphi\left([s_{1},s_{2}]_{E}\right)=\left[\varphi(s_{1}),\varphi(s_{2})\right]_{E^{\prime}}\,,\,\,\forall\,s_{1},s_{2}\in\Gamma(E).$$ (1.3) The Lie algebroids over the same manifold $M$ and all $M$–morphisms of Lie algebroids form a category, which (via a forgetful functor) is a subcategory of the category of anchored vector bundles over $M$. Let $(E,\rho_{E},[\cdot,\cdot]_{E})$ be a Lie algebroid over $M$. If we consider a local coordinate system $(x^{i})$, $i=1,\ldots,n$, on $M$ and a local basis of sections $\{e_{a}\}$, $a=1,\ldots,m$, of the bundle $E$, where $\dim M=n$ and $\mathrm{rank}\,E=m$, then $(x^{i},y^{a})$, $i=1,\ldots,n$, $a=1,\ldots,m$, are local coordinates on $E$. Also, for an element $y\in E$ such that $p(y)=x\in U\subset M$, we have $y=y^{a}e_{a}(p(y))$.
In such a local coordinate system, the anchor $\rho_{E}$ and the Lie bracket $[\cdot,\cdot]_{E}$ are expressed by the smooth functions $\rho^{i}_{a}$ and $C^{a}_{bc}$, namely $$\rho_{E}(e_{a})=\rho^{i}_{a}\frac{\partial}{\partial x^{i}}\,\,\mathrm{and}\,\,[e_{a},e_{b}]_{E}=C^{c}_{ab}e_{c}\,,\,i=1,\ldots,n,\,a,b,c=1,\ldots,m.$$ (1.4) The functions $\rho^{i}_{a}\,,\,C^{a}_{bc}\in C^{\infty}(M)$ given by the above relations are called the structure functions of the Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ in the given local coordinate system. If $(E,\rho_{E},[\cdot,\cdot]_{E})$ is a Lie algebroid over $M$, $(x^{i})$, $i=1,\ldots,n$, is a local coordinate system on $M$ and $\{e_{a}\}$, $a=1,\ldots,m$, is a local basis of sections of $E$, then the structure functions $\rho_{a}^{i}\,,\,C_{bc}^{a}\in C^{\infty}(M)$ of the Lie algebroid $E$ verify the following relations: $$\rho_{a}^{j}\frac{\partial\rho_{b}^{i}}{\partial x^{j}}-\rho_{b}^{j}\frac{\partial\rho_{a}^{i}}{\partial x^{j}}=\rho_{c}^{i}C_{ab}^{c}\,,\,C_{ab}^{c}=-C_{ba}^{c}\,,\,\sum_{cycl(a,b,c)}\left(\rho_{a}^{i}\frac{\partial C_{bc}^{d}}{\partial x^{i}}+C_{ab}^{e}C_{ce}^{d}\right)=0.$$ (1.5) The equations (1.5) are called the structure equations of the Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$.
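As a simple illustration of the structure functions, consider the tangent Lie algebroid $E=TM$ with $\rho_{E}=\mathrm{id}_{TM}$ and the coordinate frame $e_{a}=\partial/\partial x^{a}$; the following computation (a standard special case, not an additional result) shows that the structure equations (1.5) then hold trivially.

```latex
% For E = TM, e_a = \partial/\partial x^a, \rho_E = id:
\rho^{i}_{a} = \delta^{i}_{a}, \qquad
[e_{a},e_{b}]_{E}
  = \Bigl[\tfrac{\partial}{\partial x^{a}},
          \tfrac{\partial}{\partial x^{b}}\Bigr] = 0
  \;\Longrightarrow\; C^{c}_{ab} = 0 .
% First structure equation in (1.5):
\delta^{j}_{a}\,\frac{\partial \delta^{i}_{b}}{\partial x^{j}}
 - \delta^{j}_{b}\,\frac{\partial \delta^{i}_{a}}{\partial x^{j}}
 = 0 = \delta^{i}_{c}\,C^{c}_{ab},
% and the remaining equations are trivially satisfied since C = 0.
```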
1.3 Poisson structures on Lie algebroids

As in the classical setting of the geometry of Poisson manifolds [39], the Schouten-Nijenhuis bracket on a Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ is the $\mathbb{R}$–bilinear map $[\cdot,\cdot]_{E}:\mathcal{V}^{p}(E)\times\mathcal{V}^{q}(E)\rightarrow\mathcal{V}^{p+q-1}(E)$ defined by $$[s_{1}\wedge\ldots\wedge s_{p},t_{1}\wedge\ldots\wedge t_{q}]_{E}=(-1)^{p+1}\sum_{i=1}^{p}\sum_{j=1}^{q}(-1)^{i+j}[s_{i},t_{j}]_{E}\wedge s_{1}\wedge\ldots\wedge\widehat{s_{i}}\wedge\ldots\wedge s_{p}\wedge t_{1}\wedge\ldots\wedge\widehat{t_{j}}\wedge\ldots\wedge t_{q},$$ for every $s_{i},t_{j}\in\Gamma(E)$, [20], where $\mathcal{V}^{p}(E)=\Gamma(\bigwedge^{p}E)$ is the set of $p$–sections of $E$. Let us consider the bisection (i.e. contravariant, skew-symmetric $2$–section) $\pi_{E}\in\Gamma(\bigwedge^{2}E)$ locally given by $$\pi_{E}=\frac{1}{2}\pi^{ab}(x)e_{a}\wedge e_{b}.$$ (1.6) Definition 1.5. [20] The bisection $\pi_{E}\in\Gamma(\bigwedge^{2}E)$ is called a Poisson bisection on the Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ if $[\pi_{E},\pi_{E}]_{E}=0$. If $(x^{i})$, $i=1,\ldots,n$, is a local coordinate system on $M$ and $\{e_{a}\}$, $a=1,\ldots,m$, is a local basis of sections of $E$, then the condition $[\pi_{E},\pi_{E}]_{E}=0$ is locally expressed as $$\sum_{cycl(a,e,d)}\left(\pi^{ab}\rho_{b}^{i}\frac{\partial\pi^{ed}}{\partial x^{i}}+\pi^{ab}\pi^{cd}C_{bc}^{e}\right)=0.$$ (1.7) If $\pi_{E}$ is a Poisson bisection on the Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$, then the pair $(E,\pi_{E})$ is called a Poisson Lie algebroid.
A corresponding Poisson bracket on $C^{\infty}(M)$ is given by $$\{f,g\}=\pi_{E}\left(d_{E}f,d_{E}g\right)\,,\,\,f,g\in C^{\infty}(M).$$ (1.8) Also, we have the map $\pi_{E}^{\#}:\Omega^{1}(E)\rightarrow\Gamma(E)$ defined by $$\pi_{E}^{\#}\omega=\imath_{\omega}\pi_{E}\,,\,\,\omega\in\Omega^{1}(E).$$ (1.9) Let us consider now the bracket $$[\omega,\theta]_{\pi_{E}}=\mathcal{L}_{\pi_{E}^{\#}\omega}\theta-\mathcal{L}_{\pi_{E}^{\#}\theta}\omega-d_{E}(\pi_{E}(\omega,\theta)),$$ (1.10) where $\mathcal{L}$ denotes the Lie derivative in the Lie algebroid calculus framework, [27], and $\omega,\theta\in\Omega^{1}(E)$. With respect to this bracket and the usual Lie bracket on vector fields, the map $\widetilde{\rho}_{E}:\Gamma(E^{\ast})\rightarrow\mathcal{X}(M)$ defined by $$\widetilde{\rho}_{E}=\rho_{E}\circ\pi_{E}^{\#},$$ (1.11) is a Lie algebra homomorphism, namely $\widetilde{\rho}_{E}\left([\omega,\theta]_{\pi_{E}}\right)=[\widetilde{\rho}_{E}(\omega),\widetilde{\rho}_{E}(\theta)]$. Also, the bracket $[\cdot,\cdot]_{\pi_{E}}$ satisfies the Leibniz rule $$[\omega,f\theta]_{\pi_{E}}=f[\omega,\theta]_{\pi_{E}}+\widetilde{\rho}_{E}(\omega)(f)\theta,$$ and it follows that $(E^{\ast},[\cdot,\cdot]_{\pi_{E}},\widetilde{\rho}_{E})$ is a Lie algebroid, [20]. Moreover, we can define the Lichnerowicz type differential $d_{\pi_{E}}:\Omega^{p}(E^{\ast})\rightarrow\Omega^{p+1}(E^{\ast})$ by $$d_{\pi_{E}}\omega(s_{1},\ldots,s_{p+1})=\sum_{i=1}^{p+1}(-1)^{i+1}\widetilde{\rho}_{E}(s_{i})\left(\omega(s_{1},\ldots,\widehat{s_{i}},\ldots,s_{p+1})\right)+\sum_{1\leq i<j\leq p+1}(-1)^{i+j}\omega\left([s_{i},s_{j}]_{\pi_{E}},s_{1},\ldots,\widehat{s_{i}},\ldots,\widehat{s_{j}},\ldots,s_{p+1}\right),$$ for $s_{1},\ldots,s_{p+1}\in\Gamma(E^{\ast})=\Omega^{1}(E)$. We have $d_{\pi_{E}}\circ d_{\pi_{E}}=0$, and hence we obtain the cohomology of the Lie algebroid $(E^{\ast},[\cdot,\cdot]_{\pi_{E}},\widetilde{\rho}_{E})$, which generalizes the Lichnerowicz-Poisson cohomology of Poisson manifolds [23, 39].
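For orientation, it is useful to specialize the formulas above to the tangent Lie algebroid $E=TM$ (so $\rho^{i}_{b}=\delta^{i}_{b}$ and $C^{e}_{bc}=0$); this routine check, not an additional result of the paper, recovers the classical notions on a Poisson manifold.

```latex
% E = TM: the local condition (1.7) becomes the classical Jacobi
% condition for a Poisson bivector,
\sum_{cycl(a,e,d)} \pi^{ab}\,\frac{\partial \pi^{ed}}{\partial x^{b}} = 0,
% and the bracket (1.8) becomes the usual Poisson bracket
\{f,g\} = \pi^{ab}\,\frac{\partial f}{\partial x^{a}}\,
          \frac{\partial g}{\partial x^{b}} .
```

In the same special case, $(E^{\ast},[\cdot,\cdot]_{\pi_{E}},\widetilde{\rho}_{E})$ is the well-known cotangent Lie algebroid $T^{\ast}M$ of the Poisson manifold $(M,\pi)$ mentioned in the introduction.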
2 Poisson structures on almost complex Lie algebroids In the first subsection of this section we briefly present some elementary notions about almost complex Lie algebroids over smooth manifolds. In the second subsection we introduce the notion of almost complex Poisson structure on almost complex Lie algebroids and describe some examples of such structures. Also, the complex Lichnerowicz-Poisson cohomology of almost complex Poisson Lie algebroids is introduced. In the last subsection we consider the morphisms between almost complex Lie algebroids preserving the almost complex Poisson structures and induce almost complex Poisson structures on Lie subalgebroids. Also, we characterize the morphisms of almost complex Poisson Lie algebroids in terms of their graphs: a morphism is characterized by its graph being a $J_{E}$–coisotropic Lie subalgebroid of the almost complex Poisson Lie algebroid given by the direct product structure. 2.1 Almost complex Lie algebroids The definition of an almost complex Lie algebroid over an almost complex manifold, which generalizes the definition of an almost complex Poisson manifold given in [9], is given in [6] as follows. Consider a $2n$–dimensional almost complex manifold $M$ with an almost complex structure $J_{M}:\Gamma(TM)\rightarrow\Gamma(TM)$, and let $(E,\rho_{E},[\cdot,\cdot]_{E})$ be a Lie algebroid over $M$ with $\mathrm{rank}\,E=2m$. According to [6], an almost complex structure $J_{E}$ on $(E,\rho_{E},[\cdot,\cdot]_{E})$ is an endomorphism $J_{E}:\Gamma(E)\rightarrow\Gamma(E)$ such that $J_{E}^{2}=-\mathrm{id_{\Gamma(E)}}$ and $J_{M}\circ\rho_{E}=\rho_{E}\circ J_{E}$. However, for our purpose we do not need the relation $J_{M}\circ\rho_{E}=\rho_{E}\circ J_{E}$, and so we will consider almost complex Lie algebroids over smooth manifolds $M$. Definition 2.1. A real Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ over a smooth manifold $M$ endowed with an almost complex endomorphism (i.e.
an endomorphism $J_{E}$ of $E$ such that $J_{E}^{2}=-\mathrm{id_{\Gamma(E)}}$) will be called an almost complex Lie algebroid. The standard examples of almost complex Lie algebroids are given by the tangent bundle of an almost complex manifold $(M,J)$, the cotangent bundle of an almost complex Poisson manifold $(M,J,\pi^{2,0})$, the tangent bundle of a complex foliation, and the prolongation of a Lie algebroid over its vector bundle projection, see [18]. By complexification of the real vector bundle $E$ we obtain the complex vector bundle $E_{\mathbb{C}}:=E\otimes_{\mathbb{R}}\mathbb{C}\rightarrow M$ and, by extending the anchor map and the Lie bracket $\mathbb{C}$–linearly, we obtain a complex Lie algebroid $(E_{\mathbb{C}},[\cdot,\cdot]_{E},\rho_{E})$ with the anchor map $\rho_{E}:\Gamma(E_{\mathbb{C}})\rightarrow\Gamma(TM_{\mathbb{C}})$, that is, a homomorphism of the corresponding complexified Lie algebras, and $[s_{1},fs_{2}]_{E}=f[s_{1},s_{2}]_{E}+\rho_{E}(s_{1})(f)s_{2}$, for every $s_{1},s_{2}\in\Gamma(E_{\mathbb{C}})$ and $f\in C^{\infty}(M)_{\mathbb{C}}=C^{\infty}(M)\otimes_{\mathbb{R}}\mathbb{C}$. Also, extending $J_{E}$ $\mathbb{C}$–linearly, we obtain an almost complex structure $J_{E}$ on $E_{\mathbb{C}}$. As usual, we have a splitting $$E_{\mathbb{C}}=E^{1,0}\oplus E^{0,1}$$ according to the eigenvalues $\pm i$ of $J_{E}$ on $E_{\mathbb{C}}$. We also have $$\Gamma(E^{1,0})=\{s-iJ_{E}s\,|\,s\in\Gamma(E)\}\,,\,\Gamma(E^{0,1})=\{s+iJ_{E}s\,|\,s\in\Gamma(E)\}.$$ (2.1) Similarly, we have the splitting $$E^{*}_{\mathbb{C}}:=E^{*}\otimes_{\mathbb{R}}\mathbb{C}=(E^{1,0})^{*}\oplus(E^{0,1})^{*}$$ according to the eigenvalues $\pm i$ of $J^{*}_{E}$ on $E^{*}_{\mathbb{C}}$, where $J_{E}^{*}$ is the natural almost complex structure induced on $E^{*}$.
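The splitting of $E_{\mathbb{C}}$ is implemented by the projectors onto the $\pm i$ eigenbundles; the following elementary verification (stated here for completeness) shows that $s-iJ_{E}s$ is indeed an eigensection for the eigenvalue $+i$:

```latex
% Projectors onto the eigenbundles of J_E on E_C:
P^{1,0}=\tfrac{1}{2}\left(\mathrm{id}-iJ_{E}\right),\qquad
P^{0,1}=\tfrac{1}{2}\left(\mathrm{id}+iJ_{E}\right),
\qquad P^{1,0}+P^{0,1}=\mathrm{id},\quad P^{1,0}\circ P^{0,1}=0.
% Eigenvalue check, using J_E^2 = -id:
J_{E}(s-iJ_{E}s)=J_{E}s-iJ_{E}^{2}s=J_{E}s+is=i\,(s-iJ_{E}s).
```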
We also have $$\Gamma((E^{1,0})^{*})=\{\omega-iJ^{*}_{E}\omega\,|\,\omega\in\Gamma(E^{*})\}\,,\,\Gamma((E^{0,1})^{*})=\{\omega+iJ^{*}_{E}\omega\,|\,\omega\in\Gamma(E^{*})\}.$$ (2.2) We set $$\bigwedge^{p,q}(E)=\bigwedge^{p}(E^{1,0})^{*}\wedge\bigwedge^{q}(E^{0,1})^{*}\,\,\mathrm{and}\,\,\Omega^{p,q}(E)=\Gamma\left(\bigwedge^{p,q}(E)\right).$$ Then, the differential $d_{E}$ of the complex $\Omega^{\bullet}(E)=\bigoplus_{p,q}\Omega^{p,q}(E)$ splits into the sum $$d_{E}=\partial^{\prime}_{E}+\partial_{E}+\overline{\partial}_{E}+\partial^{\prime\prime}_{E},$$ where $$\partial^{\prime}_{E}:\Omega^{p,q}(E)\rightarrow\Omega^{p+2,q-1}(E)\,,\,\partial_{E}:\Omega^{p,q}(E)\rightarrow\Omega^{p+1,q}(E),$$ $$\overline{\partial}_{E}:\Omega^{p,q}(E)\rightarrow\Omega^{p,q+1}(E)\,,\,\partial^{\prime\prime}_{E}:\Omega^{p,q}(E)\rightarrow\Omega^{p-1,q+2}(E).$$ For an almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ we can consider the Nijenhuis tensor of $J_{E}$ defined by $$N_{J_{E}}(s_{1},s_{2})=[J_{E}s_{1},J_{E}s_{2}]_{E}-J_{E}[s_{1},J_{E}s_{2}]_{E}-J_{E}[J_{E}s_{1},s_{2}]_{E}-[s_{1},s_{2}]_{E}\,,\,\forall\,s_{1},s_{2}\in\Gamma(E).$$ (2.3) Definition 2.2. The almost complex structure $J_{E}$ of an almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ is called integrable if $N_{J_{E}}=0$. Also, using a standard procedure from almost complex geometry, [13, 47], we have proved the following Newlander-Nirenberg type theorem: Theorem 2.1.
[18] For an almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ over a smooth manifold $M$ the following assertions are equivalent: (i) If $s_{1},s_{2}\in\Gamma(E^{1,0})$ then $[s_{1},s_{2}]_{E}\in\Gamma(E^{1,0})$; (ii) If $s_{1},s_{2}\in\Gamma(E^{0,1})$ then $[s_{1},s_{2}]_{E}\in\Gamma(E^{0,1})$; (iii) $d_{E}\Omega^{1,0}(E)\subset\Omega^{2,0}(E)\oplus\Omega^{1,1}(E)$ and $d_{E}\Omega^{0,1}(E)\subset\Omega^{1,1}(E)\oplus\Omega^{0,2}(E)$; (iv) $d_{E}\Omega^{p,q}(E)\subset\Omega^{p+1,q}(E)\oplus\Omega^{p,q+1}(E)$; (v) the real Nijenhuis tensor $N_{J_{E}}$ from (2.3) vanishes, i.e. $J_{E}$ is integrable. The above Newlander-Nirenberg type theorem says that for any integrable almost complex structure $J_{E}$ on an almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ we have the usual decomposition $$d_{E}=\partial_{E}+\overline{\partial}_{E}.$$ From $d_{E}^{2}=d_{E}\circ d_{E}=0$ we obtain the following identities: $$\partial_{E}^{2}=\overline{\partial}_{E}^{2}=\partial_{E}\overline{\partial}_{E}+\overline{\partial}_{E}\partial_{E}=0.$$ (2.4) Hence, in this case we have a Dolbeault type Lie algebroid cohomology as the cohomology of the complex $(\Omega^{p,\bullet}(E),\overline{\partial}_{E})$. We notice that in most situations in this paper we will consider the case when $J_{E}$ is integrable. Let us now consider two almost complex Lie algebroids $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ and $(E^{\prime},\rho_{E^{\prime}},[\cdot,\cdot]_{E^{\prime}},J_{E^{\prime}})$ over a smooth manifold $M$. An $M$–morphism $\varphi$ of almost complex Lie algebroids is naturally extended by $\mathbb{C}$–linearity to $\varphi:(E_{\mathbb{C}},\rho_{E},J_{E})\rightarrow(E^{\prime}_{\mathbb{C}},\rho_{E^{\prime}},J_{E^{\prime}})$ over $M$, and it is called almost complex if $$\varphi\circ J_{E}=J_{E^{\prime}}\circ\varphi.$$ (2.5) Proposition 2.1.
[18] If $\varphi:(E,\rho_{E},J_{E})\rightarrow(E^{\prime},\rho_{E^{\prime}},J_{E^{\prime}})$ is an $M$–morphism of almost complex Lie algebroids over $M$, then the following assertions are equivalent: (i) If $s_{1}\in\Gamma(E^{1,0})$ then $\varphi(s_{1})\in\Gamma(E^{\prime 1,0})$; (ii) If $s_{1}\in\Gamma(E^{0,1})$ then $\varphi(s_{1})\in\Gamma(E^{\prime 0,1})$; (iii) If $\omega^{\prime}\in\Omega^{p,q}(E^{\prime})$ then $\varphi^{*}\omega^{\prime}\in\Omega^{p,q}(E)$, where $$(\varphi^{*}\omega^{\prime})(s_{1},\ldots,s_{p},t_{1},\ldots,t_{q})=\omega^{\prime}(\varphi(s_{1}),\ldots,\varphi(s_{p}),\varphi(t_{1}),\ldots,\varphi(t_{q}))$$ for any $s_{1},\ldots,s_{p}\in\Gamma(E^{1,0})$ and $t_{1},\ldots,t_{q}\in\Gamma(E^{0,1})$; (iv) The morphism $\varphi$ is almost complex. 2.2 Almost complex Poisson Lie algebroids Let $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ be an almost complex Lie algebroid over a smooth manifold $M$. Then the complexification $\mathcal{V}^{\bullet}_{\mathbb{C}}(E)=\mathcal{V}^{\bullet}(E)\otimes_{\mathbb{R}}\mathbb{C}$ of the Grassmann algebra of multisections of $E$ admits the decomposition $\mathcal{V}_{\mathbb{C}}^{k}(E)=\bigoplus_{p+q=k}\mathcal{V}^{p,q}(E)$, where we have put $\mathcal{V}^{p,q}(E)=\Gamma\left(\bigwedge^{p}E^{1,0}\wedge\bigwedge^{q}E^{0,1}\right)$. As in the usual case of almost complex Poisson manifolds, [9], with respect to the graduation above we have the following decomposition of the Schouten-Nijenhuis bracket on $E$: $$[S,T]_{E}\in\mathcal{V}^{p+r-2,q+s+1}(E)\oplus\mathcal{V}^{p+r-1,q+s}(E)\oplus\mathcal{V}^{p+r,q+s-1}(E)\oplus\mathcal{V}^{p+r+1,q+s-2}(E),$$ (2.6) for every $S\in\mathcal{V}^{p,q}(E)$ and $T\in\mathcal{V}^{r,s}(E)$. In particular, the bracket of two sections of type $(1,0)$ may not be of type $(1,0)$.
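By contrast, when $N_{J_{E}}=0$ the bracket of two $(1,0)$–sections does stay of type $(1,0)$; writing $s_{k}=u_{k}-iJ_{E}u_{k}$ as in (2.1), a short computation with (2.3) makes the role of the Nijenhuis tensor explicit:

```latex
% s_k = u_k - i J_E u_k, k = 1,2. Expanding the bracket:
[s_{1},s_{2}]_{E}
 =[u_{1},u_{2}]_{E}-[J_{E}u_{1},J_{E}u_{2}]_{E}
  -i\bigl([u_{1},J_{E}u_{2}]_{E}+[J_{E}u_{1},u_{2}]_{E}\bigr).
% If N_{J_E} = 0, then (2.3) gives
[J_{E}u_{1},J_{E}u_{2}]_{E}
 =[u_{1},u_{2}]_{E}+J_{E}[u_{1},J_{E}u_{2}]_{E}+J_{E}[J_{E}u_{1},u_{2}]_{E},
% hence, with w := [u_1, J_E u_2]_E + [J_E u_1, u_2]_E,
[s_{1},s_{2}]_{E}=-J_{E}w-iw=-i\,(w-iJ_{E}w)\in\Gamma(E^{1,0}).
```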
If the almost complex structure $J_{E}$ is integrable, then the Schouten-Nijenhuis bracket on $E$ has the property $$[S,T]_{E}\in\mathcal{V}^{p+r-1,q+s}(E)\oplus\mathcal{V}^{p+r,q+s-1}(E)$$ (2.7) for every $S\in\mathcal{V}^{p,q}(E)$ and $T\in\mathcal{V}^{r,s}(E)$. In particular, the bracket of two sections of type $(1,0)$ is again of type $(1,0)$. Definition 2.3. Let $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ be an almost complex Lie algebroid. Then a $(2,0)$–section $\pi^{2,0}_{E}\in\mathcal{V}^{2,0}(E)$ is called an almost complex Poisson structure if $$\left[\pi^{2,0}_{E},\pi^{2,0}_{E}\right]_{E}=0\,\,\mathrm{and}\,\,\left[\pi^{2,0}_{E},\overline{\pi^{2,0}_{E}}\right]_{E}=0.$$ (2.8) Associated to an almost complex Poisson structure on the almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ there is a bracket on the algebra of complex functions $C^{\infty}(M)_{\mathbb{C}}=C^{\infty}(M)\otimes_{\mathbb{R}}\mathbb{C}$, $$\{\cdot,\cdot\}:C^{\infty}(M)_{\mathbb{C}}\times C^{\infty}(M)_{\mathbb{C}}\rightarrow C^{\infty}(M)_{\mathbb{C}},$$ given by $\{f,g\}=\imath_{\pi_{E}^{2,0}}(d_{E}f\wedge d_{E}g)$, which satisfies the skew-symmetry condition, the Leibniz rule and the Jacobi identity (the latter being in fact equivalent to the first relation in (2.8)). Also, from (2.6), we easily obtain Proposition 2.2. If $\pi^{2,0}_{E}$ defines an almost complex Poisson structure on an almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$, then $\pi_{E}=\pi^{2,0}_{E}+\overline{\pi^{2,0}_{E}}$ and $\pi^{\prime}_{E}=i\left(\pi^{2,0}_{E}-\overline{\pi^{2,0}_{E}}\right)$ are real Poisson structures on $E$; the converse also holds. Remark 2.1. The converse in the above proposition does not hold if only one of the tensors $\pi_{E}$ or $\pi^{\prime}_{E}$ is real Poisson, unless $J_{E}$ is integrable. Example 2.1. ($\partial_{E}$–holomorphic symplectic structures).
According to [19], a symplectic structure on a Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ is defined by a non-degenerate $d_{E}$–closed $2$–form $\omega\in\Omega^{2}(E)$. For instance, it can be induced by a Kählerian structure $g$ on an almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$, see [18]. Similarly to [33, 34], when $J_{E}$ is integrable we can consider the notion of a $\partial_{E}$–symplectic structure on an almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$ as a non-degenerate $\partial_{E}$–closed $(2,0)$–form $\omega^{2,0}\in\Omega^{2,0}(E)$. As in the case of real symplectic structures on Lie algebroids, there exists the isomorphism $\mu:\Gamma(E^{1,0})\rightarrow\Omega^{1,0}(E)$ defined by $\mu(s)=\imath_{s}\omega^{2,0}$. Extending $\mu$ to a mapping on the associated Grassmann algebras, we consider the $(2,0)$–section $\pi_{E}^{2,0}=-\mu^{-1}(\omega^{2,0})\in\mathcal{V}^{2,0}(E)$. Equivalently, $\pi_{E}^{2,0}$ is defined by $\pi_{E}^{2,0}(\partial_{E}f,\partial_{E}g)=-\omega^{2,0}(s_{f},s_{g})$, where $s_{f}$ is the Hamiltonian section of $f\in C^{\infty}(M)_{\mathbb{C}}$, defined as the unique section of type $(1,0)$ on $E$ such that $\imath_{s_{f}}\omega^{2,0}=\partial_{E}f$.
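With these conventions, the bracket of two functions can be expressed through one Hamiltonian section alone; the following routine chain of equalities (using the skew-symmetry of $\omega^{2,0}$, the identification $\{f,g\}=\pi_{E}^{2,0}(\partial_{E}f,\partial_{E}g)$, and the fact that $s_{f},s_{g}$ are of type $(1,0)$) records this for later use:

```latex
\{f,g\}=\pi_{E}^{2,0}(\partial_{E}f,\partial_{E}g)
       =-\omega^{2,0}(s_{f},s_{g})
       =\omega^{2,0}(s_{g},s_{f})
       =(\imath_{s_{g}}\omega^{2,0})(s_{f})
       =(\partial_{E}g)(s_{f})
       =\rho_{E}(s_{f})(g),
% the last step because s_f is of type (1,0), so the
% (0,1)-component of d_E g does not contribute.
```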
In order to prove that $\pi_{E}^{2,0}$ defines an almost complex Poisson structure on the almost complex Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E},J_{E})$, we compute $$\imath_{[s_{f},s_{g}]_{E}}\omega^{2,0}=\mathcal{L}_{s_{f}}\imath_{s_{g}}\omega^{2,0}-\imath_{s_{g}}\mathcal{L}_{s_{f}}\omega^{2,0}=\imath_{s_{f}}d_{E}(\partial_{E}g)+d_{E}\imath_{s_{f}}\imath_{s_{g}}\omega^{2,0}-\imath_{s_{g}}d_{E}(\partial_{E}f),$$ and comparing the bidegrees of both sides of the above equality, we obtain (i) $[s_{f},s_{g}]_{E}=s_{\{f,g\}}$; (ii) $\imath_{s_{f}}\overline{\partial}_{E}\partial_{E}g-\imath_{s_{g}}\overline{\partial}_{E}\partial_{E}f-\overline{\partial}_{E}\{f,g\}=0$. Now, as a consequence of (i) and of the fact that $\omega^{2,0}$ is $\partial_{E}$–closed, we obtain the Jacobi identity, hence $\left[\pi_{E}^{2,0},\pi_{E}^{2,0}\right]_{E}=0$. If, in addition, we require the base manifold $M$ to be complex and $\omega^{2,0}$ to be a holomorphic $2$–form on $E$ (that is, a $(2,0)$–form on $E$ whose coefficients are holomorphic functions on $M$), then $\pi_{E}^{2,0}$ is a holomorphic $2$–section (or, equivalently, $\overline{\pi_{E}^{2,0}}$ is an anti-holomorphic $2$–section) and then the condition $\left[\pi_{E}^{2,0},\overline{\pi_{E}^{2,0}}\right]_{E}=0$ is trivially satisfied, so $\pi_{E}^{2,0}$ defines an almost complex Poisson structure on $E$. Remark 2.2. If we consider in the above example the particular case of the almost complex Lie algebroid $\left(T\mathcal{F},i_{\mathcal{F}},[\cdot,\cdot]_{\mathcal{F}},J_{\mathcal{F}}\right)$ associated to a complex foliation $\mathcal{F}$, see [18], then, similarly to Proposition 6.1 from [9], we obtain an almost complex Poisson structure $\pi_{\mathcal{F}}^{2,0}$ on $T\mathcal{F}$ from a foliated complex symplectic $2$–form $\omega_{\mathcal{F}}$, that is, a closed foliated $\mathcal{F}$–holomorphic $(2,0)$–form. Example 2.2.
(Complete lift to the prolongation of a Lie algebroid). For a Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$ with $\mathrm{rank}\,E=m$ we can consider the prolongation of $E$ over its vector bundle projection, see [14, 29, 36], which is a vector bundle $p_{L}:\mathcal{L}^{p}(E)\rightarrow E$ of rank $2m$ carrying a Lie algebroid structure over $E$. More precisely, $\mathcal{L}^{p}(E)$ is the subset of $E\times TE$ defined by $\mathcal{L}^{p}(E)=\{(u,z)\,|\,\rho_{E}(u)=p_{*}(z)\}$, where $p_{*}:TE\rightarrow TM$ is the tangent map of the bundle projection $p:E\rightarrow M$. The projection on the second factor $\rho_{\mathcal{L}^{p}(E)}:\mathcal{L}^{p}(E)\rightarrow TE$, given by $\rho_{\mathcal{L}^{p}(E)}(u,z)=z$, will be the anchor of the prolongation Lie algebroid $\left(\mathcal{L}^{p}(E),\rho_{\mathcal{L}^{p}(E)},[\cdot,\cdot]_{\mathcal{L}^{p}(E)}\right)$ over $E$. According to [29, 36], we can consider the vertical lift $s^{v}$ and the complete lift $s^{c}$ of a section $s\in\Gamma(E)$ as sections of $\mathcal{L}^{p}(E)$ as follows. The local basis of $\Gamma(\mathcal{L}^{p}(E))$ is given by $\left\{\mathcal{X}_{a}(u)=\left(e_{a}(p(u)),\rho_{a}^{i}\frac{\partial}{\partial x^{i}}|_{u}\right),\mathcal{V}_{a}=\left(0,\frac{\partial}{\partial y^{a}}\right)\right\}$, where $\left\{\frac{\partial}{\partial x^{i}},\frac{\partial}{\partial y^{a}}\right\}$, $i=1,\ldots,n=\dim M$, $a=1,\ldots,m=\mathrm{rank}\,E$, is the local basis on $TE$. Then, the vertical and complete lifts, respectively, of a section $s=s^{a}e_{a}\in\Gamma(E)$ are given by $$s^{v}=s^{a}\mathcal{V}_{a}\,,\,s^{c}=s^{a}\mathcal{X}_{a}+\left(\rho_{E}(e_{c})(s^{a})-C^{a}_{bc}s^{b}\right)y^{c}\mathcal{V}_{a}.$$ In particular, $e_{a}^{v}=\mathcal{V}_{a}$ and $e_{a}^{c}=\mathcal{X}_{a}-C^{b}_{ac}y^{c}\mathcal{V}_{b}$.
Now, if $\mathrm{rank}\,E=2m$ and $J_{E}$ is an almost complex structure on the Lie algebroid $(E,\rho_{E},[\cdot,\cdot]_{E})$, then $J_{E}^{c}$ is an almost complex structure on $\left(\mathcal{L}^{p}(E),\rho_{\mathcal{L}^{p}(E)},[\cdot,\cdot]_{\mathcal{L}^{p}(E)}\right)$ (because the complete lift of any polynomial expression in a tensor $T$ equals the same polynomial expression in $T^{c}$). In fact, if $J_{E}$ is expressed locally by $J_{E}=J_{b}^{a}e_{a}\otimes e^{b}$, then $J_{E}^{c}$ is locally given by $$J_{E}^{c}=\left(\rho_{E}(e_{c})(J^{a}_{b})-C^{a}_{db}J^{d}_{c}\right)y^{c}\mathcal{V}_{a}\otimes\mathcal{X}^{b}+J^{a}_{b}\mathcal{X}_{a}\otimes\mathcal{X}^{b}+J^{a}_{b}\mathcal{V}_{a}\otimes\mathcal{V}^{b},$$ where $\{\mathcal{X}^{a},\mathcal{V}^{b}\}$ is the dual basis of $\{\mathcal{X}_{a},\mathcal{V}_{b}\}$. Moreover, $J_{E}^{c}s^{v}=(J_{E}s)^{v}$ and $J_{E}^{c}s^{c}=(J_{E}s)^{c}$, so if $s\in\Gamma(E^{1,0})$ (or $s\in\Gamma(E^{0,1})$), then $s^{v},s^{c}\in\Gamma(\mathcal{L}^{p}(E)^{1,0})$ (or $s^{v},s^{c}\in\Gamma(\mathcal{L}^{p}(E)^{0,1})$). Taking into account now that $$(S\wedge T)^{v}=S^{v}\wedge T^{v}\,\,\mathrm{and}\,\,(S\wedge T)^{c}=S^{c}\wedge T^{v}+S^{v}\wedge T^{c},$$ (2.9) for $S,T\in\mathcal{V}^{\bullet}(E)$, it follows that the bigraduation with respect to the complex structures $J_{E}$ and $J_{E}^{c}$ is preserved when considering the vertical and complete lifts of multisections, that is, if $S\in\mathcal{V}^{p,q}(E)$, then $S^{v},S^{c}\in\mathcal{V}^{p,q}(\mathcal{L}^{p}(E))$. Furthermore, since $N_{J_{E}^{c}}=\left(N_{J_{E}}\right)^{c}$, it follows that $J_{E}^{c}$ is integrable if and only if $J_{E}$ is integrable.
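That $J_{E}^{c}$ squares to $-\mathrm{id}$ can also be seen directly on the generators, since the vertical and complete lifts of sections generate $\Gamma(\mathcal{L}^{p}(E))$ locally:

```latex
% Using J_E^c s^v = (J_E s)^v and J_E^c s^c = (J_E s)^c:
(J_{E}^{c})^{2}s^{v}=J_{E}^{c}(J_{E}s)^{v}=(J_{E}^{2}s)^{v}=-s^{v},
\qquad
(J_{E}^{c})^{2}s^{c}=J_{E}^{c}(J_{E}s)^{c}=(J_{E}^{2}s)^{c}=-s^{c},
% so (J_E^c)^2 = -id on all of L^p(E).
```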
Now, from (2.9) and $$[s_{1}^{v},s_{2}^{v}]_{\mathcal{L}^{p}(E)}=0\,,\,[s_{1}^{v},s_{2}^{c}]_{\mathcal{L}^{p}(E)}=[s_{1},s_{2}]^{v}_{E}\,,\,[s_{1}^{c},s_{2}^{c}]_{\mathcal{L}^{p}(E)}=[s_{1},s_{2}]^{c}_{E},$$ see [28], it follows that for the Schouten-Nijenhuis bracket we have $$[S,T]_{E}^{c}=[S^{c},T^{c}]_{\mathcal{L}^{p}(E)},$$ for $S,T\in\mathcal{V}^{\bullet}(E)$. Hence, if $\left(J_{E},\pi^{2,0}_{E}\right)$ is an almost complex Poisson structure on $E$, then $\left(J_{E}^{c},(\pi^{2,0}_{E})^{c}\right)$ is an almost complex Poisson structure on $\mathcal{L}^{p}(E)$. Example 2.3. Let us consider again the prolongation Lie algebroid $\left(\mathcal{L}^{p}(E),\rho_{\mathcal{L}^{p}(E)},[\cdot,\cdot]_{\mathcal{L}^{p}(E)}\right)$ over $E$ from the above example, and let $\nabla$ be a linear connection on the Lie algebroid $E$ (in particular, one may take a Riemannian Lie algebroid $(E,g)$ and the Levi-Civita connection). Then, the connection $\nabla$ leads to a natural decomposition of $\mathcal{L}^{p}(E)$ into vertical and horizontal subbundles $$\mathcal{L}^{p}(E)=H\mathcal{L}^{p}(E)\oplus V\mathcal{L}^{p}(E),$$ where $V\mathcal{L}^{p}(E)=\mathrm{span}\,\{\mathcal{V}_{a}\}$ and $H\mathcal{L}^{p}(E)=\mathrm{span}\,\{\mathcal{H}_{a}=\mathcal{X}_{a}-\Gamma^{b}_{ac}y^{c}\mathcal{V}_{b}\}$, where $\Gamma^{b}_{ac}(x)$ are the local coefficients of the linear connection $\nabla$. We notice that the above decomposition can also be obtained from a nonlinear (Ehresmann) connection.
The horizontal lift $s^{h}$ of a section $s=s^{a}e_{a}\in\Gamma(E)$ to $\mathcal{L}^{p}(E)$ is locally given by $$s^{h}=s^{a}\mathcal{H}_{a}=s^{a}(\mathcal{X}_{a}-\Gamma^{b}_{ac}y^{c}\mathcal{V}_{b}).$$ Then, there exists an almost complex structure $J^{1}_{\mathcal{L}^{p}(E)}$ on $\mathcal{L}^{p}(E)$, induced by the connection $\nabla$, given by $$J^{1}_{\mathcal{L}^{p}(E)}(s^{h})=s^{v}\,,\,J^{1}_{\mathcal{L}^{p}(E)}(s^{v})=-s^{h}.$$ A simple example of an almost complex Poisson structure on $\mathcal{L}^{p}(E)$ with respect to the almost complex structure $J^{1}_{\mathcal{L}^{p}(E)}$ can be constructed as follows: let $s_{1},s_{2}\in\Gamma(E)$; then $s_{1}^{h}-is_{1}^{v}$ and $s_{2}^{h}-is_{2}^{v}$ are complex sections of type $(1,0)$ on $\mathcal{L}^{p}(E)$, and we consider $$\pi^{2,0}_{\mathcal{L}^{p}(E)}=(s_{1}^{h}-is_{1}^{v})\wedge(s_{2}^{h}-is_{2}^{v})=s_{1}^{h}\wedge s_{2}^{h}-i(s_{1}^{h}\wedge s_{2}^{v}+s_{1}^{v}\wedge s_{2}^{h})-s_{1}^{v}\wedge s_{2}^{v}=s_{1}^{h}\wedge s_{2}^{h}-i(s_{1}\wedge s_{2})^{h}-(s_{1}\wedge s_{2})^{v}.$$ Taking into account that $$[s_{1}^{h},s_{2}^{h}]_{\mathcal{L}^{p}(E)}=[s_{1},s_{2}]_{E}^{h}+(R(s_{1},s_{2})u)^{v}\,,\,[s_{1}^{h},s_{2}^{v}]_{\mathcal{L}^{p}(E)}=\left(\nabla_{s_{1}}s_{2}\right)^{v}\,,\,[s_{1}^{v},s_{2}^{v}]_{\mathcal{L}^{p}(E)}=0,$$ it follows that $\pi^{2,0}_{\mathcal{L}^{p}(E)}$ is an almost complex Poisson structure on $\mathcal{L}^{p}(E)$ if $[s_{1},s_{2}]_{E}=0$, $\nabla_{s_{1}}s_{2}=0$, $R(s_{1},s_{2})=0$ and $T(s_{1},s_{2})=0$ (in particular, if $\nabla$ is the Levi-Civita connection of a Riemannian metric on $E$, then $T=0$ automatically). Here, the curvature $R$ satisfies $R=-N_{h}$, where $N_{h}$ is the Nijenhuis tensor of the horizontal projection.
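The sufficiency of these hypotheses can be traced through the bracket relations for the lifts. Writing $A=s_{1}^{h}-is_{1}^{v}$ and $B=s_{2}^{h}-is_{2}^{v}$, and assuming, as is implicit in the hypotheses, that $\nabla_{s_{i}}s_{j}=0$ for all $i,j$, all pairwise brackets of the factors of $A\wedge B$ and of its conjugate vanish, so both conditions of (2.8) hold. In sketch:

```latex
[A,B]_{\mathcal{L}^{p}(E)}
 =[s_{1}^{h},s_{2}^{h}]-i[s_{1}^{h},s_{2}^{v}]
  -i[s_{1}^{v},s_{2}^{h}]-[s_{1}^{v},s_{2}^{v}]
 =\bigl([s_{1},s_{2}]_{E}\bigr)^{h}+(R(s_{1},s_{2})u)^{v}
  -i(\nabla_{s_{1}}s_{2})^{v}+i(\nabla_{s_{2}}s_{1})^{v}=0,
% using [s_1,s_2]_E = 0, R(s_1,s_2) = 0 and
% nabla_{s_1}s_2 = nabla_{s_2}s_1 = 0 (the latter from T = 0).
% Since the Schouten bracket of decomposable bisections reduces
% to brackets of their factors, [A ^ B, A ^ B] = 0 and likewise
% for the conjugate, giving (2.8).
```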
If, moreover, $\mathrm{rank}\,E=2m$ and $J_{E}$ is an almost complex structure on $E$, then we can consider its horizontal lift $J_{E}^{h}$ on $\mathcal{L}^{p}(E)$, which is again an almost complex structure, such that $J_{E}^{h}(s^{v})=(J_{E}s)^{v}$ and $J_{E}^{h}(s^{h})=(J_{E}s)^{h}$. It is easy to see that the bigraduations with respect to $J_{E}$ and $J_{E}^{h}$ are preserved by vertical and horizontal lifts and hence the determination of almost complex Poisson structures on $\mathcal{L}^{p}(E)$ with respect to the almost complex structure $J_{E}^{h}$ can be of some interest. Example 2.4. (Direct product structure). The direct product of two Lie algebroids $\left(E_{1},\rho_{E_{1}},[\cdot,\cdot]_{E_{1}}\right)$ over $M_{1}$ and $\left(E_{2},\rho_{E_{2}},[\cdot,\cdot]_{E_{2}}\right)$ over $M_{2}$ is defined in [26, pg. 155] as a Lie algebroid structure on $E_{1}\times E_{2}\rightarrow M_{1}\times M_{2}$. Let us briefly recall this construction. The general sections of $E_{1}\times E_{2}$ are of the form $s=\sum(f_{i}\otimes s_{i}^{1})\oplus\sum(g_{j}\otimes s_{j}^{2})$, where $f_{i},g_{j}\in C^{\infty}(M_{1}\times M_{2})$, $s_{i}^{1}\in\Gamma(E_{1})$, $s_{j}^{2}\in\Gamma(E_{2})$, and the anchor map is defined by $$\rho_{E}\left(\sum(f_{i}\otimes s_{i}^{1})\oplus\sum(g_{j}\otimes s_{j}^{2})\right)=\sum(f_{i}\otimes\rho_{E_{1}}(s_{i}^{1}))\oplus\sum(g_{j}\otimes\rho_{E_{2}}(s_{j}^{2})).$$ Imposing the conditions $$[1\otimes s^{1},1\otimes t^{1}]_{E}=1\otimes[s^{1},t^{1}]_{E_{1}}\,,\,[1\otimes s^{1},1\otimes t^{2}]_{E}=0,$$ $$[1\otimes s^{2},1\otimes t^{2}]_{E}=1\otimes[s^{2},t^{2}]_{E_{2}}\,,\,[1\otimes s^{2},1\otimes t^{1}]_{E}=0,$$ for every $s^{1},t^{1}\in\Gamma(E_{1})$ and $s^{2},t^{2}\in\Gamma(E_{2})$, it follows that for $s=\sum(f_{i}\otimes s_{i}^{1})\oplus\sum(g_{j}\otimes s_{j}^{2})$ and $s^{\prime}=\sum(f_{k}^{\prime}\otimes s_{k}^{\prime 1})\oplus\sum(g_{l}^{\prime}\otimes s_{l}^{\prime 2})$, we have, using the Leibniz rule, the following
expression for the bracket on $E=E_{1}\times E_{2}$: $$[s,s^{\prime}]_{E}=\left(\sum f_{i}f_{k}^{\prime}\otimes[s_{i}^{1},s_{k}^{\prime 1}]_{E_{1}}+\sum f_{i}\rho_{E_{1}}(s_{i}^{1})(f_{k}^{\prime})\otimes s_{k}^{\prime 1}-\sum f_{k}^{\prime}\rho_{E_{1}}(s_{k}^{\prime 1})(f_{i})\otimes s_{i}^{1}\right)\oplus\left(\sum g_{j}g_{l}^{\prime}\otimes[s_{j}^{2},s_{l}^{\prime 2}]_{E_{2}}+\sum g_{j}\rho_{E_{2}}(s_{j}^{2})(g_{l}^{\prime})\otimes s_{l}^{\prime 2}-\sum g_{l}^{\prime}\rho_{E_{2}}(s_{l}^{\prime 2})(g_{j})\otimes s_{j}^{2}\right).$$ Now, we consider $E_{1}$ and $E_{2}$ endowed with almost complex Poisson structures $\left(J_{E_{1}},\pi_{E_{1}}^{2,0}\right)$ and $\left(J_{E_{2}},\pi_{E_{2}}^{2,0}\right)$, respectively. We define the almost complex structure $J_{E}$ on $E=E_{1}\times E_{2}$ by $J_{E}(s)=\sum(f_{i}\otimes J_{E_{1}}(s_{i}^{1}))\oplus\sum(g_{j}\otimes J_{E_{2}}(s_{j}^{2}))$ and the $2$–section $\pi_{E}^{2,0}=\pi_{E_{1}}^{2,0}+\pi_{E_{2}}^{2,0}$. Since $[s^{1},s^{2}]_{E}=0$ for every $s^{1}\in\Gamma(E_{1})$, $s^{2}\in\Gamma(E_{2})$, it follows that $\pi_{E}^{2,0}$ is an almost complex Poisson structure on the direct product $E_{1}\times E_{2}$. Let now $\pi^{2,0}_{E}$ be an almost complex Poisson structure on the almost complex Lie algebroid $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E}\right)$. It is well known that, associated to the real Poisson structure $\pi_{E}=\pi^{2,0}_{E}+\overline{\pi^{2,0}_{E}}$, there is a morphism $$\pi_{E}^{\#}:\Omega^{1}(E)\rightarrow\mathcal{V}^{1}(E),$$ given by $\pi^{\#}_{E}(\varphi)(\psi)=\pi_{E}(\varphi,\psi)$ for $\varphi,\psi\in\Omega^{1}(E)$.
Then, we can consider the complexified spaces and extend $\pi^{\#}_{E}$ to the complex Grassmann algebras as follows: let $I^{*}_{E}$ be the adjoint of $I_{E}=\pi^{\#}_{E}$ (it is easy to see that $I_{E}^{*}=-I_{E}$); then we have $$I_{E}:\Omega_{\mathbb{C}}^{p}(E)\rightarrow\mathcal{V}_{\mathbb{C}}^{p}(E)\,,\,I_{E}(\varphi)(\varphi_{1},\ldots,\varphi_{p})=\varphi(I_{E}^{*}\varphi_{1},\ldots,I_{E}^{*}\varphi_{p}),$$ for $\varphi\in\Omega^{p}_{\mathbb{C}}(E)$ and $\varphi_{1},\ldots,\varphi_{p}\in\Omega^{1}_{\mathbb{C}}(E)$. For the $(2,0)$–section $\pi_{E}^{2,0}$ one can similarly define the morphisms $$\left(\pi_{E}^{2,0}\right)^{\#}:\Omega^{1,0}(E)\rightarrow\mathcal{V}^{1,0}(E)\,,\,\left(\overline{\pi_{E}^{2,0}}\right)^{\#}:\Omega^{0,1}(E)\rightarrow\mathcal{V}^{0,1}(E).$$ Since $\pi_{E}^{\#}|_{\Omega^{1,0}(E)}=\left(\pi_{E}^{2,0}\right)^{\#}$, $\pi_{E}^{\#}|_{\Omega^{0,1}(E)}=\left(\overline{\pi_{E}^{2,0}}\right)^{\#}$ and $I_{E}$ is a morphism of Grassmann algebras, $I_{E}$ preserves the bigraduation with respect to $J_{E}$. Now, let $\pi^{2,0}_{E}$ be an almost complex Poisson structure on the almost complex Lie algebroid $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E}\right)$ such that $J_{E}$ is integrable. We define the following operators: (i) $\sigma_{E}:\mathcal{V}_{\mathbb{C}}^{p}(E)\rightarrow\mathcal{V}_{\mathbb{C}}^{p+1}(E)$, by $\sigma_{E}(S)=-\left[S,\pi_{E}\right]_{E}$, for $S\in\mathcal{V}_{\mathbb{C}}^{p}(E)$; (ii) $\sigma_{E}^{1}:\mathcal{V}^{p,q}(E)\rightarrow\mathcal{V}^{p+1,q}(E)\oplus\mathcal{V}^{p+2,q-1}(E)$, by $\sigma_{E}^{1}(S)=-\left[S,\pi^{2,0}_{E}\right]_{E}$, for $S\in\mathcal{V}^{p,q}(E)$; (iii) $\sigma_{E}^{2}:\mathcal{V}^{p,q}(E)\rightarrow\mathcal{V}^{p-1,q+2}(E)\oplus\mathcal{V}^{p,q+1}(E)$, by $\sigma_{E}^{2}(S)=-\left[S,\overline{\pi^{2,0}_{E}}\right]_{E}$, for $S\in\mathcal{V}^{p,q}(E)$. Clearly, $\overline{\sigma_{E}^{1}(S)}=\sigma_{E}^{2}(\overline{S})$, for $S\in\mathcal{V}^{p,q}(E)$.
Also, in the same way we can define $\sigma_{E},\sigma_{E}^{1}$ and $\sigma_{E}^{2}$ in the case when $J_{E}$ is not integrable, but then the image space has more components. If $J_{E}$ is integrable then, as usual, we have $I_{E}(d_{E}\varphi)=-\sigma_{E}(I_{E}\varphi)$ and, for $\varphi\in\Omega^{p,q}(E)$, it follows that (i) $I_{E}(d_{E}\varphi)=I_{E}(\partial_{E}\varphi)+I_{E}(\overline{\partial}_{E}\varphi)\in\mathcal{V}^{p+1,q}(E)\oplus\mathcal{V}^{p,q+1}(E)$; (ii) $\sigma_{E}(I_{E}\varphi)=\sigma_{E}^{1}(I_{E}\varphi)+\sigma_{E}^{2}(I_{E}\varphi)\in\mathcal{V}^{p+1,q}(E)\oplus\mathcal{V}^{p+2,q-1}(E)\oplus\mathcal{V}^{p,q+1}(E)\oplus\mathcal{V}^{p-1,q+2}(E)$. Corollary 2.1. Let $\pi^{2,0}_{E}$ be an almost complex Poisson structure on $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E}\right)$. Suppose that $J_{E}$ is integrable. Then $$I_{E}(\partial_{E}\varphi)=-\sigma_{E}^{1}(I_{E}\varphi)\in\mathcal{V}^{p+1,q}(E)\,,\,I_{E}(\overline{\partial}_{E}\varphi)=-\sigma_{E}^{2}(I_{E}\varphi)\in\mathcal{V}^{p,q+1}(E),$$ for every $\varphi\in\Omega^{p,q}(E)$. Proposition 2.3. A nondegenerate almost complex Poisson structure $\pi^{2,0}_{E}$ on an almost complex Lie algebroid $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E}\right)$ with $J_{E}$ integrable defines a $\partial_{E}$–symplectic structure on $E$. Proof. If $\pi_{E}^{2,0}$ is a nondegenerate almost complex Poisson structure, then $\pi_{E}=\pi_{E}^{2,0}+\overline{\pi_{E}^{2,0}}$ is also nondegenerate and the morphism $I_{E}$ is an isomorphism. Now, we consider $\omega^{2,0}=I_{E}^{-1}\left(\pi_{E}^{2,0}\right)$ and then $$I_{E}\left(\partial_{E}\omega^{2,0}\right)=-\sigma_{E}^{1}\left(I_{E}\omega^{2,0}\right)=-\sigma_{E}^{1}\left(\pi_{E}^{2,0}\right)=\left[\pi_{E}^{2,0},\pi_{E}^{2,0}\right]_{E}=0,$$ and, since $I_{E}$ is an isomorphism, it follows that $\omega^{2,0}$ defines a $\partial_{E}$–symplectic structure on $E$.
$\Box$ For a complex smooth function $f\in C^{\infty}(M)_{\mathbb{C}}$, there is an associated complex section of type $(1,0)$ of $E$ (called the Hamiltonian section associated to $f$, see also Example 2.1) defined by $$s_{f}=\imath_{\partial_{E}f}\pi_{E}^{2,0}=\pi_{E}^{2,0}(\partial_{E}f).$$ From the Jacobi identity of the bracket $\{\cdot,\cdot\}$ associated to $\pi_{E}^{2,0}$, we obtain $[s_{f},s_{g}]_{E}=s_{\{f,g\}}$ (see also Example 2.1); hence, the space $\mathrm{Ham}\,_{\pi_{E}^{2,0}}$ of Hamiltonian $(1,0)$–sections on $E$ is a subspace of $\mathcal{V}^{1,0}(E)$ with a Lie algebra structure with respect to the Lie bracket of sections of $E$, which, in the case when $J_{E}$ is integrable, is actually a Lie subalgebra of $\mathcal{V}^{1,0}(E)$. Moreover, we have Proposition 2.4. Every Hamiltonian section of $\pi^{2,0}_{E}$ is an infinitesimal automorphism of $\pi^{2,0}_{E}$, that is, $\mathcal{L}_{s_{f}}\pi^{2,0}_{E}=0$. Proposition 2.5. If the almost complex Poisson structure $\pi^{2,0}_{E}$ on an almost complex Lie algebroid $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E}\right)$ is nondegenerate, then $J_{E}$ is integrable, and hence $E$ admits a $\partial_{E}$–symplectic structure. Proof. It is sufficient to prove that the bracket of two sections of type $(1,0)$ is again of type $(1,0)$; this follows because, by Proposition 2.3, the nondegeneracy condition implies that the mapping $\left(\pi_{E}^{2,0}\right)^{\#}$ is an isomorphism, so the space $\mathcal{V}^{1,0}(E)$ is (locally) spanned by the Hamiltonian sections, for which this property holds.
$\Box$ If $\pi_{E}^{2,0}$ is an almost complex Poisson structure on an almost complex Lie algebroid $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E}\right)$ and $\pi_{E}=\pi_{E}^{2,0}+\overline{\pi_{E}^{2,0}}$ is the associated real Poisson structure on $E$, then the distribution $$\mathcal{D}=\cup_{x\in M}\mathcal{D}(x)\subset TM,\,\,\mathrm{where}\,\,\mathcal{D}(x):=\rho_{E}\left(\pi_{E}^{\#}(E_{x}^{*})\right)\subset T_{x}M,\,x\in M,$$ is a generalized foliation on $M$ (in the sense of Sussmann). As we have seen previously, for an almost complex Poisson structure $\pi^{2,0}_{E}$ on the almost complex Lie algebroid $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E}\right)$ such that $J_{E}$ is integrable, we have defined the operator $\sigma_{E}^{1}:\mathcal{V}^{p,q}(E)\rightarrow\mathcal{V}^{p+1,q}(E)\oplus\mathcal{V}^{p+2,q-1}(E)$, by $\sigma_{E}^{1}(S)=-\left[S,\pi^{2,0}_{E}\right]_{E}$, for $S\in\mathcal{V}^{p,q}(E)$. Using the properties of the Schouten-Nijenhuis bracket on $E$, it is easy to prove that: (i) $\sigma_{E}^{1}\circ\sigma_{E}^{1}=0$, (ii) $\sigma_{E}^{1}(S_{1}\wedge S_{2})=\sigma_{E}^{1}(S_{1})\wedge S_{2}+(-1)^{\mathrm{deg}(S_{1})}S_{1}\wedge\sigma_{E}^{1}(S_{2})$, (iii) $\sigma_{E}^{1}\left([S_{1},S_{2}]_{E}\right)=-[\sigma_{E}^{1}(S_{1}),S_{2}]_{E}-(-1)^{\mathrm{deg}(S_{1})}[S_{1},\sigma_{E}^{1}(S_{2})]_{E}$, for every $S_{1},S_{2}\in\mathcal{V}_{\mathbb{C}}^{\bullet}(E)$, where $\mathrm{deg}(S)$ denotes the degree of the multisection $S$. Then, if we consider the decomposition of $\sigma_{E}^{1}$ induced by the bigraduation, $$\sigma_{E}^{1}=\sigma_{E}^{11}+\sigma_{E}^{12},$$ and since $\sigma_{E}^{1}\circ\sigma_{E}^{1}=0$, it follows that $\sigma_{E}^{11}\circ\sigma_{E}^{11}=\sigma_{E}^{12}\circ\sigma_{E}^{12}=0$ and $\sigma_{E}^{11}\circ\sigma_{E}^{12}+\sigma_{E}^{12}\circ\sigma_{E}^{11}=0$.
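Property (i) is the Lie algebroid analogue of the classical fact that the Lichnerowicz coboundary squares to zero; up to the sign conventions of the graded Jacobi identity for the Schouten-Nijenhuis bracket, the argument is one line:

```latex
(\sigma_{E}^{1}\circ\sigma_{E}^{1})(S)
 =\bigl[[S,\pi_{E}^{2,0}]_{E},\pi_{E}^{2,0}\bigr]_{E}
 =\tfrac{1}{2}\bigl[S,[\pi_{E}^{2,0},\pi_{E}^{2,0}]_{E}\bigr]_{E}=0,
% the middle equality by the graded Jacobi identity
% (pi_E^{2,0} has odd degree in the shifted grading),
% and the last equality by (2.8).
```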
So, we obtain a differential bigraded complex $$\ldots\overset{\sigma_{E}^{11}}{\longrightarrow}\mathcal{V}^{p-1,q}(E)\overset{\sigma_{E}^{11}}{\longrightarrow}\mathcal{V}^{p,q}(E)\overset{\sigma_{E}^{11}}{\longrightarrow}\mathcal{V}^{p+1,q}(E)\overset{\sigma_{E}^{11}}{\longrightarrow}\ldots$$ whose cohomology groups, denoted by $H_{CLP}^{p,q}(E)$, will be called the complex Lichnerowicz-Poisson cohomology groups of the almost complex Poisson Lie algebroid $\left(E,\rho_{E},[\cdot,\cdot]_{E},J_{E},\pi_{E}^{2,0}\right)$ with $J_{E}$ integrable. 2.3 Morphisms of almost complex Poisson Lie algebroids In this subsection, we consider the morphisms between almost complex Lie algebroids preserving the almost complex Poisson structures. Definition 2.4. Let $\left(E_{1},\rho_{E_{1}},[\cdot,\cdot]_{E_{1}},J_{E_{1}},\pi^{2,0}_{E_{1}}\right)$ and $\left(E_{2},\rho_{E_{2}},[\cdot,\cdot]_{E_{2}},J_{E_{2}},\pi^{2,0}_{E_{2}}\right)$ be two almost complex Poisson Lie algebroids over the same base manifold $M$. A Lie algebroid morphism $\phi:E_{1}\rightarrow E_{2}$ is called an almost complex Poisson morphism if: (i) $\phi$ is an almost complex morphism; (ii) $\phi$ is a Poisson type morphism with respect to $\pi_{E_{1}}^{2,0}$ and $\pi_{E_{2}}^{2,0}$, that is, $\phi$ satisfies one of the following equivalent properties: 1) $\pi_{E_{1}}^{2,0}$ and $\pi_{E_{2}}^{2,0}$ are $\phi$–related, that is $$\pi_{E_{1}}^{2,0}(\phi^{*}(\varphi),\phi^{*}(\psi))=\pi_{E_{2}}^{2,0}(\varphi,\psi),\,\forall\,\varphi,\psi\in\Omega^{1,0}(E_{2});$$ 2) $\left(\pi_{E_{2}}^{2,0}\right)^{\#}=\phi\circ\left(\pi_{E_{1}}^{2,0}\right)^{\#}\circ\phi^{*}$. Remark 2.3.
If $\phi:\left(E_{1},J_{E_{1}},\pi^{2,0}_{E_{1}}\right)\rightarrow\left(E_{2},J_{E% _{2}},\pi^{2,0}_{E_{2}}\right)$ is an almost complex Poisson morphism then $\phi:\left(E_{1},\pi^{2,0}_{E_{1}}+\overline{\pi^{2,0}_{E_{1}}}\right)% \rightarrow\left(E_{2},\pi^{2,0}_{E_{2}}+\overline{\pi^{2,0}_{E_{2}}}\right)$ is an almost Poisson morphism of real Poisson Lie algebroids. Now, we can induce almost complex Poisson structures on Lie subalgebroids. Let $\left(E,\rho_{E},[\cdot,\cdot]_{E}\right)$ be a Lie algebroid over a smooth manifold $M$. According to [26], a Lie subalgebroid over $M$, is given by a vector subbundle $p^{\prime}:E^{\prime}\rightarrow M$ such that: (i) The anchor $\rho_{E}:E\rightarrow TM$ restricts to $p^{\prime}:E^{\prime}\rightarrow M$; (ii) If $s^{\prime}_{1},s^{\prime}_{2}\in\Gamma(E^{\prime})$ then $[s^{\prime}_{1},s^{\prime}_{2}]_{E}\in\Gamma(E^{\prime})$ also. Let $\phi:\left(E_{1},\rho_{E_{1}},[\cdot,\cdot]_{E_{1}}\right)\rightarrow\left(E_{% 2},\rho_{E_{2}},[\cdot,\cdot]_{E_{2}}\right)$ be a $M$–morphism of Lie algebroids over $M$. Then, according to [19], if $\phi$ is injective then $\left(E_{1},\rho_{E_{1}},[\cdot,\cdot]_{E_{1}}\right)$ is a Lie subalgebroid of $\left(E_{2},\rho_{E_{2}},[\cdot,\cdot]_{E_{2}}\right)$ over the same base $M$. Definition 2.5. A Lie subalgebroid $(\widetilde{E},\phi)$ of an almost complex Poisson Lie algebroid $\left(E,J_{E},\pi^{2,0}_{E}\right)$, is said to be an almost complex Poisson Lie subalgebroid of $E$ if the injective morphism $\phi:\widetilde{E}\rightarrow E$ is an almost complex Poisson morphism. Definition 2.6. 
A Lie subalgebroid $\widetilde{E}$ of an almost complex Poisson Lie algebroid $\left(E,J_{E},\pi^{2,0}_{E}\right)$ is said to be: (i) $J_{E}$–coisotropic if $J_{E}(\widetilde{E})\subset\widetilde{E}$ and $$\left(\pi_{E}^{2,0}\right)^{\#}\left(\mathrm{Ann}\,\widetilde{E}^{1,0}\right)% \subset\widetilde{E}^{1,0},$$ (2.10) where $\mathrm{Ann}\,\widetilde{E}^{1,0}=\{\varphi\in(\widetilde{E}^{1,0})^{*}\,|\,% \varphi(s)=0\,,\,\forall\,s\in\widetilde{E}^{1,0}\}$. (ii) $J_{E}$–Lagrangian if $J_{E}(\widetilde{E})\subset\widetilde{E}$ and $$\left(\pi_{E}^{2,0}\right)^{\#}\left(\mathrm{Ann}\,\widetilde{E}^{1,0}\right)=% \widetilde{E}^{1,0}\cap\left(\pi_{E}^{2,0}\right)^{\#}\left((\widetilde{E}^{1,% 0})^{*}\right).$$ (2.11) As usual, we have Theorem 2.2. A $M$–morphism $\phi:\left(E_{1},\rho_{E_{1}},[\cdot,\cdot]_{E_{1}},J_{E_{1}},\pi_{E_{1}}^{2,0% }\right)\rightarrow\left(E_{2},\rho_{E_{2}},[\cdot,\cdot]_{E_{2}},J_{E_{2}},-% \pi_{E_{2}}^{2,0}\right)$ between two almost complex Poisson Lie algebroids is an almost complex Poisson morphism if and only if $\mathrm{Graph}\,\phi$ is a $J_{E}$–coisotropic Lie subalgebroid of $E=E_{1}\times E_{2}$, where $J_{E}$ is the product structure given in Example 2.4. Proof. We consider the almost complex Poisson Lie algebroids $\left(E_{1},\rho_{E_{1}},[\cdot,\cdot]_{E_{1}},J_{E_{1}},\pi_{E_{1}}^{2,0}\right)$ and $\left(E_{2},\rho_{E_{2}},[\cdot,\cdot]_{E_{2}},J_{E_{2}},-\pi_{E_{2}}^{2,0}\right)$ and the product structure described in Example 2.4 $(E=E_{1}\times E_{2},$ $J_{E}=J_{E_{1}}+J_{E_{2}},\pi_{E}^{2,0}=\pi_{E_{1}}^{2,0}-\pi_{E_{2}}^{2,0})$. Since $\mathrm{Graph}\,\phi=\{(s,\phi(s))\,|\,s\in\Gamma(E_{1})\}$ is a regular subalgebroid of $E=E_{1}\times E_{2}$, it is $J_{E}$–invariant if and only if $(J_{E_{1}}(s),J_{E_{2}}(\phi(s)))\in\mathrm{Graph}\,\phi$, for any $s\in\Gamma(E_{1})$, or equivalently, $(J_{E_{2}}\circ\phi)(s)=(\phi\circ J_{E_{1}})(s)$, which means that $\phi$ is an almost complex morphism of Lie algebroids. 
Now, let us assume the previous statement, that is, that $J_{E}$ induces an almost complex structure on $\mathrm{Graph}\,\phi$. Then, $$\mathrm{Ann}\,(\mathrm{Graph}\,\phi)^{1,0}=\{(-\phi^{\ast}(\varphi),\varphi)\,|\,\varphi\in(E_{2}^{1,0})^{\ast}\},$$ and by a straightforward calculation we have that $\mathrm{Graph}\,\phi$ satisfies (2.10), and hence it is $J_{E}$–coisotropic, that is, $$\left((\pi_{E_{1}}^{2,0})^{\#}-(\pi_{E_{2}}^{2,0})^{\#}\right)\left(\mathrm{Ann}\,(\mathrm{Graph}\,\phi)^{1,0}\right)\subset(\mathrm{Graph}\,\phi)^{1,0},$$ if and only if $$\left(\pi_{E_{2}}^{2,0}\right)^{\#}(\varphi)=\left(\phi\circ\left(\pi_{E_{1}}^{2,0}\right)^{\#}\circ\phi^{\ast}\right)(\varphi)\,,\,\forall\,\varphi\in\Omega^{1,0}(E_{2}),$$ that is, if and only if $\phi$ is a Poisson type morphism. $\Box$ 3 Some other examples A general bisection on $E$ is a $\mathcal{C}^{\infty}(M)$-valued $2$-cochain $F$ of $E^{\ast}$. Equivalently, it can be regarded as: – a vector bundle map $F^{\#}:E^{\ast}\rightarrow E$, or – a section $F$ of the vector bundle $E\otimes E$. The formula (1.10) can be used to define a not necessarily skew-symmetric map $[\cdot,\cdot]_{F}:\Gamma\left(E^{\ast}\right)\times\Gamma\left(E^{\ast}\right)\rightarrow\Gamma\left(E^{\ast}\right)$, $$[\omega,\theta]_{F}=\mathcal{L}_{F^{\#}\omega}\theta-\mathcal{L}_{F^{\#}\theta}\omega-d_{E}(F(\omega,\theta)).$$ (3.12) Let $G$ be an endomorphism of $E$, $G^{\ast}:E^{\ast}\rightarrow E^{\ast}$ be the dual endomorphism of $G$ and $F$ be a general bisection. Then consider the map $[\cdot,\cdot]_{F}^{G}:\Gamma\left(E^{\ast}\right)\times\Gamma\left(E^{\ast}\right)\rightarrow\Gamma\left(E^{\ast}\right)$, $$[\omega,\theta]_{F}^{G}=\left[G^{\ast}\omega,\theta\right]_{F}+\left[\omega,G^{\ast}\theta\right]_{F}-G^{\ast}\left[\omega,\theta\right]_{F},$$ (3.13) for every $\omega,\theta\in\Gamma(E^{\ast})$.
If $F,G,\omega,\theta$ are as above, denote $C(F,G)(\omega,\theta)=[\omega,\theta]_{FG}-[\omega,\theta]_{F}^{G^{\ast}}$, where $FG$ is the general bisection that corresponds to $\left(FG\right)^{\#}=F\circ G^{\#}$. One says that $F$ and $G$ are compatible if $G\circ F^{\#}=F^{\#}\circ G^{\ast}$ and $C(F,G)=0$. If $G$ has a null Nijenhuis tensor $N_{G}=0$ and $F$ is a Poisson tensor, then, according to [12, 20, 21], $(E,F,G)$ is a Poisson-Nijenhuis structure if $F$ and $G$ are compatible. If we suppose now that $J_{E}$ is integrable and $\pi_{E}$ is an almost complex Poisson structure, then $J_{E}$ and $\pi_{E}$ are compatible precisely when $(E,\pi_{E},J_{E})$ is a Poisson-Nijenhuis structure. The compatibility conditions read: 1) $J_{E}\circ\pi_{E}^{\#}=\pi_{E}^{\#}\circ J_{E}^{\ast}$; 2) $[\omega,\theta]_{\pi_{E}J}=[\omega,\theta]_{\pi_{E}}^{J^{\ast}}$; for every $\omega,\theta\in\Gamma\left(E^{\ast}\right)$. We give below examples of integrable almost complex structures $J_{E}$ and almost complex Poisson structures $\pi_{E}$ that are not compatible and that arise naturally on each sphere $S^{2n-1}$, $n\geq 1$. These examples motivate the study of structures $J_{E}$ and $\pi_{E}$ that are not necessarily compatible. Notice that the compatibility of $J_{E}$ and $\pi_{E}$ is not assumed in the results proved in our paper. Let $n\geq 1$, $\bar{x}=\left(x^{\alpha}\right)_{\alpha=\overline{1,2n}}\in S^{2n-1}\subset I\!\!R^{2n}$, $\sum_{\alpha=1}^{2n}\left(x^{\alpha}\right)^{2}=1$, and $\bar{X}=\left(X^{\alpha}\right)_{\alpha=\overline{1,2n}}\in T_{\bar{x}}S^{2n-1}$, thus $\bar{X}\cdot\bar{x}=0$, i.e. $\sum_{\alpha=1}^{2n}x^{\alpha}X^{\alpha}=0$. We define the anchor $\rho:S^{2n-1}\times I\!\!R^{2n}=TI\!\!R_{|S^{2n-1}}^{2n}\rightarrow TS^{2n-1}$ as the orthogonal projection $\rho_{\bar{x}}:T_{\bar{x}}I\!\!R^{2n}\rightarrow T_{\bar{x}}S^{2n-1}$, $(\forall)\bar{x}\in S^{2n-1}$, $\rho_{\bar{x}}(\bar{X})=\bar{X}-\left(\bar{X}\cdot\bar{x}\right)\bar{x}$.
Consider, for example, local coordinates for $\bar{x}\in S^{2n-1}$ as $$\left(x^{\alpha}\right)_{\alpha=\overline{1,2n}}\rightarrow\left(x^{1},\ldots,x^{2n-1},t=\sqrt{1-\left(x^{1}\right)^{2}-\cdots-\left(x^{2n-1}\right)^{2}}\right).$$ Using these local coordinates, $\dfrac{\partial}{\partial x^{\alpha}}\rightarrow\bar{e}_{\alpha}-\dfrac{x^{\alpha}}{x^{2n}}\bar{e}_{2n}$. For $1\leq\alpha\leq 2n-1$ we have $\rho_{\bar{x}}(\bar{e}_{\alpha})=\dfrac{\partial}{\partial x^{\alpha}}-x^{\alpha}\bar{r}$ and $\rho_{\bar{x}}(\bar{e}_{2n})=-x^{2n}{\bar{r}}$, where $\bar{r}=\sum_{\alpha=1}^{2n-1}x^{\alpha}\dfrac{\partial}{\partial x^{\alpha}}$. For $1\leq\alpha<\beta\leq 2n-1$ we have $[\rho_{\bar{x}}(\bar{e}_{\alpha}),\rho_{\bar{x}}(\bar{e}_{\beta})]=-x^{\beta}\dfrac{\partial}{\partial x^{\alpha}}+x^{\alpha}\dfrac{\partial}{\partial x^{\beta}}$, and $[\rho_{\bar{x}}(\bar{e}_{\alpha}),\rho_{\bar{x}}(\bar{e}_{2n})]=-t\dfrac{\partial}{\partial x^{\alpha}}$ for $\alpha=\overline{1,2n-1}$. It follows that the brackets $\left(\mathcal{L}_{\bar{e}_{\alpha}}\bar{e}_{\beta}\right)_{\bar{x}}=\left[\bar{e}_{\alpha},\bar{e}_{\beta}\right]_{\bar{x}}=-x^{\beta}\bar{e}_{\alpha}+x^{\alpha}\bar{e}_{\beta}$, for $1\leq\alpha<\beta\leq 2n$, extend to a bracket for sections of the vector bundle $E=S^{2n-1}\times I\!\!R^{2n}\rightarrow S^{2n-1}$, according to the anchor $\rho$. It can be easily proved that $\left(E,\rho_{E},[\cdot,\cdot]_{E}\right)$ is a Lie algebroid. Consider $n\geq 1$. The skew-symmetric matrix $J_{0}=\left(\begin{array}[]{cc}0&-I_{n}\\ I_{n}&0\end{array}\right)$ is the same for three structures related to $E$: an integrable complex structure $J$ on $E$, a Poisson bisection ${\widetilde{J}}:E^{\ast}\rightarrow E$ and the adjoint $J^{\ast}:E^{\ast}\rightarrow E^{\ast}$ of $J$. The image of $\rho\circ{\widetilde{J}}$ gives a regular foliation of $S^{2n-1}$ of codimension $1$. All these can be proved by straightforward verifications.
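These straightforward verifications can also be carried out numerically. The following sketch (our own, with NumPy; it is not part of the original text) checks that $\rho_{\bar{x}}(\bar{X})=\bar{X}-(\bar{X}\cdot\bar{x})\bar{x}$ is the tangent projection, and that for $n=2$, in the coordinates $(x,y,z,t)$ on $S^{3}$ used in the next paragraph, the product $A\cdot J_{0}\cdot A^{t}$ has the entries stated there:

```python
import numpy as np

def rho(xbar, Xbar):
    """Anchor of the example: orthogonal projection onto T_x S^{2n-1},
    rho_x(X) = X - (X . x) x."""
    return Xbar - np.dot(Xbar, xbar) * xbar

rng = np.random.default_rng(0)

# rho_x is a projection that kills the radial direction
x = rng.normal(size=6); x /= np.linalg.norm(x)    # a point on S^5 (n = 3)
X = rng.normal(size=6)
assert np.isclose(np.dot(rho(x, X), x), 0.0)      # rho(X) is tangent to the sphere
assert np.allclose(rho(x, rho(x, X)), rho(x, X))  # rho o rho = rho
assert np.allclose(rho(x, x), np.zeros(6))        # the radial direction is killed

# n = 2: the Poisson bivector A . J0 . A^t on S^3, coordinates (x, y, z, t)
p = rng.normal(size=4); p /= np.linalg.norm(p)    # random point of S^3
x1, y1, z1, t1 = p
A = (np.eye(4) - np.outer(p, p))[:3]              # first three rows of I - p p^t
J0 = np.block([[np.zeros((2, 2)), -np.eye(2)],
               [np.eye(2), np.zeros((2, 2))]])
M = A @ J0 @ A.T
expected = np.array([[0.0,                y1*z1 - t1*x1,  x1**2 + z1**2 - 1],
                     [t1*x1 - y1*z1,      0.0,            t1*z1 + x1*y1],
                     [-x1**2 - z1**2 + 1, -t1*z1 - x1*y1, 0.0]])
assert np.allclose(M, expected)                   # matches the stated matrix
```

The check runs at randomly chosen points, so it is a sanity test of the formulas rather than a proof.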
For $n=1$, the foliation of the Lie algebroid is trivial (by points) on $S^{1}$, since $\rho\circ{\widetilde{J}}$ is null in this case. For $n=2$, the Poisson structure induced by the Poisson bisection $\tilde{J}$ has the matrix $$A\cdot J_{0}\cdot A^{t}=\left(\begin{array}[]{ccc}0&yz-tx&x^{2}+z^{2}-1\\ tx-yz&0&tz+xy\\ -x^{2}-z^{2}+1&-tz-xy&0\end{array}\right),$$ where $$A=\left(\begin{array}[]{cccc}1-x^{2}&-xy&-xz&-xt\\ -xy&1-y^{2}&-yz&-yt\\ -xz&-yz&1-z^{2}&-zt\end{array}\right)$$ The image of $\rho\circ{\widetilde{J}}$ gives a regular foliation of $S^{3}$ of codimension $1$. This remains true for any $n\geq 1$: $\rho\circ{\widetilde{J}}$ gives a regular foliation of $S^{2n-1}$ of codimension $1$. The fact that $J$ is integrable follows by a straightforward computation of the Nijenhuis tensor $N_{J}$ of $J$, that shows that $N_{J}=0$. We study now the compatibility condition between $J$ and ${\widetilde{J}}$, for $n\geq 1$. Consider $1\leq\alpha,\beta,\gamma\leq n$. Then: $$\left(\mathcal{L}_{\bar{e}_{\alpha}}\bar{e}^{\beta}\right)\left(\bar{e}_{% \gamma}\right)=-\bar{e}^{\beta}\left(\left[\bar{e}_{\alpha},\bar{e}_{\gamma}% \right]\right)=\delta_{\alpha}^{\beta}x^{\gamma}-\delta_{\gamma}^{\beta}x^{% \alpha}\,,\,\left(\mathcal{L}_{\bar{e}_{\alpha}}\bar{e}^{\beta}\right)\left(% \bar{e}_{\gamma+n}\right)=-\bar{e}^{\beta}\left(\left[\bar{e}_{\alpha},\bar{e}% _{\gamma+n}\right]\right)=\delta_{\alpha}^{\beta}x^{\gamma+n},$$ which implies $$\mathcal{L}_{\bar{e}_{\alpha}}\bar{e}^{\beta}=\delta_{\alpha}^{\beta}\sum_{% \gamma=1}^{n}\left(x^{\gamma}\bar{e}^{\gamma}+x^{\gamma+n}\bar{e}^{\gamma+n}% \right)-x^{\alpha}\bar{e}^{\beta}.$$ Analogously, $$\mathcal{L}_{\bar{e}_{\alpha}}\bar{e}^{\beta+n}=-x^{\alpha}\bar{e}^{\beta+n}\,% ,\,\mathcal{L}_{\bar{e}_{\alpha+n}}\bar{e}^{\beta}=-x^{\alpha+n}\bar{e}^{\beta% }\,,\,\mathcal{L}_{\bar{e}_{\alpha+n}}\bar{e}^{\beta+n}=\delta_{\alpha}^{\beta% }\sum_{\gamma=1}^{n}\left(x^{\gamma}\bar{e}^{\gamma}+x^{\gamma+n}\bar{e}^{% 
\gamma+n}\right)-x^{\alpha+n}\bar{e}^{\beta+n}.$$ We also have $$\left[\bar{e}^{\alpha},\bar{e}^{\beta}\right]_{{\widetilde{J}}}=\mathcal{L}_{{% \widetilde{J}}^{\#}\bar{e}^{\alpha}}\bar{e}^{\beta}-\mathcal{L}_{{\widetilde{J% }}^{\#}\bar{e}^{\beta}}\bar{e}^{\alpha}-d_{E}\left({\widetilde{J}}\left(\bar{e% }^{\alpha},\bar{e}^{\beta}\right)\right)=\mathcal{L}_{\bar{e}_{\alpha+n}}\bar{% e}^{\beta}-\mathcal{L}_{\bar{e}_{n+\beta}}\bar{e}^{\alpha}=-x^{\alpha+n}\bar{e% }^{\beta}+x^{\beta+n}\bar{e}^{\alpha}.$$ Analogously, $$\displaystyle\left[\bar{e}^{\alpha},\bar{e}^{\beta+n}\right]_{{\widetilde{J}}}% =2\delta_{\alpha}^{\beta}\sum_{\gamma=1}^{n}\left(x^{\gamma}\bar{e}^{\gamma}+x% ^{\gamma+n}\bar{e}^{\gamma+n}\right)-x^{\alpha+n}\bar{e}^{\beta+n}-x^{\beta}% \bar{e}^{\alpha},$$ $$\displaystyle\left[\bar{e}^{\alpha+n},\bar{e}^{\beta}\right]_{{\widetilde{J}}}% =-2\delta_{\alpha}^{\beta}\sum_{\gamma=1}^{n}\left(x^{\gamma}\bar{e}^{\gamma}+% x^{\gamma+n}\bar{e}^{\gamma+n}\right)+x^{\alpha}\bar{e}^{\beta}+x^{\beta+n}% \bar{e}^{\alpha+n},$$ $$\displaystyle\left[\bar{e}^{\alpha+n},\bar{e}^{\beta+n}\right]_{{\widetilde{J}% }}=x^{\alpha}\bar{e}^{\beta+n}-x^{\beta}\bar{e}^{\alpha+n}.$$ Using the formula $$\left[\omega,\theta\right]_{{\widetilde{J}}}^{J^{\ast}}=\left[J^{\ast}\omega,% \theta\right]_{{\widetilde{J}}}+\left[\omega,J^{\ast}\theta\right]_{{% \widetilde{J}}}-J^{\ast}\left[\omega,\theta\right]_{{\widetilde{J}}},$$ we have: $$\displaystyle\left[\bar{e}^{\alpha},\bar{e}^{\beta+n}\right]_{{\widetilde{J}}}% ^{J^{\ast}}$$ $$\displaystyle=$$ $$\displaystyle\left[J^{\ast}\bar{e}^{\alpha},\bar{e}^{\beta+n}\right]_{{% \widetilde{J}}}+\left[\bar{e}^{\alpha},J^{\ast}\bar{e}^{\beta+n}\right]_{{% \widetilde{J}}}-J^{\ast}\left[\bar{e}^{\alpha},\bar{e}^{\beta+n}\right]_{{% \widetilde{J}}}$$ $$\displaystyle=$$ $$\displaystyle\left[\bar{e}^{\alpha+n},\bar{e}^{\beta+n}\right]_{{\widetilde{J}% }}-\left[\bar{e}^{\alpha},\bar{e}^{\beta}\right]_{{\widetilde{J}}}$$ 
$$\displaystyle-J^{\ast}\left(2\delta_{\alpha}^{\beta}\sum_{\gamma=1}^{n}\left(x% ^{\gamma}\bar{e}^{\gamma}+x^{\gamma+n}\bar{e}^{\gamma+n}\right)-x^{\alpha+n}% \bar{e}^{\beta+n}-x^{\beta}\bar{e}^{\alpha}\right)$$ $$\displaystyle=$$ $$\displaystyle x^{\alpha}\bar{e}^{\beta+n}-x^{\beta}\bar{e}^{\alpha+n}-\left(-x% ^{\alpha+n}\bar{e}^{\beta}+x^{\beta+n}\bar{e}^{\alpha}\right)$$ $$\displaystyle-x^{\alpha+n}\bar{e}^{\beta}+x^{\beta}\bar{e}^{\alpha+n}-2\delta_% {\alpha}^{\beta}\sum_{\gamma=1}^{n}\left(x^{\gamma}\bar{e}^{\gamma+n}-x^{% \gamma+n}\bar{e}^{\gamma}\right)$$ $$\displaystyle=$$ $$\displaystyle x^{\alpha}\bar{e}^{\beta+n}-x^{\beta+n}\bar{e}^{\alpha}-2\delta_% {\alpha}^{\beta}\sum_{\gamma=1}^{n}\left(x^{\gamma}\bar{e}^{\gamma+n}-x^{% \gamma+n}\bar{e}^{\gamma}\right).$$ We have that $J{\widetilde{J}}:E^{\ast}\rightarrow E$ corresponds to the matrix $J_{0}^{2}=-I_{2n}$, thus $$\left(J{\widetilde{J}}\right)^{\#}\left(\bar{e}^{\alpha}\right)=-\bar{e}_{% \alpha}\,\,\mathrm{and}\,\,\left(J{\widetilde{J}}\right)^{\#}\left(\bar{e}^{% \alpha+n}\right)=-\bar{e}_{\alpha+n}.$$ It follows: $$\left[\bar{e}^{\alpha},\bar{e}^{\beta+n}\right]_{J{\widetilde{J}}}=\mathcal{L}% _{-\bar{e}_{\alpha}}\bar{e}^{\beta+n}-\mathcal{L}_{-\bar{e}_{\beta+n}}\bar{e}^% {\alpha}=-\mathcal{L}_{\bar{e}_{\alpha}}\bar{e}^{\beta+n}+\mathcal{L}_{\bar{e}% _{\beta+n}}\bar{e}^{\alpha}=x^{\alpha}\bar{e}^{\beta+n}-x^{\beta+n}\bar{e}^{% \alpha}.$$ For $n=1$, we have $\left[\bar{e}^{1},\bar{e}^{2}\right]_{{\widetilde{J}}}^{J^{\ast}}=-x^{1}\bar{e% }^{2}+x^{2}\bar{e}^{1}$ and $\left[\bar{e}^{1},\bar{e}^{2}\right]_{J{\widetilde{J}}}=x^{1}\bar{e}^{2}-x^{2}% \bar{e}^{1}$; thus replacing ${\widetilde{J}}$ by $-{\widetilde{J}}$, the compatibility condition can be fulfilled if $n=1$, but it is not the case if $n>1$. References [1] D. Alekseevsky, J. Grabowski, G. Marmo, P.W Michor, Poisson structures on the cotangent bundle of a Lie group or a principal bundle and their reductions, J. Math. Phys. 35 (1994) 4909-4927. 
[2]  C. M. Arcuş, Mechanical systems in the generalized Lie algebroids framework, Int. J. Geom. Methods Mod. Phys. (2014), DOI: 10.1142/S0219887814500236. [3] A. Baider, R. C. Churchill, D. L. Rod, Monodromy and non integrability in complex Hamiltonian dynamics, J. Dyn. Differ. Equ. 2 (1990) 451-481. [4] A. Blaga, M. Crasmareanu, C. Ida, Poisson and Hamiltonian structures on complex analytic foliated manifolds, J. Geom. Phys. 78 (2014) 19–28. [5] J. L. Brylinski, G. Zuckerman, The outer derivation of complex Poisson manifolds, J. Reine Angew. Math. 506 (1999) 181-189. [6] U. Bruzzo, V. N. Rubtsov, Cohomology of skew-holomorphic Lie algebroids, Theor. and Math. Phys. 165, 3 (2010) 1598–1609. [7] P. Cabau, F. Pelletier, Almost Lie structures on an anchored Banach bundle, J. Geom. and Phys. 62, 11 (2012) 2147–2169. [8] C. Chevalley, S. Eilenberg, Cohomology theory of Lie groups and Lie algebras, T. Am. Math. Soc. 63 (1948) 85-124. [9] L. A. Cordero, M. Fernández, R. Ibánez, L. Ugarte, Almost complex Poisson manifolds, Ann. Glob. Anal. Geom. 18, 3-4 (2000) 265-290. [10] J. Cortès, E. Martinez, Mechanical control systems on Lie algebroids, IMA J.Math. Control I. 21, 4 (2004) 457–492. [11] R. L. Fernandes, Lie Algebroids, Holonomy and Characteristic Classes, Adv. Math., 170, 1 (2002) 119–179. [12] J. Grabowski, P. Urbański, Lie algebroids and Poisson-Nijenhuis structures, Rep. Math. Phys. 40, 2 (1997) 195–208. [13] G. Gheorghiev, V. Oproiu, Finite and infinite dimensional smooth manifolds (in Romanian, Ed. Acad. Române, 1979). [14] P. J. Higgins, K. Mackenzie, Algebraic constructions in the category of Lie algebroids, J. Algebra 129 (1990) 194–230. [15] J. Ibrahim, A. Lichnerowicz, Fibrés vectoriels holomorphe, structures symplectiques ou modulaires complexes exactes, CR Math. 283 (1976) 713–717. [16] C. Ida, Lichnerowicz-Poisson cohomology and Banach Lie algebroids, Ann. Funct. Anal. 2, 2 (2011) 130–137. [17] C. 
Ida, Holomorphic symplectic and Poisson structures on the holomorphic cotangent bundle of a complex Lie group and of a holomorphic principal bundle, Asian-Eur. J. Math. 6, 3 (2013) 18 pp. [18] C. Ida, P. Popescu, On almost complex Lie algebroids, (submitted), math-dg/1311.2475v3. [19] D. Iglesias, J. C. Marrero, D. Martín de Diego, E. Martinez, E. Padrón, Reduction of Symplectic Lie Algebroids by a Lie Subalgebroid and a Symmetry Lie Group, SIGMA 3 (2007) 049. [20] Y. Kosmann-Schwarzbach, Poisson Manifolds, Lie Algebroids, Modular Classes: a Survey, SIGMA 4 (2008) 005. [21] C. Laurent-Gengoux, M. Stiénon, P. Xu, Holomorphic Poisson manifolds and holomorphic Lie algebroids, Int. Math. Res. Not. (2008) 088. [22] C. Laurent-Gengoux, M. Stiénon, P. Xu, Integration of holomorphic Lie algebroids. Math. Ann. 345 (2009) 895–923. [23] A. Lichnerowicz, Les variéetés de Poisson et leurs algébres de Lie associées, J. Diff. Geom. 12 (1977) 253–300. [24] A. Lichnerowicz, Variétés de Jacobi et espaces homogènes de contact complexes, J. Math. Pures Appl. 67, 9 (1988) 131–173. [25] K. Mackenzie, Lie groupoids and Lie algebroids in differential geometry (Cambridge Univ.Press., London, 1987). [26] K. Mackenzie, General theory of Lie groupoids and Lie algebroids (Cambridge Univ.Press., London, 2005). [27] C.-M. Marle, Calculus on Lie algebroids, Lie groupoids and Poisson manifolds, Diss. Math., 457 (2008) 57 pp. [28] E. Martínez, Geometric formulation of mechanics on Lie algebroids, in Proc. VIII Fall Workshop on Geom. and Phys. (1999, Medina del Campo), Publicaciones de la RSME 2 (2001) 209–222. [29] E. Martínez, Lagrangian mechanics on Lie algebroids, Acta Appl. Math. 67, 3 (2001) 295–320. [30] E. Martínez, Classical field theory on Lie algebroids: variational aspects, J. Phys. A 38, 32 (2005) 7145–7160. [31] A. Odzijewicz, Hamiltonian and quantum mechanics, in Lectures on Poisson Geometry, eds. 
Tudor Raţiu, Alan Weinstein and Nguyen Tien Zung (Geometry and Topology Monographs 17, 2011) pp. 385–472. [32] A. Odzijewicz, T. Raţiu, Banach Lie-Poisson spaces and reduction, Commun. Math. Phys. 243 (2003) 1–54. [33] A. Panasyuk, Isomorphisms of some complex Poisson brackets, Ann. Global Anal. Geom. 15 (1997) 313-324. [34] A. Panasyuk, The Darboux-type theorems for $\partial$–symplectic and $\partial$–contact structures, Ann. Glob. Anal. Geom. 23 (2003) 265–281. [35] L. Popescu, A note on Poisson Lie algebroids (I), Balkan J. Geom. Appl. 14, 2 (2009) 79–89. [36] P. Popescu, M. Popescu, Anchored vector bundles and Lie algebroids, Banach Cent. Publ. 54 (2001) 51 – 69. [37] B. Przybylski, Complex Poisson manifolds, in Proc. Conf. Differ. Geom. Appl., Opava, Czechoslovakia, 1992, , eds. O. Kowalski and D. Krupka, (Silesian University, Opava, 1993), pp. 227-241. [38] M. Stiénon, Holomorphic Koszul-Brylinski Homology, Int. Math. Res. Not. 2011.3 (2011) 553-571. [39] I. Vaisman, Lectures on the Geometry of Poisson Manifolds (Progress in Mathematics, vol. 118, Birkhäuser Verlag, Basel, 1994). [40] I. Vaisman, Hamiltonian structures on foliations, J. Math. Phys. 43, 10 (2002) 4966-4977. [41] I. Vaisman, Coupling Poisson and Jacobi structures on foliated manifolds, Int. J. Geom. Methods Mod. Phys. 1, 5 (2004) 607-637. [42] I. Vaisman, Poisson structures on foliated manifolds, Trav. math. XVI (2005) 139-161. [43] I. Vaisman, Foliated Lie and Courant Algebroids, Mediterr. J. Math. 7 (2010) 415–444. [44] I. Vaisman, Lie and Courant algebroids on foliated manifolds, Bull. Braz. Math. Soc. New Series 42, 4 (2011) 805–830. [45] A. Weinstein, The local structure of Poisson manifolds, J. Diff. Geom. 18, 3 (1983) 523–557. [46] A. Weinstein, The integration problem for complex Lie algebroids, in From Geometry to Quantum Mechanics (Birkhäuser Boston, 2007), pp. 93–109. [47] K. Yano, Differential Geometry on Complex and Almost Complex Spaces (Int. ser. monogr. math., vol. 
10, Pergamon Press, Oxford, 1965). Paul Popescu, Department of Applied Mathematics, University of Craiova. Address: Craiova, 200585, Str. Al. Cuza, No. 13, România. E-mail address: paul_p_[email protected]
Jarlskog’s Parametrization of Unitary Matrices and Qudit Theory Kazuyuki FUJII, Kunio FUNAHASHI and Takayuki KOBAYASHI ${}^{*,\ \ddagger}$ Department of Mathematical Sciences, Yokohama City University, Yokohama, 236–0027 Japan ${}^{\dagger}$ Division of Natural Science, Izumi Campus, Meiji University, Tokyo, 168–8555 Japan E-mail address : [email protected] E-mail address : [email protected] E-mail address : [email protected] () Abstract In the paper (math–ph/0504049) Jarlskog gave an interesting simple parametrization of unitary matrices, which is essentially the canonical coordinate of the second kind in Lie group theory (math–ph/0505047). In this paper we apply the method to quantum computation based on a multi–level system (qudit theory). Namely, by considering that the parametrization gives a complete set of modules in qudit theory, we construct the generalized Pauli matrices, which play a central role in the theory, and also make a comment on the exchange gate of two–qudit systems. Moreover we give an explicit construction of the generalized Walsh–Hadamard matrix in the cases $n=3$, $4$ and $5$. For the case $n=5$ the calculation is relatively complicated. In general, the calculation needed to construct it tends to become more and more complicated as $n$ becomes large. To perform a quantum computation the generalized Walsh–Hadamard matrix must be constructed in a quick and clean manner. From our construction it may be possible to say that a qudit theory with $n\geq 5$ is not realistic. This paper is an introduction towards Quantum Engineering. 1 Introduction In the paper [1] Jarlskog gave a recursive parametrization of unitary matrices. See also [2] for a similar parametrization. One of the authors showed that the recursive method is essentially the so–called canonical coordinate of the second kind in Lie group theory, [3]. We are working in Quantum Computation; therefore, we are interested in applications of the method to quantum computation.
One of the key points of quantum computation is to construct certain unitary matrices (quantum logic gates) in an efficient manner, like the Discrete Fourier Transform (DFT), when $n$ is large enough, [4]. However, such a quick construction is in general not easy, see [5] or [6], [7]. The parametrization of unitary matrices given by Jarlskog may be convenient for our real purpose. We want to apply the method to quantum computation based on a multi–level system (qudit theory). One of the reasons to study qudit theory comes from a deep problem on decoherence (we don’t repeat the reason here). In the following let us consider an $n$ level system (for example, an atom with $n$ energy levels). Concerning an explicit construction of quantum logic gates in qudit theory, see for example [8], [9] and [10]. By use of the new parametrization of unitary matrices we want to construct important logic gates in an explicit manner (quantum computation is not pure mathematics, so we need an explicit construction), especially the generalized Pauli matrices and the Walsh–Hadamard matrix, which play a central role in qudit theory. In this paper we construct the generalized Pauli matrices in a complete manner, while the Walsh–Hadamard matrix is constructed only for the cases of $3$, $4$ and $5$ level systems. The calculation needed to construct it for the $5$ level system is relatively complicated compared to the $3$ and $4$ level systems. In general, the calculation tends to become more and more complicated as $n$ becomes large. The generalized Walsh–Hadamard matrix gives a superposition of states in qudit theory, which is the heart of quantum computation. It is natural for us to request a quick and clean construction of it. Therefore our calculation (or construction) may imply that a qudit theory with $n\geq 5$ is not realistic. Further study will be required. 2 Jarlskog’s Parametrization Let us make a brief introduction to the parametrization of unitary matrices by Jarlskog, with the method developed in [3].
The unitary group is defined as $$U(n)=\left\{\,U\in M(n,{\mathbf{C}})\ |\ U^{\dagger}U=UU^{\dagger}={\bf 1}_{n}\,\right\}$$ (1) and its (unitary) algebra is given by $$u(n)=\left\{\,X\in M(n,{\mathbf{C}})\ |\ X^{\dagger}=-X\,\right\}.$$ (2) Then the exponential map is $$\mathrm{exp}:u(n)\ \longrightarrow\ U(n)\ ;\ X\ \mapsto\ U\equiv\mathrm{e}^{X}.$$ (3) This map is canonical but not easy to calculate. We write down the element $X\in u(n)$ explicitly : $$X=\left(\begin{array}[]{cccccc}i\theta_{1}&z_{12}&z_{13}&\cdots&z_{1,n-1}&z_{1n}\\ -\bar{z}_{12}&i\theta_{2}&z_{23}&\cdots&z_{2,n-1}&z_{2n}\\ -\bar{z}_{13}&-\bar{z}_{23}&i\theta_{3}&\cdots&z_{3,n-1}&z_{3n}\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ -\bar{z}_{1,n-1}&-\bar{z}_{2,n-1}&-\bar{z}_{3,n-1}&\cdots&i\theta_{n-1}&z_{n-1,n}\\ -\bar{z}_{1,n}&-\bar{z}_{2,n}&-\bar{z}_{3,n}&\cdots&-\bar{z}_{n-1,n}&i\theta_{n}\end{array}\right).$$ (4) This $X$ is decomposed into $$X=X_{0}+X_{2}+\cdots+X_{j}+\cdots+X_{n}$$ where $$X_{0}=\left(\begin{array}[]{cccccc}i\theta_{1}&&&&&\\ &i\theta_{2}&&&&\\ &&i\theta_{3}&&&\\ &&&\ddots&&\\ &&&&i\theta_{n-1}&\\ &&&&&i\theta_{n}\end{array}\right)$$ (5) and for $2\leq j\leq n$ $$X_{j}=\left(\begin{array}[]{cccccc}0&&&&&\\ &\ddots&&{|{z_{j}}\rangle}&&\\ &&\ddots&&&\\ &-{\langle{z_{j}}|}&&0&&\\ &&&&\ddots&\\ &&&&&0\end{array}\right),\quad{|{z_{j}}\rangle}=\left(\begin{array}[]{c}z_{1j}\\ z_{2j}\\ \vdots\\ z_{j-1,j}\end{array}\right).$$ (6) Then the canonical coordinate of the second kind in the unitary group (Lie group) is well–known and given by $$u(n)\ni X=X_{0}+X_{2}+\cdots+X_{j}+\cdots+X_{n}\ \longrightarrow\ \mathrm{e}^{X_{0}}\mathrm{e}^{X_{2}}\cdots\mathrm{e}^{X_{j}}\cdots\mathrm{e}^{X_{n}}\in U(n)$$ (7) in this case (there are, of course, some variations). Therefore we have only to calculate $\mathrm{e}^{X_{j}}$ for $j\geq 2$ ($j=0$ is trivial), which is easy (see Appendix).
The result is $$\mathrm{e}^{X_{j}}=\left(\begin{array}[]{ccc}{\bf 1}_{j-1}-\left(1-\cos(\sqrt{% {\langle{z_{j}}|{z_{j}}\rangle}})\right){|{\tilde{z}_{j}}\rangle}{\langle{% \tilde{z}_{j}}|}&\sin(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}){|{\tilde{z}_{j}}% \rangle}&\\ -\sin(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}){\langle{\tilde{z}_{j}}|}&\cos(% \sqrt{{\langle{z_{j}}|{z_{j}}\rangle}})&\\ &&{\bf 1}_{n-j}\end{array}\right)$$ (8) where ${|{\tilde{z}_{j}}\rangle}$ is a normalized vector defined by $${|{\tilde{z}_{j}}\rangle}\equiv\frac{1}{\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}% }{|{z_{j}}\rangle}\ \Longrightarrow\ {\langle{\tilde{z}_{j}}|{\tilde{z}_{j}}% \rangle}=1.$$ (9) We make a comment on the case of $n=2$. Since $${|{\tilde{z}}\rangle}=\frac{z}{|z|}\equiv\mathrm{e}^{i\alpha},\quad{\langle{% \tilde{z}}|}=\frac{\bar{z}}{|z|}=\mathrm{e}^{-i\alpha}\ \Longrightarrow\ {|{% \tilde{z}}\rangle}{\langle{\tilde{z}}|}={\langle{\tilde{z}}|{\tilde{z}}\rangle% }=1,$$ we have $$\displaystyle\mathrm{e}^{X_{0}}\mathrm{e}^{X_{2}}$$ $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{cc}\mathrm{e}^{i\theta_{1}}&\\ &\mathrm{e}^{i\theta_{2}}\end{array}\right)\left(\begin{array}[]{cc}\cos(|z|)&% \mathrm{e}^{i\alpha}\sin(|z|)\\ -\mathrm{e}^{-i\alpha}\sin(|z|)&\cos(|z|)\end{array}\right)$$ (10) $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{cc}\mathrm{e}^{i\theta_{1}}&\\ &\mathrm{e}^{i\theta_{2}}\end{array}\right)\left(\begin{array}[]{cc}\mathrm{e}% ^{i\alpha/2}&\\ &\mathrm{e}^{-i\alpha/2}\end{array}\right)\left(\begin{array}[]{cc}\cos(|z|)&% \sin(|z|)\\ -\sin(|z|)&\cos(|z|)\end{array}\right)\left(\begin{array}[]{cc}\mathrm{e}^{-i% \alpha/2}&\\ &\mathrm{e}^{i\alpha/2}\end{array}\right)$$ $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{cc}\mathrm{e}^{i(\theta_{1}+\alpha/2)}&\\ &\mathrm{e}^{i(\theta_{2}-\alpha/2)}\end{array}\right)\left(\begin{array}[]{cc% }\cos(|z|)&\sin(|z|)\\ -\sin(|z|)&\cos(|z|)\end{array}\right)\left(\begin{array}[]{cc}\mathrm{e}^{-i% \alpha/2}&\\ 
&\mathrm{e}^{i\alpha/2}\end{array}\right).$$ This is just the Euler angle parametrization. Therefore the parametrization (7) may be considered as a generalization of the Euler angle parametrization. In the following we set $$\displaystyle A_{0}\equiv\mathrm{e}^{X_{0}}$$ $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{ccc}\mathrm{e}^{i\theta_{1}}&&\\ &\ddots&\\ &&\mathrm{e}^{i\theta_{n}}\end{array}\right),$$ $$\displaystyle A_{j}\equiv\mathrm{e}^{X_{j}}$$ $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{ccc}{\bf 1}_{j-1}-\left(1-\cos{\beta_{j}}\right){|{\tilde{z}_{j}}\rangle}{\langle{\tilde{z}_{j}}|}&\sin{\beta_{j}}{|{\tilde{z}_{j}}\rangle}&\\ -\sin{\beta_{j}}{\langle{\tilde{z}_{j}}|}&\cos{\beta_{j}}&\\ &&{\bf 1}_{n-j}\end{array}\right)$$ (11) for $2\leq j\leq n$. More precisely, we write $$A_{0}=A_{0}(\{\theta_{1},\theta_{2},\cdots,\theta_{n}\}),\quad A_{j}=A_{j}(\{\tilde{z}_{1j},\tilde{z}_{2j},\cdots,\tilde{z}_{j-1,j}\};\beta_{j})\quad\textrm{for}\quad j=2,\cdots,n$$ (12) including parameters which we can manipulate freely. From now on we consider $A_{j}$ as a kind of module of qudit theory for $j=0,2,\cdots,n$; namely, $\{A_{j}\ |\ j=0,2,\cdots,n\}$ becomes a complete set of modules. By combining them many times (we take no account of an ordering or a uniqueness of $\{A_{j}\}$ in the expression (7)) we construct important matrices in qudit theory in an explicit manner. 3 Qudit Theory Let us make a brief introduction to qudit theory. The theory is based on an atom with $n$ energy levels $\{({|{k}\rangle},E_{k})\ |\ 0\leq k\leq n-1\}$, see Figure 1. First of all we summarize some properties of the Pauli matrices and the Walsh–Hadamard matrix, and next state the corresponding ones for the generalized Pauli matrices and the generalized Walsh–Hadamard matrix, as far as we need them.
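The closed form (8)/(11) for $\mathrm{e}^{X_{j}}$ can be checked numerically against a truncated Taylor series of the matrix exponential. The sketch below (our own, with NumPy; the helper names `expm_taylor` and `jarlskog_module` are ours) builds $X_{j}$ as in (6) for a random complex $|z_{j}\rangle$ and compares:

```python
import numpy as np

def expm_taylor(X, terms=60):
    # Truncated Taylor series of the matrix exponential; enough terms
    # for the moderate norms used in this check.
    out = np.eye(X.shape[0], dtype=complex)
    term = np.eye(X.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ X / k
        out += term
    return out

def jarlskog_module(z, n, j):
    """Closed form (8)/(11) for exp(X_j); z is the (j-1)-vector |z_j>."""
    beta = np.linalg.norm(z)                  # beta_j = sqrt(<z_j|z_j>)
    zt = z / beta                             # normalized |z~_j>
    A = np.eye(n, dtype=complex)
    A[:j-1, :j-1] -= (1 - np.cos(beta)) * np.outer(zt, zt.conj())
    A[:j-1, j-1] = np.sin(beta) * zt          # sin(beta_j) |z~_j>
    A[j-1, :j-1] = -np.sin(beta) * zt.conj()  # -sin(beta_j) <z~_j|
    A[j-1, j-1] = np.cos(beta)
    return A

n, j = 4, 3
rng = np.random.default_rng(0)
z = rng.normal(size=j - 1) + 1j * rng.normal(size=j - 1)
X = np.zeros((n, n), dtype=complex)           # X_j as in (6)
X[:j-1, j-1] = z
X[j-1, :j-1] = -z.conj()
A = jarlskog_module(z, n, j)
assert np.allclose(A, expm_taylor(X))         # (8) agrees with exp(X_j)
assert np.allclose(A.conj().T @ A, np.eye(n)) # A_j is unitary
```

This reflects the usual identity $\mathrm{e}^{X_{j}}={\bf 1}+\frac{\sin\beta_{j}}{\beta_{j}}X_{j}+\frac{1-\cos\beta_{j}}{\beta_{j}^{2}}X_{j}^{2}$, which follows from $X_{j}^{3}=-\beta_{j}^{2}X_{j}$.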
Let $\{\sigma_{1},\sigma_{2},\sigma_{3}\}$ be Pauli matrices : $$\sigma_{1}=\left(\begin{array}[]{cc}0&1\\ 1&0\end{array}\right),\quad\sigma_{2}=\left(\begin{array}[]{cc}0&-i\\ i&0\end{array}\right),\quad\sigma_{3}=\left(\begin{array}[]{cc}1&0\\ 0&-1\end{array}\right).$$ (13) By (13) $\sigma_{2}=i\sigma_{1}\sigma_{3}$, so that the essential elements of Pauli matrices are $\{\sigma_{1},\sigma_{3}\}$ and they satisfy $$\sigma_{1}^{2}=\sigma_{3}^{2}={\bf 1}_{2}\ ;\quad\sigma_{1}^{\dagger}=\sigma_{% 1},\ \sigma_{3}^{\dagger}=\sigma_{3}\ ;\quad\sigma_{3}\sigma_{1}=-\sigma_{1}% \sigma_{3}=\mathrm{e}^{i\pi}\sigma_{1}\sigma_{3}.$$ (14) A Walsh–Hadamard matrix is defined by $$W=\frac{1}{\sqrt{2}}\left(\begin{array}[]{rr}1&1\\ 1&-1\end{array}\right)\ \in\ O(2)\ \subset U(2).$$ (15) This matrix is unitary and it plays a very important role in Quantum Computation. Moreover it is easy to realize it in Quantum Optics as shown in [8]. Let us list some important properties of $W$ : $$\displaystyle W^{2}={\bf 1}_{2},\ \ W^{\dagger}=W=W^{-1},$$ (16) $$\displaystyle\sigma_{1}=W\sigma_{3}W^{-1},$$ (17) The proof is very easy. Next let us generalize the Pauli matrices to higher dimensional cases. Let $\{\Sigma_{1},\Sigma_{3}\}$ be the following matrices in $M(n,{\mathbf{C}})$ $$\Sigma_{1}=\left(\begin{array}[]{cccccc}0&&&&&1\\ 1&0&&&&\\ &1&0&&&\\ &&1&\cdot&&\\ &&&\cdot&\cdot&\\ &&&&1&0\end{array}\right),\qquad\Sigma_{3}=\left(\begin{array}[]{cccccc}1&&&&&% \\ &\sigma&&&&\\ &&{\sigma}^{2}&&&\\ &&&\cdot&&\\ &&&&\cdot&\\ &&&&&{\sigma}^{n-1}\end{array}\right)$$ (18) where $\sigma$ is a primitive root of unity ${\sigma}^{n}=1$ ($\sigma=\mathrm{e}^{\frac{2\pi i}{n}}$). We note that $$\bar{\sigma}=\sigma^{n-1},\quad 1+\sigma+\cdots+\sigma^{n-1}=0.$$ Two matrices $\{\Sigma_{1},\Sigma_{3}\}$ are generalizations of the Pauli matrices $\{\sigma_{1},\sigma_{3}\}$, but they are not hermitian. 
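Before turning to the generalized matrices, the two-level identities (14), (16) and (17) are immediate to check numerically; a minimal sketch (our own verification, not part of the original text):

```python
import numpy as np

# Pauli matrices (13) and the Walsh-Hadamard matrix (15)
s1 = np.array([[0, 1], [1, 0]])
s3 = np.array([[1, 0], [0, -1]])
W = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

assert np.allclose(s1 @ s1, np.eye(2))             # (14): sigma_1^2 = 1
assert np.allclose(s3 @ s1, -s1 @ s3)              # (14): sigma_3 sigma_1 = e^{i pi} sigma_1 sigma_3
assert np.allclose(W @ W, np.eye(2))               # (16): W^2 = 1
assert np.allclose(s1, W @ s3 @ np.linalg.inv(W))  # (17): sigma_1 = W sigma_3 W^{-1}
```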
Here we list some of their important properties : $$\Sigma_{1}^{n}=\Sigma_{3}^{n}={\bf 1}_{n}\ ;\quad\Sigma_{1}^{\dagger}=\Sigma_{% 1}^{n-1},\ \Sigma_{3}^{\dagger}=\Sigma_{3}^{n-1}\ ;\quad\Sigma_{3}\Sigma_{1}=% \sigma\Sigma_{1}\Sigma_{3}\ .$$ (19) For $n=3$ and $n=4$ $\Sigma_{1}$ and its powers are given respectively as $$\Sigma_{1}=\left(\begin{array}[]{ccc}0&&1\\ 1&0&\\ &1&0\end{array}\right),\quad\Sigma_{1}^{2}=\left(\begin{array}[]{ccc}0&1&\\ &0&1\\ 1&&0\end{array}\right)$$ (20) and $$\Sigma_{1}=\left(\begin{array}[]{cccc}0&&&1\\ 1&0&&\\ &1&0&\\ &&1&0\end{array}\right),\quad\Sigma_{1}^{2}=\left(\begin{array}[]{cccc}0&&1&\\ &0&&1\\ 1&&0&\\ &1&&0\end{array}\right),\quad\Sigma_{1}^{3}=\left(\begin{array}[]{cccc}0&1&&\\ &0&1&\\ &&0&1\\ 1&&&0\end{array}\right).$$ (21) If we define a Vandermonde matrix $W$ based on $\sigma$ as $$\displaystyle W$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\sqrt{n}}\left(\begin{array}[]{ccccccc}1&1&1&\cdot&\cdot% &\cdot&1\\ 1&\sigma^{n-1}&\sigma^{2(n-1)}&\cdot&\cdot&\cdot&\sigma^{(n-1)^{2}}\\ 1&\sigma^{n-2}&\sigma^{2(n-2)}&\cdot&\cdot&\cdot&\sigma^{(n-1)(n-2)}\\ \cdot&\cdot&\cdot&&&&\cdot\\ \cdot&\cdot&\cdot&&&&\cdot\\ 1&\sigma^{2}&\sigma^{4}&\cdot&\cdot&\cdot&\sigma^{2(n-1)}\\ 1&\sigma&\sigma^{2}&\cdot&\cdot&\cdot&\sigma^{n-1}\end{array}\right),$$ (22) $$\displaystyle W^{\dagger}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\sqrt{n}}\left(\begin{array}[]{ccccccc}1&1&1&\cdot&\cdot% &\cdot&1\\ 1&\sigma&\sigma^{2}&\cdot&\cdot&\cdot&\sigma^{n-1}\\ 1&\sigma^{2}&\sigma^{4}&\cdot&\cdot&\cdot&\sigma^{2(n-1)}\\ \cdot&\cdot&\cdot&&&&\cdot\\ \cdot&\cdot&\cdot&&&&\cdot\\ 1&\sigma^{n-2}&\sigma^{2(n-2)}&\cdot&\cdot&\cdot&\sigma^{(n-1)(n-2)}\\ 1&\sigma^{n-1}&\sigma^{2(n-1)}&\cdot&\cdot&\cdot&\sigma^{(n-1)^{2}}\end{array}% \right),$$ (23) then it is not difficult to see $$\displaystyle W^{\dagger}W=WW^{\dagger}={\bf 1}_{n},$$ (24) $$\displaystyle\Sigma_{1}=W\Sigma_{3}W^{\dagger}=W\Sigma_{3}W^{-1}.$$ (25) Since $W$ corresponds to the Walsh–Hadamard 
matrix (15), it may be possible to call $W$ the generalized Walsh–Hadamard matrix. If we write $W^{\dagger}=(w_{ab})$, then $$w_{ab}=\frac{1}{\sqrt{n}}\sigma^{ab}=\frac{1}{\sqrt{n}}\mathrm{exp}\left(\frac% {2\pi i}{n}ab\right)\quad\textrm{for}\quad 0\leq a,\ b\leq n-1.$$ This is just the coefficient matrix of Discrete Fourier Transform (DFT) if $n=2^{k}$ for some $k\in{\mathbf{N}}$, see [4]. For $n=3$ and $n=4$ $W$ is given respectively as $$W=\frac{1}{\sqrt{3}}\left(\begin{array}[]{ccc}1&1&1\\ 1&\sigma^{2}&\sigma\\ 1&\sigma&\sigma^{2}\end{array}\right)=\frac{1}{\sqrt{3}}\left(\begin{array}[]{% ccc}1&1&1\\ 1&\frac{-1-i\sqrt{3}}{2}&\frac{-1+i\sqrt{3}}{2}\\ 1&\frac{-1+i\sqrt{3}}{2}&\frac{-1-i\sqrt{3}}{2}\end{array}\right)$$ (26) and $$W=\frac{1}{2}\left(\begin{array}[]{cccc}1&1&1&1\\ 1&\sigma^{3}&\sigma^{2}&\sigma\\ 1&\sigma^{2}&1&\sigma^{2}\\ 1&\sigma&\sigma^{2}&\sigma^{3}\end{array}\right)=\frac{1}{2}\left(\begin{array% }[]{cccc}1&1&1&1\\ 1&-i&-1&i\\ 1&-1&1&-1\\ 1&i&-1&-i\end{array}\right).$$ (27) We note that the generalized Pauli and Walsh–Hadamard matrices in three and four level systems can be constructed in a quantum optical manner (by using Rabi oscillations of several types), see [8] and [9]. Concerning an interesting application of the generalized Walsh–Hadamard one in three and four level systems to algebraic equation, see [11]. 4 Explicit Construction of the Generalized Pauli and Generalized Walsh–Hadamard Matrices First let us construct the generalized Pauli matrices. From (12) it is easy to see $$A_{0}(\{0,2\pi/n,4\pi/n,\cdots,2(n-1)\pi/n\})=\Sigma_{3}.$$ (28) Next we construct $\Sigma_{1}$. From (12) we also set $$A_{j}=A_{j}(\{0,\cdots,0,1\};\pi/2)=\left(\begin{array}[]{cccc}{\bf 1}_{j-2}&&% &\\ &0&1&\\ &-1&0&\\ &&&{\bf 1}_{n-j}\end{array}\right)$$ (29) for $j=2,\cdots,n$. 
Then it is not difficult to see $$A_{2}A_{3}\cdots A_{n}=\left(\begin{array}[]{ccccc}0&&&&1\\ -1&0&&&\\ 0&-1&0&&\\ &&\ddots&\ddots&\\ &&&-1&0\end{array}\right).$$ (30) Therefore if we choose $A_{0}$ as $$A_{0}=A_{0}(\{0,\pi,\cdots,\pi\})=\left(\begin{array}[]{ccccc}1&&&&\\ &-1&&&\\ &&-1&&\\ &&&\ddots&\\ &&&&-1\end{array}\right)$$ (31) then we finally obtain $$A_{0}A_{2}A_{3}\cdots A_{n}=\Sigma_{1}.$$ (32) From (28) and (32) we can construct all the generalized Pauli matrices $\left\{\mathrm{e}^{i\phi(a,b)}\Sigma_{1}^{a}\Sigma_{3}^{b}\ |\ 0\leq a,\ b\leq n-1\right\}$, where $\mathrm{e}^{i\phi(a,b)}$ is some phase depending on $a$ and $b$. Similarly we can construct the matrix $$K=\left(\begin{array}[]{cccccc}1&&&&&\\ &&&&&1\\ &&&&1&\\ &&&\cdot&&\\ &&\cdot&&&\\ &1&&&&\end{array}\right)$$ (33) as follows. If $n=2k$, then $$\displaystyle A_{0}(\{\underbrace{0,\cdots,0}_{\mbox{$k+1$}},\underbrace{\pi,\cdots,\pi}_{\mbox{$k-1$}}\})A_{k+2}(\{0,\cdots,0,1,0\};\pi/2)A_{k+3}(\{0,\cdots,0,1,0,0\};\pi/2)\cdots$$ $$\displaystyle\times$$ $$\displaystyle A_{2k-1}(\{0,0,1,0,\cdots,0\};\pi/2)A_{2k}(\{0,1,0,\cdots,0\};\pi/2)=K$$ (34) and if $n=2k-1$, then $$\displaystyle A_{0}(\{\underbrace{0,\cdots,0}_{\mbox{$k$}},\underbrace{\pi,\cdots,\pi}_{\mbox{$k-1$}}\})A_{k+1}(\{0,\cdots,0,1\};\pi/2)A_{k+2}(\{0,\cdots,0,1,0\};\pi/2)\cdots$$ $$\displaystyle\times$$ $$\displaystyle A_{2k-2}(\{0,0,1,0,\cdots,0\};\pi/2)A_{2k-1}(\{0,1,0,\cdots,0\};\pi/2)=K.$$ (35) Both $\Sigma_{1}$ and $K$ play an important role in constructing the exchange (swap) gate in two–qudit systems, as in Figure 2, where $\Sigma$=$\Sigma_{1}$. To be more precise, see [12]. It is interesting to note the simple relation $$W^{2}=K.$$ (36) Namely, the generalized Walsh–Hadamard matrix $W$ (22) is a square root of $K$. Second we want to construct the generalized Walsh–Hadamard matrix, which is however very hard. Let us show only the cases of $n=3$, $4$ and $5$. 
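All of the objects above are concrete enough to be built and checked by machine. The NumPy sketch below is ours, not from the paper; it constructs the modules $A_{0}$ and $A_{j}$ of (12) and verifies the identities (19), (25), (28), (32) and (36) for one small $n$:

```python
import numpy as np

n = 5
sigma = np.exp(2j * np.pi / n)

# Generalized Pauli matrices (18): Sigma_1 is a cyclic shift, Sigma_3 is diagonal
Sigma1 = np.roll(np.eye(n, dtype=complex), 1, axis=0)
Sigma3 = np.diag(sigma ** np.arange(n))
# (19): Sigma_1^n = Sigma_3^n = 1 and Sigma_3 Sigma_1 = sigma Sigma_1 Sigma_3
assert np.allclose(np.linalg.matrix_power(Sigma1, n), np.eye(n))
assert np.allclose(np.linalg.matrix_power(Sigma3, n), np.eye(n))
assert np.allclose(Sigma3 @ Sigma1, sigma * Sigma1 @ Sigma3)

# Generalized Walsh-Hadamard matrix (22): W_{ab} = sigma^{-ab} / sqrt(n)
a = np.arange(n)
W = sigma ** (-np.outer(a, a)) / np.sqrt(n)
# (24) and (25): W is unitary and Sigma_1 = W Sigma_3 W^dagger
assert np.allclose(W.conj().T @ W, np.eye(n))
assert np.allclose(Sigma1, W @ Sigma3 @ W.conj().T)

# Modules (12): A_0 is the diagonal phase matrix, and A_j of (29) is a
# rotation by pi/2 in the (j-1, j) plane (1-indexed rows and columns)
def A0(thetas):
    return np.diag(np.exp(1j * np.array(thetas)))

def Aj(j):
    A = np.eye(n, dtype=complex)
    A[j - 2:j, j - 2:j] = [[0, 1], [-1, 0]]
    return A

# (28): A_0({0, 2*pi/n, ..., 2(n-1)*pi/n}) = Sigma_3
assert np.allclose(A0(2 * np.pi * a / n), Sigma3)

# (30)-(32): A_0({0, pi, ..., pi}) A_2 A_3 ... A_n = Sigma_1
P = np.eye(n, dtype=complex)
for j in range(2, n + 1):
    P = P @ Aj(j)
assert np.allclose(A0([0] + [np.pi] * (n - 1)) @ P, Sigma1)

# (33) and (36): W^2 equals the flip matrix K
K = np.eye(n)[:, (-a) % n]
assert np.allclose(W @ W, K)
```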
(a) $n$ = 3 : For (26) we have $$W=A_{0}A_{3}A_{2}A_{0}^{{}^{\prime}}$$ (37) where each of matrices is given by $$\displaystyle A_{0}$$ $$\displaystyle=$$ $$\displaystyle A_{0}(\{0,2\pi/3,4\pi/3\})=\left(\begin{array}[]{ccc}1&&\\ &\mathrm{e}^{i2\pi/3}&\\ &&\mathrm{e}^{i4\pi/3}\end{array}\right),$$ $$\displaystyle A_{0}^{{}^{\prime}}$$ $$\displaystyle=$$ $$\displaystyle A_{0}^{{}^{\prime}}(\{-\pi/12,7\pi/12,0\})=\left(\begin{array}[]% {ccc}\mathrm{e}^{-i\pi/12}&&\\ &\mathrm{e}^{i7\pi/12}&\\ &&1\end{array}\right)$$ and $$A_{3}=A_{3}\left(\{1/\sqrt{2},1/\sqrt{2}\};\cos^{-1}(1/\sqrt{3})\right)=\frac{% 1}{\sqrt{3}}\left(\begin{array}[]{rrr}\frac{\sqrt{3}+1}{2}&-\frac{\sqrt{3}-1}{% 2}&1\\ -\frac{\sqrt{3}-1}{2}&\frac{\sqrt{3}+1}{2}&1\\ -1&-1&1\end{array}\right)$$ and $$A_{2}=A_{2}\left(\{\mathrm{e}^{-i\pi/2}\};\pi/4\right)=\left(\begin{array}[]{% rrr}\frac{1}{\sqrt{2}}&-\frac{i}{\sqrt{2}}&\\ -\frac{i}{\sqrt{2}}&\frac{1}{\sqrt{2}}&\\ &&1\end{array}\right).$$ Here we have used $$\cos(\pi/12)=\frac{\sqrt{6}+\sqrt{2}}{4}\quad\mbox{and}\quad\sin(\pi/12)=\frac% {\sqrt{6}-\sqrt{2}}{4}.$$ In this case, the number of modules is $4$. 
(b) $n$ = 4 : For (27) we have $$W=A_{0}A_{4}SA_{3}A_{2}A_{0}^{{}^{\prime}}S$$ (38) where each of matrices is given by $$\displaystyle A_{0}$$ $$\displaystyle=$$ $$\displaystyle A_{0}(\{0,2\pi/4,4\pi/4,6\pi/4\})=\left(\begin{array}[]{cccc}1&&% &\\ &i&&\\ &&-1&\\ &&&-i\end{array}\right),$$ $$\displaystyle A_{0}^{{}^{\prime}}$$ $$\displaystyle=$$ $$\displaystyle A_{0}(\{\pi/4,5\pi/4,0,0\})=\left(\begin{array}[]{cccc}\frac{1+i% }{\sqrt{2}}&&&\\ &-\frac{1+i}{\sqrt{2}}&&\\ &&1&\\ &&&1\end{array}\right)$$ and $$A_{4}=A_{4}\left(\{1/\sqrt{3},1/\sqrt{3},1/\sqrt{3}\};\pi/3\right)=\left(% \begin{array}[]{rrrr}\frac{5}{6}&-\frac{1}{6}&-\frac{1}{6}&\frac{1}{2}\\ -\frac{1}{6}&\frac{5}{6}&-\frac{1}{6}&\frac{1}{2}\\ -\frac{1}{6}&-\frac{1}{6}&\frac{5}{6}&\frac{1}{2}\\ -\frac{1}{2}&-\frac{1}{2}&-\frac{1}{2}&\frac{1}{2}\end{array}\right)$$ and $$A_{3}=A_{3}\left(\{1/\sqrt{2},1/\sqrt{2}\};\cos^{-1}(-1/3)\right)=\left(\begin% {array}[]{rrrr}\frac{1}{3}&-\frac{2}{3}&\frac{2}{3}&\\ -\frac{2}{3}&\frac{1}{3}&\frac{2}{3}&\\ -\frac{2}{3}&-\frac{2}{3}&-\frac{1}{3}&\\ &&&1\end{array}\right)$$ and $$S=A_{0}(\{0,0,\pi,0\})A_{3}(\{0,1\};\pi/2)=\left(\begin{array}[]{cccc}1&&&\\ &0&1&\\ &1&0&\\ &&&1\end{array}\right)$$ and $$A_{2}=A_{2}\left(\{\mathrm{e}^{i\pi/2}\};\pi/4\right)=\left(\begin{array}[]{% cccc}\frac{1}{\sqrt{2}}&\frac{i}{\sqrt{2}}&&\\ \frac{i}{\sqrt{2}}&\frac{1}{\sqrt{2}}&&\\ &&1&\\ &&&1\end{array}\right).$$ In this case, the number of modules is $9$. Last we show a calculation for the case of $n=5$. However, it is relatively complicated as shown in the following. 
(c) $n$ = 5 : We have $$W=A_{0}A_{5}A_{4}S_{1}A_{3}S_{2}A_{0}^{{}^{\prime}}$$ (39) where each of matrices is given by $$\displaystyle A_{0}$$ $$\displaystyle=$$ $$\displaystyle A_{0}(\{0,2\pi/5,4\pi/5,6\pi/5,8\pi/5\})=\left(\begin{array}[]{% ccccc}1&&&&\\ &\sigma&&&\\ &&\sigma^{2}&&\\ &&&\sigma^{3}&\\ &&&&\sigma^{4}\end{array}\right),$$ $$\displaystyle A_{0}^{{}^{\prime}}$$ $$\displaystyle=$$ $$\displaystyle A_{0}(\{9\pi/10,13\pi/10,-3\pi/10,\pi/10,0\})=\left(\begin{array% }[]{ccccc}\mathrm{e}^{i9\pi/10}&&&&\\ &\mathrm{e}^{i13\pi/10}&&&\\ &&\mathrm{e}^{-i3\pi/10}&&\\ &&&\mathrm{e}^{i\pi/10}&\\ &&&&1\end{array}\right)$$ where $$\sigma=\mathrm{e}^{i2\pi/5}=\cos(2\pi/5)+i\sin(2\pi/5)=\frac{\sqrt{5}-1}{4}+i% \frac{\sqrt{10+2\sqrt{5}}}{4}$$ and $$\displaystyle A_{5}$$ $$\displaystyle=$$ $$\displaystyle A_{5}\left(\{1/2,1/2,1/2,1/2\};\cos^{-1}(1/\sqrt{5})\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{\sqrt{5}}\left(\begin{array}[]{ccccc}\frac{3\sqrt{5}+1}{% 4}&-\frac{\sqrt{5}-1}{4}&-\frac{\sqrt{5}-1}{4}&-\frac{\sqrt{5}-1}{4}&1\\ -\frac{\sqrt{5}-1}{4}&\frac{3\sqrt{5}+1}{4}&-\frac{\sqrt{5}-1}{4}&-\frac{\sqrt% {5}-1}{4}&1\\ -\frac{\sqrt{5}-1}{4}&-\frac{\sqrt{5}-1}{4}&\frac{3\sqrt{5}+1}{4}&-\frac{\sqrt% {5}-1}{4}&1\\ -\frac{\sqrt{5}-1}{4}&-\frac{\sqrt{5}-1}{4}&-\frac{\sqrt{5}-1}{4}&\frac{3\sqrt% {5}+1}{4}&1\\ -1&-1&-1&-1&1\end{array}\right)$$ and $$A_{4}=A_{4}\left(\{a/u,\alpha/u,-\bar{\alpha}/u\};\theta_{4}\right)=\left(% \begin{array}[]{ccccc}1-sa^{2}&-sa\bar{\alpha}&sa\alpha&\frac{a}{\sqrt{5}}&\\ -sa\alpha&1-s|\alpha|^{2}&s\alpha^{2}&\frac{\alpha}{\sqrt{5}}&\\ sa\bar{\alpha}&s\bar{\alpha}^{2}&1-s|\alpha|^{2}&-\frac{\bar{\alpha}}{\sqrt{5}% }&\\ -\frac{a}{\sqrt{5}}&-\frac{\bar{\alpha}}{\sqrt{5}}&\frac{\alpha}{\sqrt{5}}&-% \frac{a}{\sqrt{5}}&\\ &&&&1\end{array}\right)$$ where $$\displaystyle a\equiv\sin(2\pi/5),\quad\alpha\equiv\sin(\pi/5)+i\frac{\sqrt{5}% }{2}=\frac{\sqrt{10-2\sqrt{5}}}{4}+i\frac{\sqrt{5}}{2},\quad\cos(\theta_{4})% \equiv-\frac{a}{\sqrt{5}}$$ 
$$\displaystyle u\equiv\sqrt{4+\cos^{2}(2\pi/5)},\quad s\equiv\frac{2(35+\sqrt{5% })}{305}\left(1+\frac{\sin(2\pi/5)}{\sqrt{5}}\right)$$ and $$\displaystyle S_{1}$$ $$\displaystyle=$$ $$\displaystyle A_{0}(\{0,\pi,\pi,0,0\})A_{2}(\{1\};\pi/2)A_{3}(\{0,1\};\pi/2)=% \left(\begin{array}[]{ccccc}0&&1&&\\ 1&0&&&\\ &1&0&&\\ &&&1&\\ &&&&1\end{array}\right),$$ $$\displaystyle S_{2}$$ $$\displaystyle=$$ $$\displaystyle A_{0}(\{0,0,\pi,0,0\})A_{3}(\{1,0\};\pi/2)=\left(\begin{array}[]% {ccccc}0&&1&&\\ &1&&&\\ 1&&0&&\\ &&&1&\\ &&&&1\end{array}\right)$$ and $$A_{3}=A_{3}\left(\{-\beta/v,\bar{\beta}/v\};\theta_{3}\right)=\left(\begin{% array}[]{ccccc}1-\frac{|\beta|^{2}}{\sqrt{5}\left(\sqrt{5}-(a+ta^{2})\right)}&% \frac{\beta^{2}}{\sqrt{5}\left(\sqrt{5}-(a+ta^{2})\right)}&-\frac{\beta}{\sqrt% {5}}&&\\ \frac{\bar{\beta}^{2}}{\sqrt{5}\left(\sqrt{5}-(a+ta^{2})\right)}&1-\frac{|% \beta|^{2}}{\sqrt{5}\left(\sqrt{5}-(a+ta^{2})\right)}&\frac{\bar{\beta}}{\sqrt% {5}}&&\\ \frac{\bar{\beta}}{\sqrt{5}}&-\frac{\beta}{\sqrt{5}}&-\frac{a+ta^{2}}{\sqrt{5}% }&&\\ &&&1&\\ &&&&1\end{array}\right)$$ where $$\displaystyle\beta\equiv\bar{\alpha}+ta\alpha,\quad v\equiv\sqrt{5-(a+ta^{2})^% {2}}=\sqrt{2}|\beta|,\quad\cos(\theta_{3})\equiv-\frac{a+ta^{2}}{\sqrt{5}}$$ $$\displaystyle t\equiv\frac{2(7\sqrt{5}+1)}{61}\left(1+\frac{\sin(2\pi/5)}{% \sqrt{5}}\right).$$ A comment is in order. (1) Our construction is not necessarily minimal, namely a number of modules can be reduced, see [13]. (2) For $n\geq 6$ we have not succeeded in obtaining the formula like (37) or (38) or (39). In general, a calculation tends to become more and more complicated as $n$ becomes large. (3) The heart of quantum computation is a superposition of (possible) states. Therefore a superposition like $${|{0}\rangle}\longrightarrow\frac{{|{0}\rangle}+{|{1}\rangle}+{|{2}\rangle}+{|% {3}\rangle}+{|{4}\rangle}}{\sqrt{5}}$$ in the $5$ level system must be constructed in a quick and clean manner. 
The generalized Walsh–Hadamard matrix just gives such a superposition. Our construction in that system seems to be complicated, from which one may conclude that a qudit theory with $n\geq 5$ is not realistic. 5 Discussion In this paper we treated Jarlskog’s parametrization of unitary matrices as a complete set of modules in qudit theory and constructed the generalized Pauli matrices in the general case and the generalized Walsh–Hadamard matrix in the cases of $n=3$, $4$ and $5$. In spite of all our efforts we could not construct the Walsh–Hadamard matrix in the general case, so its construction is left as a future task. However, our view on this problem is negative. In general, a calculation to construct it tends to become more and more complicated as $n$ becomes large; see (37), (38) and (39). Therefore it may be possible to say that a qudit theory with $n\geq 5$ is not realistic. Further study will be required. Our next task is to realize these modules $\{A_{j}\ |\ j=0,2,\cdots,n\}$ in a quantum optical method, which will be discussed in a forthcoming paper. Acknowledgment. K. Fujii wishes to thank Mikio Nakahara and Shin’ichi Nojiri for their helpful comments and suggestions. Appendix Proof of the Formula (8) In this appendix we derive the formula (8) to make the paper self–contained. Since $$X_{j}=\left(\begin{array}[]{ccc}{\bf 0}_{j-1}&{|{z_{j}}\rangle}&\\ -{\langle{z_{j}}|}&0&\\ &&{\bf 0}_{n-j}\end{array}\right)\equiv\left(\begin{array}[]{cc}K&\\ &{\bf 0}_{n-j}\end{array}\right)\quad\Longrightarrow\quad\mathrm{e}^{X_{j}}=\left(\begin{array}[]{cc}\mathrm{e}^{K}&\\ &{\bf 1}_{n-j}\end{array}\right)$$ we have only to calculate the term $\mathrm{e}^{K}$, which is an easy task. 
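The closed form (8) can also be checked numerically before carrying out the expansion, by comparing it against a truncated Taylor series of the matrix exponential. The following NumPy sketch is ours (the sizes $n$, $j$ and the random vector are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, j = 5, 3                       # e^{X_j} acts nontrivially on the first j coordinates
z = rng.normal(size=j - 1) + 1j * rng.normal(size=j - 1)   # the vector |z_j>

# X_j = [[0_{j-1}, |z>, 0], [-<z|, 0, 0], [0, 0, 0_{n-j}]]
X = np.zeros((n, n), dtype=complex)
X[:j - 1, j - 1] = z
X[j - 1, :j - 1] = -z.conj()

# Truncated Taylor series for e^{X_j} (converges rapidly at this size)
E = np.eye(n, dtype=complex)
term = np.eye(n, dtype=complex)
for k in range(1, 60):
    term = term @ X / k
    E = E + term

# Closed form (8), with beta = sqrt(<z_j|z_j>) and the normalized vector z~
beta = np.linalg.norm(z)
zt = z / beta
C = np.eye(n, dtype=complex)
C[:j - 1, :j - 1] -= (1 - np.cos(beta)) * np.outer(zt, zt.conj())
C[:j - 1, j - 1] = np.sin(beta) * zt
C[j - 1, :j - 1] = -np.sin(beta) * zt.conj()
C[j - 1, j - 1] = np.cos(beta)

assert np.allclose(E, C)
```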
From $$\displaystyle K$$ $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{cc}{\bf 0}_{j-1}&{|{z_{j}}\rangle}\\ -{\langle{z_{j}}|}&0\end{array}\right),\quad K^{2}=\left(\begin{array}[]{cc}-{% |{z_{j}}\rangle}{\langle{z_{j}}|}&\\ &-{\langle{z_{j}}|{z_{j}}\rangle}\end{array}\right),$$ $$\displaystyle K^{3}$$ $$\displaystyle=$$ $$\displaystyle\left(\begin{array}[]{cc}{\bf 0}_{j-1}&-{\langle{z_{j}}|{z_{j}}% \rangle}{|{z_{j}}\rangle}\\ {\langle{z_{j}}|{z_{j}}\rangle}{\langle{z_{j}}|}&0\end{array}\right)=-{\langle% {z_{j}}|{z_{j}}\rangle}K$$ (40) we have important relations $$K^{2n+1}=\left(-{\langle{z_{j}}|{z_{j}}\rangle}\right)^{n}K,\quad K^{2n+2}=% \left(-{\langle{z_{j}}|{z_{j}}\rangle}\right)^{n}K^{2}\quad\mbox{for}\quad n% \geq 0.$$ Therefore $$\displaystyle\mathrm{e}^{K}$$ $$\displaystyle=$$ $$\displaystyle{\bf 1}_{j}+\sum_{n=0}^{\infty}\frac{1}{(2n+2)!}K^{2n+2}+\sum_{n=% 0}^{\infty}\frac{1}{(2n+1)!}K^{2n+1}$$ $$\displaystyle=$$ $$\displaystyle{\bf 1}_{j}+\sum_{n=0}^{\infty}\frac{1}{(2n+2)!}\left(-{\langle{z% _{j}}|{z_{j}}\rangle}\right)^{n}K^{2}+\sum_{n=0}^{\infty}\frac{1}{(2n+1)!}% \left(-{\langle{z_{j}}|{z_{j}}\rangle}\right)^{n}K$$ $$\displaystyle=$$ $$\displaystyle{\bf 1}_{j}+\sum_{n=0}^{\infty}(-1)^{n}\frac{\left(\sqrt{{\langle% {z_{j}}|{z_{j}}\rangle}}\right)^{2n}}{(2n+2)!}K^{2}+\sum_{n=0}^{\infty}(-1)^{n% }\frac{\left(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}\right)^{2n}}{(2n+1)!}K$$ $$\displaystyle=$$ $$\displaystyle{\bf 1}_{j}-\frac{1}{{\langle{z_{j}}|{z_{j}}\rangle}}\sum_{n=0}^{% \infty}(-1)^{n+1}\frac{\left(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}\right)^{2n% +2}}{(2n+2)!}K^{2}+\frac{1}{\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}}\sum_{n=0}^% {\infty}(-1)^{n}\frac{\left(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}\right)^{2n+% 1}}{(2n+1)!}K$$ $$\displaystyle=$$ $$\displaystyle{\bf 1}_{j}-\frac{1}{{\langle{z_{j}}|{z_{j}}\rangle}}\sum_{n=1}^{% \infty}(-1)^{n}\frac{\left(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}\right)^{2n}}% 
{(2n)!}K^{2}+\frac{1}{\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}}\sum_{n=0}^{% \infty}(-1)^{n}\frac{\left(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}\right)^{2n+1% }}{(2n+1)!}K$$ $$\displaystyle=$$ $$\displaystyle{\bf 1}_{j}-\frac{1}{{\langle{z_{j}}|{z_{j}}\rangle}}\left(\cos(% \sqrt{{\langle{z_{j}}|{z_{j}}\rangle}})-1\right)K^{2}+\frac{\sin(\sqrt{{% \langle{z_{j}}|{z_{j}}\rangle}})}{\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}}K$$ $$\displaystyle=$$ $$\displaystyle{\bf 1}_{j}+\left(1-\cos(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}})% \right)\frac{1}{{\langle{z_{j}}|{z_{j}}\rangle}}K^{2}+\sin(\sqrt{{\langle{z_{j% }}|{z_{j}}\rangle}})\frac{1}{\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}}K.$$ If we define a normalized vector as $${|{\tilde{z}_{j}}\rangle}=\frac{1}{\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}}{|{z% _{j}}\rangle}\ \Longrightarrow\ {\langle{\tilde{z}_{j}}|{\tilde{z}_{j}}\rangle% }=1$$ then $$\frac{1}{\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}}K=\left(\begin{array}[]{cc}{% \bf 0}_{j-1}&{|{\tilde{z}_{j}}\rangle}\\ -{\langle{\tilde{z}_{j}}|}&0\end{array}\right),\quad\frac{1}{{\langle{z_{j}}|{% z_{j}}\rangle}}K^{2}=\left(\begin{array}[]{cc}-{|{\tilde{z}_{j}}\rangle}{% \langle{\tilde{z}_{j}}|}&\\ &-1\end{array}\right).$$ Therefore $$\mathrm{e}^{K}=\left(\begin{array}[]{cc}{\bf 1}_{j-1}-\left(1-\cos(\sqrt{{% \langle{z_{j}}|{z_{j}}\rangle}})\right){|{\tilde{z}_{j}}\rangle}{\langle{% \tilde{z}_{j}}|}&\sin(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}){|{\tilde{z}_{j}}% \rangle}\\ -\sin(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}){\langle{\tilde{z}_{j}}|}&\cos(% \sqrt{{\langle{z_{j}}|{z_{j}}\rangle}})\end{array}\right).$$ (41) As a result we obtain the formula (8) $$\mathrm{e}^{X_{j}}=\left(\begin{array}[]{ccc}{\bf 1}_{j-1}-\left(1-\cos(\sqrt{% {\langle{z_{j}}|{z_{j}}\rangle}})\right){|{\tilde{z}_{j}}\rangle}{\langle{% \tilde{z}_{j}}|}&\sin(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}){|{\tilde{z}_{j}}% \rangle}&\\ -\sin(\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}}){\langle{\tilde{z}_{j}}|}&\cos(% 
\sqrt{{\langle{z_{j}}|{z_{j}}\rangle}})&\\ &&{\bf 1}_{n-j}\end{array}\right).$$ (42) References [1] C. Jarlskog : A recursive parametrization of unitary matrices, to appear in Journal of Mathematical Physics, math-ph/0504049. [2] P. Dita : Factorization of Unitary Matrices, J. Phys. A 36(2003), 2781, math-ph/0103005. [3] K. Fujii : Comment on “A Recursive Parametrisation of Unitary Matrices”, quant-ph/0505047. [4] P. W. Shor : Polynomial–Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer, SIAM J. Sci. Statist. Comput. 26(1997) 1484, quant-ph/9508027. [5] A. Barenco, C. H. Bennett, R. Cleve, D. P. Vincenzo, N. Margolus, P. Shor, T. Sleator, J. Smolin and H. Weinfurter : Elementary gates for quantum computation, Phys. Rev. A 52(1995), 3457, quant-ph/9503016. [6] K. Fujii : A Lecture on Quantum Logic Gates, The Bulletin of Yokohama City University, 53(2002), 81-90, quant-ph/0101054. [7] K. Fujii : Introduction to Grassmann Manifolds and Quantum Computation, J. Applied Math, 2(2002), 371, quant-ph/0103011. [8] K. Fujii : Quantum Optical Construction of Generalized Pauli and Walsh–Hadamard Matrices in Three Level Systems, Lecture note based on the lecture (April/2002–March/2003) at graduate course of Yokohama City University, quant-ph/0309132. [9] K. Fujii, K. Higashida, R. Kato and Y. Wada : A Rabi Oscillation in Four and Five Level Systems, to appear in Yokohama Mathematical Journal, quant-ph/0312060. [10] K. Funahashi : Explicit Construction of Controlled–U and Unitary Transformation in Two–Qudit, to appear in Yokohama Mathematical Journal, quant-ph/0304078. [11] K. Fujii : A Modern Introduction to Cardano and Ferrari Formulas in the Algebraic Equations, quant-ph/0311102. [12] K. Fujii : Exchange Gate on the Qudit Space and Fock Space, J. Opt. B : Quantum Semiclass. Opt, 5(2003), S613, quant-ph/0207002. [13] K. Funahashi : in progress.
A Radial Basis Function (RBF) Method for the Fully Nonlinear 1D Serre Green–Naghdi Equations Maurice S. Fabien, Department of Applied Mathematics, Rice University, Houston, TX 77005; e-mail: [email protected] (June 2014) Abstract In this paper, we present a method based on Radial Basis Functions (RBFs) for numerically solving the fully nonlinear 1D Serre Green-Naghdi equations. The approximation uses an RBF discretization in space and finite differences in time; the full discretization is obtained by the method of lines technique. For select test cases the approximation achieves spectral (exponential) accuracy. Complete matlab code of the numerical implementation is included in this paper (the logic is easy to follow, and the code is under 100 lines). Keywords: radial basis functions; mesh-free; method-of-lines; serre-green-naghdi 1 Introduction A spectral method that has gained attention recently is based on the so–called radial basis functions (RBFs). RBFs are relatively new, first being studied by Roland L. Hardy in the 1970s, and not gaining significant attention until the late 1980s. Originally RBFs were used in scattered data modeling; however, they have since seen a wide range of applications, including differential equations. RBFs offer a number of appealing properties: they are capable of being spectrally accurate, they are a type of meshfree method, they naturally produce multivariate approximations, and there is flexibility in the choice of a family of basis functions. RBFs can avoid costly mesh generation, scale to high dimensions, and have a diverse selection of basis functions with varying smoothness. 2 Introduction to RBF interpolation and differentiation In this section we provide an introduction to RBF interpolation and differentiation. Definition 1 Let $\Omega\subseteq\mathbb{R}^{d}$, and let $\|\cdot\|_{2}$ be the usual Euclidean norm on $\mathbb{R}^{d}$. 
A function ${\bf\Phi}:\Omega\times\Omega\to\mathbb{R}$ is said to be radial if there exists a function $\varphi:[0,\infty)\to\mathbb{R}$ such that $$\Phi(\mathbf{x},\mathbf{y})=\varphi(r),$$ where $r=\lVert\mathbf{x}-\mathbf{y}\rVert_{2}$. Definition 2 The function $\varphi(r)$ in Definition 1 is called a radial basis function. That is, a radial basis function is a real–valued function whose value depends only on the distance from some point $\vec{y}$ called a center, so that $\Phi(\vec{x},\vec{y})=\varphi(\lVert\vec{x}-\vec{y}\rVert_{2})$. In some cases radial basis functions contain a free parameter $\epsilon$, called a shape parameter. Suppose that a set of scattered node data is given: $$S=\{(\mathbf{x}_{i},f_{i}):\mathbf{x}_{i}\in\Omega,\,f_{i}\in\mathbb{R},\ \text{for }i=1,2,\ldots,N\},$$ where $f:\Omega\to\mathbb{R}$ with $f(\mathbf{x}_{i})=f_{i}$. Then, an RBF interpolant applied to the scattered node data takes the following form: $$\displaystyle g(\mathbf{x})=\sum_{i=1}^{N}\alpha_{i}\phi(\|\mathbf{x}-\mathbf{x}_{i}\|).$$ (1) The unknown linear combination weights $\{\alpha_{i}\}_{i=1}^{N}$ can be determined by enforcing $g|_{S}=f|_{S}$. 
This results in a linear system: $$\displaystyle\underbrace{\begin{bmatrix}\phi(||\mathbf{x}_{1}-\mathbf{x}_{1}||_{2})&\phi(||\mathbf{x}_{1}-\mathbf{x}_{2}||_{2})&\ldots&\phi(||\mathbf{x}_{1}-\mathbf{x}_{N}||_{2})\\ \phi(||\mathbf{x}_{2}-\mathbf{x}_{1}||_{2})&\phi(||\mathbf{x}_{2}-\mathbf{x}_{2}||_{2})&\ldots&\phi(||\mathbf{x}_{2}-\mathbf{x}_{N}||_{2})\\ \vdots&\vdots&\ddots&\vdots\\ \phi(||\mathbf{x}_{N}-\mathbf{x}_{1}||_{2})&\phi(||\mathbf{x}_{N}-\mathbf{x}_{2}||_{2})&\ldots&\phi(||\mathbf{x}_{N}-\mathbf{x}_{N}||_{2})\end{bmatrix}}_{{\bf A}}\underbrace{\begin{bmatrix}\alpha_{1}\\ \alpha_{2}\\ \vdots\\ \alpha_{N}\end{bmatrix}}_{{\bf\alpha}}=\underbrace{\begin{bmatrix}f_{1}\\ f_{2}\\ \vdots\\ f_{N}\end{bmatrix}}_{{\bf f}}.$$ (2) The matrix $\bf A$ is sometimes called the RBF interpolation or RBF system matrix. The RBF system matrix is always nonsingular for select RBFs $\phi$. For instance, the completely monotone multiquadric RBF leads to an invertible RBF system matrix, and so do strictly positive definite RBFs like the inverse multiquadrics and Gaussians (see Micchelli and mf for more details). There is flexibility in the choice of an RBF. For instance, common RBF choices are: compactly supported and finitely smooth; global and finitely smooth; and global, infinitely differentiable (which comes with a free parameter). Table 1 has a collection of some popular RBFs to illustrate the amount of variety there is in the selection of an RBF. Optimal choices for RBFs are still a current area of research. The wide applicability of RBFs makes the search for an optimal RBF challenging. In order to keep the spectral accuracy of RBFs and the invertibility of the system matrix in (2), we will use the Gaussian RBFs (displayed in Table 1). It should be noted that many of the RBFs contain a shape parameter $\epsilon$. These parameters offer a great deal of control, since each RBF center can be assigned its own shape parameter (if so desired). 
The shape parameter modifies the “flatness” of an RBF: the smaller $\epsilon$ is, the flatter the RBF becomes. Figures 1(c) and 1(d) visualize this behavior. In Figures 1(a) and 1(b) a contrasting behavior is shown for polynomial basis functions. As the degree of the polynomial basis increases, the basis functions become more and more oscillatory. The shape parameter can play a substantial role in the accuracy of approximations. A considerable amount of research has gone into the study of flat RBF interpolants. Flat RBF interpolants (global, infinitely differentiable) are interesting because spectral accuracy is obtainable in the limit as $\epsilon\to 0$. Interestingly enough, RBFs in the limit as $\epsilon\to 0$ have been shown to reproduce the classical pseudospectral methods (Chebyshev, Legendre, Jacobi, Fourier, etc.). More precise statements and further details can be obtained in fornbergaccuracy, Sarra, fornberg2004some, and driscoll2002interpolation. Even though RBF interpolants (global, infinitely differentiable) are capable of spectral accuracy in theory, and give rise to an invertible RBF system matrix, there are some computational considerations to be aware of. First, there is no agreed upon consensus for selecting RBF centers and shape parameters. Further, solving the linear system (2) directly in practice (often called RBF–Direct) is an unstable algorithm (see rbf_stable). This is due to the trade-off (uncertainty) principle: as $\epsilon\to 0$, the system matrix becomes more ill conditioned. Two stable algorithms currently exist for small shape parameters: the Contour–Padé and RBF–QR methods (see mf and rbf_stable). When RBF–Direct is used, selecting $\epsilon\ll 1$ is not always beneficial computationally. This is due to the dependence of the condition number of the system matrix on the shape parameter: as $\epsilon\to 0$, $\kappa({\bf A})\to\mathcal{O}(\epsilon^{-M})$ for large $M>0$ (see Figure 2). 
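This growth of $\kappa({\bf A})$ is easy to observe directly. A small sketch of ours (the grid size and the sequence of $\epsilon$ values are arbitrary illustrative choices):

```python
import numpy as np

# Condition number of the Gaussian RBF system matrix as epsilon shrinks
N = 15
x = np.linspace(-1, 1, N)
r = np.abs(x[:, None] - x[None, :])

conds = [np.linalg.cond(np.exp(-(eps * r) ** 2)) for eps in (8.0, 4.0, 2.0, 1.0)]

# kappa(A) inflates sharply each time epsilon is halved
assert all(c1 < c2 for c1, c2 in zip(conds, conds[1:]))
```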
Many strategies currently exist for selecting shape parameters (with RBF–Direct in mind); the monograph Sarra has plentiful coverage. A popular strategy is based on controlling the condition number of the system matrix so that it lies within a certain range, say $10^{9}\leq\kappa({\bf A})\leq 10^{15}$. If the condition number is not within the desired range, a different $\epsilon$ is selected and the condition number of the new system matrix is checked. RBF differentiation matrices depend on the system matrix, and thus they inherit its poor conditioning. To make this relationship more clear, we investigate how to construct RBF differentiation matrices. To approximate a derivative, we simply differentiate both sides of equation (1) to obtain $$\frac{\partial}{\partial\vec{x}_{i}}(g(\vec{x}))=\sum_{j=1}^{N}{\alpha_{j}\frac{\partial}{\partial\vec{x}_{i}}\phi(\lVert\vec{x}-\vec{x}_{j}\rVert_{2})}=\bigg{[}\frac{\partial}{\partial\vec{x}_{i}}{\bf A}\bigg{]}\vec{\alpha},$$ (3) for $i=1,2,\ldots,N$ and $\vec{\alpha}=[\alpha_{1},\ldots,\alpha_{N}]^{T}$. We define the first order evaluation matrix $\bf D_{1}$ such that $$({\bf D_{1}})_{ij}=\frac{\partial}{\partial\vec{x}_{i}}\phi(\lVert\vec{x}-\vec{x}_{j}\rVert_{2}),~{}~{}~{}i,j=1,2,\ldots N.$$ If we select an RBF that gives rise to a nonsingular interpolation matrix, equation (2) implies that $\vec{\alpha}={\bf A}^{-1}\vec{f}$. 
Then, from equation (3), we have $$\frac{\partial}{\partial\vec{x}_{i}}(g(\vec{x}))=\bigg{[}\frac{\partial}{\partial\vec{x}_{i}}{\bf A}\bigg{]}\vec{\alpha}={\bf D_{1}}{\bf A}^{-1}\vec{f},~{}~{}~{}\text{ for }i=1,2,\ldots,N,$$ and we define the differentiation matrix $\bf D_{x}$ to be $$\bf D_{x}={\bf D_{1}}{\bf A}^{-1}.$$ (4) Following this process one can easily construct differentiation matrices of arbitrary order: $$\frac{\partial^{(m)}}{\partial\vec{x}_{i}^{(m)}}(g(\vec{x}))=\bigg{[}\frac{\partial^{(m)}}{\partial\vec{x}^{(m)}_{i}}{\bf A}\bigg{]}\vec{\alpha}=\bigg{[}\frac{\partial^{(m)}}{\partial\vec{x}^{(m)}_{i}}{\bf A}\bigg{]}{\bf A}^{-1}\vec{f}.$$ (5) Thus, the $m$th order RBF differentiation matrix is given by $${\bf D_{m}}{\bf A}^{-1}.$$ (6) Note that equation (4) is a matrix system, not a linear system. The unknown variable ($\bf D_{x}$) in the equation ${\bf D_{x}}{\bf A}={\bf D_{1}}$ is a matrix. In practice the matrix ${\bf A}^{-1}$ is never actually formed; a matrix system solver is used instead. In matlab this can be done by the forward slash operator, or, for singular or near singular problems, the pseudoinverse. The differentiation matrices derived above are called global, since they use information from every center. 2.1 Local RBF differentiation (RBF Finite Differences) The idea of local differentiation, familiar from finite differences, has been applied to RBFs, and it is very popular in the RBF literature, especially when concerning time dependent PDEs. Local RBFs, also known as RBF–FD (radial basis function finite differences), have produced a lot of interest due to their interpolation and differentiation matrix structure. The interpolation and differentiation matrices generated by local RBFs have a controllable amount of sparsity. This sparsity can allow for much larger problems and can make use of parallelism. The main drawback of local RBFs is that spectral accuracy is no longer obtained. 
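The global construction (4) can be sketched in a few lines. This is our NumPy version (the paper's code is in matlab); the grid, shape parameter and test function are illustrative:

```python
import numpy as np

# Global Gaussian RBF differentiation matrix D_x = D_1 A^{-1} in 1D
N, eps = 20, 3.0
x = np.linspace(-1, 1, N)
d = x[:, None] - x[None, :]                 # signed differences x_i - x_j
A = np.exp(-(eps * d) ** 2)                 # system matrix (2)
D1 = -2 * eps**2 * d * A                    # entries d/dx phi(|x - x_j|) at x = x_i
Dx = np.linalg.solve(A.T, D1.T).T           # solves Dx A = D1 without forming A^{-1}

# Differentiate a test function and compare with the exact derivative
f = np.sin(np.pi * x)
err = np.max(np.abs(Dx @ f - np.pi * np.cos(np.pi * x)))
```

Note that $\bf A$ is symmetric, so solving ${\bf A}^{T}{\bf X}={\bf D_{1}}^{T}$ and transposing is the NumPy analogue of matlab's `D1/A`.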
In fact, the accuracy for local RBFs is dictated by the stencil used (the situation is similar for finite differences). The literature for RBFs is currently leaning towards local RBFs, since global RBFs produce dense, ill conditioned matrices. This drastically limits the scalability of global RBFs. In this section we will examine the simplest of local RBFs; more advanced local RBFs can be found in Fornberg. An RBF–FD stencil of size $m$ requires the $m-1$ nearest neighbors (see Figure 3). The local RBF interpolant takes the form $$I_{m}g(\vec{x})=\sum_{k\in{\bf\mathcal{I}}_{i}}{\vec{\alpha}_{k}\phi(\lVert\vec{x}-\vec{x}_{k}\rVert_{2})},$$ where $\vec{x}$ contains the $N$ RBF centers, and ${\bf\mathcal{I}}_{i}$ is a set associated with RBF center $i$ whose elements are RBF center $i$’s $m-1$ nearest neighbors. The vector $\vec{\alpha}$ contains the unknown weights for the linear combination of RBFs. These weights can be calculated by enforcing $$I_{m}g(\vec{x}_{k})=g(\vec{x}_{k}),\text{ for each }k\in{\bf\mathcal{I}}_{i}.$$ This results in $N$ linear systems of size $m\times m$, ${\bf B}\vec{\alpha}=\vec{g}$, where $\vec{g}=g|_{{\bf\mathcal{I}}_{i}}$ contains the function values on the stencil. The entries of ${\bf B}$ have a familiar form $${\bf B}_{jk}=\phi(\lVert\vec{x}_{j}-\vec{x}_{k}\rVert_{2}),~{}~{}~{}j,k\in{\bf\mathcal{I}}_{i}.$$ By selecting global infinitely differentiable RBFs (GA, MQ, IMQ, etc.), the matrix ${\bf B}$ is guaranteed to be nonsingular. This implies that the coefficients on each stencil are uniquely defined. Local RBF derivatives are formed by evaluating $$\mathcal{L}g(\vec{x})=\sum_{j=1}^{N}{\vec{\alpha}_{j}\mathcal{L}\phi(\lVert\vec{x}-\vec{x}_{j}\rVert_{2})},$$ at an RBF center where the stencil is based, where $\mathcal{L}$ is a linear operator. 
This equation can be simplified to $$\mathcal{L}g(\vec{x}_{i})=\vec{h}^{T}\vec{\alpha},$$ where $\vec{h}$ is an $m\times 1$ vector and $\vec{\alpha}$ contains the RBF linear combination weights for the centers in $\mathcal{I}_{i}$. The components of $\vec{h}$ are $$(\vec{h})_{k}=\mathcal{L}\phi(\lVert\vec{x}_{i}-\vec{x}_{k}\rVert_{2}),\qquad k\in{\mathcal{I}}_{i}.$$ It then follows that $$\mathcal{L}g(\vec{x}_{i})=(\vec{h}^{T}{\bf B}^{-1})g\big|_{{\mathcal{I}}_{i}}=\vec{w}_{i}\,g\big|_{{\mathcal{I}}_{i}},$$ where $\vec{w}_{i}=\vec{h}^{T}{\bf B}^{-1}$ holds the stencil weights at RBF center $i$ ($\vec{w}_{i}$ multiplied by the function values provides the derivative approximation). The $i$th row of the $m$th order local RBF differentiation matrix is then given by $({\bf W_{m}})_{i}=[\vec{w}_{i}]$. Figure 4 illustrates the sparsity patterns and the relationship between accuracy and stencil size. The test problem is the Runge function $1/(1+25x^{2})$ on the interval $[-1,1]$ (100 equally spaced points, and a blanket shape parameter of 5). For higher dimensional problems RBF–FD has been shown to be a viable alternative; for examples see Fornberg , bollig2012solution , and Flyer . In this article we only consider the one–dimensional Serre Green-Naghdi equations, and for this particular one–dimensional application, local differentiation provides results comparable to the global case. 3 Numerical Time Stepping Stability The spatial dimensions are to be discretized by RBFs. To achieve a complete discretization the well known method of lines will be employed. Details of this technique can be found in schiesser1991numerical . A rule of thumb for stability of the method of lines is that the eigenvalues of the linearized spatial discretization operator, scaled by $\Delta t$, should be contained in the stability region of the ODE solver invoked (see reddy1992stability ).
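This rule of thumb is easy to check numerically. The sketch below (Python/NumPy with Gaussian RBFs; the values of $N$, the shape parameter, and $\Delta t$ are illustrative choices, not taken from this paper) builds a global first-derivative differentiation matrix and inspects its scaled spectrum:

```python
import numpy as np

# Assemble a global Gaussian-RBF first-derivative matrix and examine its
# eigenvalues scaled by a candidate time step, following the method-of-lines
# rule of thumb. N, c, and dt are illustrative values.
N, c, dt = 24, 6.0, 1e-4
x = np.linspace(-1.0, 1.0, N)
r = x[:, None] - x[None, :]
A = np.exp(-(c * r) ** 2)                      # interpolation matrix
D1 = -2.0 * c**2 * r * np.exp(-(c * r) ** 2)   # derivative evaluation matrix
Dx = np.linalg.lstsq(A.T, D1.T, rcond=None)[0].T

lam = dt * np.linalg.eigvals(Dx)               # scaled eigenvalues
radius = np.max(np.abs(lam))
# Stability asks whether these scaled eigenvalues lie inside the ODE
# solver's stability region; for RBF operators their locations are
# irregular and shift nontrivially with the shape parameter and N.
```

In practice one would overlay `lam` on the stability region of the chosen explicit solver and shrink `dt` (or add hyperviscosity, as discussed below) until all scaled eigenvalues fall inside it.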
The Serre Green-Naghdi equations are nonlinear, so for spectral methods it is more convenient to use an explicit ODE solver. In this situation we would like the scaled eigenvalues to lie in the left half of the complex plane. Achieving stability in the method of lines discretization is still an active research topic. The eigenvalue locations of RBF differentiation matrices are irregular: altering $\epsilon$ or $N$ can cause the locations to shift nontrivially. For hyperbolic PDEs an answer to this difficulty has been found in the concept of hyperviscosity. Adding artificial viscosity has been shown in many cases to stabilize the numerical time stepping for hyperbolic PDEs. For instance, in bollig2012solution hyperviscosity is used for the two–dimensional shallow water equations (with an RBF spatial discretization). Also, in fornberg2011stabilization hyperviscosity is applied to convective PDEs in an RBF setting. 4 Discretization In this section we discretize the fully nonlinear 1D Serre Green-Naghdi (SGN) equations. This nonlinear hyperbolic PDE system is given by $$\displaystyle h_{t}+(uh)_{x}$$ $$\displaystyle=0$$ (7) $$\displaystyle u_{t}+(0.5u^{2}+gh)_{x}$$ $$\displaystyle=\beta h^{-1}[h^{3}(u_{xt}+uu_{xx}-u_{x}^{2})]_{x},$$ (8) where $g$ is the acceleration due to gravity, and $\beta=1/3$. For spectral methods it is easier (equation (8) creates difficulties for RBF spectral methods due to the term $u_{xt}$ on the right hand side)
to work with the following equivalent system (see Dutykh2 ) $$\displaystyle\eta_{t}+[u(d+\eta)]_{x}$$ $$\displaystyle=0$$ (9) $$\displaystyle q_{t}+[qu-0.5u^{2}+g\eta-0.5(d+\eta)^{2}u_{x}^{2}]_{x}$$ $$\displaystyle=0,$$ (10) $$\displaystyle q-u+\beta(d+\eta)^{2}u_{xx}+(d+\eta)\eta_{x}u_{x}$$ $$\displaystyle=0,$$ (11) where $\eta=\eta(x,t)$ is the free surface elevation ($h(x,t)=d+\eta(x,t)$, and we will assume $d$ is constant), $u=u(x,t)$ is the depth-averaged velocity, and $q=q(x,t)$ is a conserved quantity of the form $q(x,t)=uh-\beta[h^{3}u_{x}]_{x}$. Equations (9) and (10) contain time derivatives, while equation (11) does not. Hence, the numerical strategy will be to evolve equations (9) and (10) in time, and at each time step the elliptic PDE (11) will be approximated. The 1D SGN equations admit exact solitary wave solutions: $$\displaystyle\eta(x,t)$$ $$\displaystyle=a\,\text{sech}^{2}(0.5\kappa(x-ct)),$$ (12) $$\displaystyle u(x,t)$$ $$\displaystyle=\frac{c\eta}{d+\eta},$$ (13) where $c=\sqrt{g(d+a)}$ is the wave speed, $a$ is the wave amplitude, and $(\kappa d)^{2}=a/(\beta(d+a))$. See Figure 6 for a visualization of the relevant parameters. 4.1 Global RBF spectral method We take a radial basis function approach. To begin the collocation, partition the spatial domain (an interval in the 1D SGN case) as $x_{1},x_{2},\ldots,x_{N}$, and suppose that $N$ centers $y_{1},y_{2},\ldots,y_{N}$ have been selected (for simplicity we take the centers to agree with the spatial domain partition). Then the RBF interpolation and differentiation matrices need to be constructed.
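The exact solitary wave (12)–(13) above, used later for initialization and error measurement, can be coded directly. The following Python sketch is an analogue of the anonymous functions in Appendix A; the default parameter values are illustrative.

```python
import numpy as np

def sgn_soliton(x, t, g=9.81, d=0.5, a=0.025, beta=1.0 / 3.0):
    """Exact solitary wave of the 1D SGN equations, equations (12)-(13)."""
    c = np.sqrt(g * (d + a))                    # wave speed c = sqrt(g (d + a))
    kappa = np.sqrt(a / (beta * (d + a))) / d   # from (kappa d)^2 = a / (beta (d + a))
    eta = a / np.cosh(0.5 * kappa * (x - c * t)) ** 2   # a sech^2(0.5 kappa (x - c t))
    u = c * eta / (d + eta)                     # depth-averaged velocity
    return eta, u

x = np.linspace(-20.0, 20.0, 401)
eta0, u0 = sgn_soliton(x, 0.0)
# The crest travels at speed c, so a shifted evaluation reproduces eta0.
eta1, u1 = sgn_soliton(x + np.sqrt(9.81 * 0.525) * 2.0, 2.0)
```

Since $\eta(x,t)=\eta(x-ct,0)$, this translation property gives a quick sanity check on any implementation before it is fed into the solver.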
The RBF interpolation matrix $\bf A$ has entries $${\bf A}_{ij}=\phi(\lVert x_{i}-y_{j}\rVert_{2}).$$ The first- and second-order RBF evaluation matrices are given by $${\bf D_{1}}_{ij}=\frac{\partial}{\partial x_{i}}\phi(\lVert x_{i}-y_{j}\rVert_{2}),\qquad{\bf D_{2}}_{ij}=\frac{\partial^{2}}{\partial x_{i}^{2}}\phi(\lVert x_{i}-y_{j}\rVert_{2}),$$ for $i,j=1,\ldots,N.$ Then the first and second RBF differentiation matrices, denoted by $\bf D_{x}$ and $\bf D_{xx}$ respectively, are defined as ${\bf D_{x}}={\bf D_{1}}{\bf A}^{-1}$ and ${\bf D_{xx}}={\bf D_{2}}{\bf A}^{-1}$. Let the variables $\vec{x}$, ${\vec{\eta}}$, ${\vec{q}}$, and ${\vec{u}}$ be given by $${\vec{x}}=\begin{bmatrix}x_{1}\\ x_{2}\\ \vdots\\ x_{N}\end{bmatrix},~{}~{}~{}{\vec{\eta}}=\begin{bmatrix}\eta(x_{1},t)\\ \eta(x_{2},t)\\ \vdots\\ \eta(x_{N},t)\end{bmatrix},~{}~{}~{}{\vec{q}}=\begin{bmatrix}q(x_{1},t)\\ q(x_{2},t)\\ \vdots\\ q(x_{N},t)\end{bmatrix},~{}~{}~{}\text{and}~{}~{}~{}{\vec{u}}=\begin{bmatrix}u(x_{1},t)\\ u(x_{2},t)\\ \vdots\\ u(x_{N},t)\end{bmatrix}.$$ In addition, let $\vec{d}\in\mathbb{R}^{N}$ be a vector with $d$ in each component, where $d$ is the constant still water depth.
Given $\vec{p}\in\mathbb{R}^{N}$, let the function $\text{diag}:\mathbb{R}^{N}\to\mathbb{R}^{N\times N}$ be defined as $$(\text{diag}\{\vec{p}\})_{ij}=\begin{cases}\vec{p}_{i},\text{ if }i=j\\ 0,\text{ otherwise},\end{cases}\qquad\text{for }i,j=1,2,\ldots,N.$$ Equations (9), (10), and (11) can be expressed as a semidiscrete system $$\displaystyle\vec{\eta}_{t}+{\bf D_{x}}[\vec{u}\otimes(\vec{d}+\vec{\eta})]$$ $$\displaystyle=\vec{0}$$ (14) $$\displaystyle\vec{q}_{t}+{\bf D_{x}}[\vec{q}\otimes\vec{u}-0.5(\vec{u})^{2}+g\vec{\eta}-0.5\text{diag}\{(\vec{d}+\vec{\eta})^{2}\}({\bf D_{x}}\vec{u})^{2}]$$ $$\displaystyle=\vec{0},$$ (15) $$\displaystyle\vec{q}-\vec{u}+\text{diag}\{\beta(\vec{d}+\vec{\eta})^{2}\}({\bf D_{xx}}\vec{u})+\text{diag}\{(\vec{d}+\vec{\eta})\otimes({\bf D_{x}}\vec{\eta})\}({\bf D_{x}}\vec{u})$$ $$\displaystyle=\vec{0}.$$ (16) The $\otimes$ operator is element–wise multiplication. Also, in equations (15) and (16) the square operator on vectors is element–wise. The method of lines can be employed from here to fully discretize the 1D Serre Green-Naghdi equations. Equations (14) and (15) are of evolution type, and equation (16) can be treated like an elliptic PDE in the variable $\vec{u}$. To initialize $\eta$ and $u$, equations (12) and (13) are used. The differentiation matrix ${\mathbf{D_{x}}}$ is used to initialize $q(x,t)=uh-\beta[h^{3}u_{x}]_{x}$. The sample code given in Appendix A uses MATLAB's ode113 (a variable order Adams–Bashforth–Moulton PECE solver). We take full advantage of the error tolerance specifications MATLAB's ODE solvers provide, in an attempt to match the accuracy of the temporal discretization to that of the spatial discretization. Below is a high level algorithm for the implementation of the RBF spectral method for 1D SGN. • Step 1: Select a RBF, RBF centers, collocation points, and a time step. • Step 2: Construct the required RBF interpolation and differentiation matrices outside of the main time stepping loop.
• Step 3: Use an ODE solver to advance the coupled evolution equations (14) and (15) to time level $t_{n+1}$. • Step 4: Solve the linear system in equation (16) at time level $t_{n+1}$, which updates the variable $u$ to time level $t_{n+1}$. • Step 5: Repeat steps 3 and 4 until the final time is reached. Algorithm (RBF spectral method) for 1D SGN 4.2 Test Cases for the 1D Serre Green-Naghdi equations For the fully non–linear 1D Serre Green-Naghdi equations we examine three test cases. Two come from Bonneton et al. Bonneton , and the other from Dutykh et al. Dutykh2 . Kim Kim and Bonneton et al. Bonneton present studies of finite volume operator splitting methods applied to the 1D SGN equations. For the test cases examined in both Kim and Bonneton , a second order convergence rate is observed with a Strang splitting. Figure 9 confirms a near–spectral convergence rate for the same test cases described in Bonneton (the spatial domains in table 2 are slightly larger; also, since the solution decays rapidly for large $x$, zero flux boundary conditions are imposed). The values of $N$ are much more moderate in the GA–RBF method. For instance, the largest $\Delta x$ used in Bonneton is $\Delta x=1/256$; for a domain of $[-15,15]$, this corresponds to 7681 evenly spaced grid points. The RBF spectral method uses dense, highly ill-conditioned matrices, so a grid spacing of that magnitude is not tractable. In Figure 9 one can see that the relative error is near machine precision for $N=500$, which corresponds to $\Delta x=0.0601$ on a domain of length 30. As far as the author is aware, these results for the RBF pseudospectral method applied to the 1D SGN equations are new. In Dutykh2 a Fourier pseudospectral method is applied to the 1D SGN equations. Figures 9 and 7 demonstrate that the global RBF approximation is indeed a spectral method.
It is clear that spectral convergence of the spatial error occurs: as the number of grid points increases linearly, the error decays exponentially, $\mathcal{O}(C^{-N})$ for some $C>1$. The errors measured below are for the function $\eta(x,t)$ (the free surface elevation). In Figure 8, a head-on collision of two solitons is simulated. 5 Conclusion We presented an RBF spectral method for the fully nonlinear one–dimensional Serre Green-Naghdi equations. The numerical method investigated used explicit time stepping and an RBF discretization in the spatial dimensions. Spectral accuracy in the spatial dimension was observed for the test cases of Bonneton et al. Bonneton and Dutykh et al. Dutykh2 . The accuracy is much higher than that of more robust numerical approaches based on finite volumes (which only have a second order convergence rate). Further work includes investigating more efficient local RBFs, and possible extensions to two dimensions. Acknowledgments I would like to acknowledge useful discussions concerning this work with Professor Randall J. LeVeque of the University of Washington.
Appendix A

function RBF_SGN()
clear all; close all; clc; format short
n = 200; Tfin = 3; tspan = [0 Tfin];
L = 50; x = linspace(-L,L,n)'; Nx = length(x);
cx = (2)*ones(Nx,1);
[Ax,D1x,D2x] = deal(zeros(Nx));
for j = 1 : Nx
    [Ax(:,j),D1x(:,j),D2x(:,j)] = gau(x,x(j),cx(j));
end
D1x = D1x / Ax; D2x = D2x / Ax;
D1x(1,:) = zeros(size(D1x(1,:))); D1x(end,:) = zeros(size(D1x(1,:)));
D2x(1,:) = zeros(size(D1x(1,:))); D2x(end,:) = zeros(size(D1x(1,:)));
d = 0.5; g = (1/(0.45*sqrt(d)))^2; a = 0.025; BETA = 1.0 / 3.0;
c = sqrt(g*(d+a)); kappa = sqrt(3*a)/(d*sqrt(a+d));
eta = @(x,t) a * sech( 0.5*kappa*(x - c * t)).^2;
u = @(x,t) c*eta(x,t) ./ (d + eta(x,t));
q = @(x,t) u(x,t) - (d+eta(x,t)).*((BETA)*(d+eta(x,t)).*(D2x*(u(x,t))) ...
    + (D1x*(eta(x,t))).*(D1x*(u(x,t))));
Q = q(x,0); ETA = eta(x,0.0); init = [ETA; Q];
options = odeset('RelTol',2.3e-14,'AbsTol',eps);
tic
[t,w] = ode113(@(t,q) RHS(t,q,0,D1x,D2x,g,d,BETA), tspan,init,options);
toc
ETA = w(end,1:Nx)'; Q = w(end,Nx+1:end)';
L1 = BETA*diag((d+ETA).^2)*D2x+diag((d+ETA).*(D1x*eta(x,Tfin)))*D1x-eye(n);
U = L1 \ (-Q);
Exact_Error_ETA = norm( ETA - eta(x,Tfin) , inf)
Relative_Error_ETA = norm( ETA - eta(x,Tfin) , inf)/norm( eta(x,Tfin) , inf)
Exact_Error_U = norm( U - u(x,Tfin) , inf)
Relative_Error_U = norm( U - u(x,Tfin) , inf)/norm( u(x,Tfin) , inf)
Exact_Error_Q = norm( Q - q(x,Tfin) , inf)
Relative_Error_Q = norm( Q - q(x,Tfin) , inf)/norm( q(x,Tfin) , inf)
plot(x,eta(x,0.0),'g',x,eta(x,Tfin),'b',x,ETA,'r.')
xlabel('x'), ylabel('\eta')
legend('\eta(x,0)','\eta(x,3)','RBF approximation','Location','best')
end

function [phi,phi1,phi2] = gau(x,xc,c)
f = @(r,c) exp(-(c*r).^2);
r = x - xc; phi = f(r,c);
if nargout > 1
    phi1 = -2*r*c^2.*exp(-(c*r).^2);
    if nargout > 2
        phi2 = 2*c^2*exp(-c^2*r.^2).*(2*c^2*r.^2 - 1);
    end
end
end

function f = RHS(t,q,dummy,D1x,D2x,g,d,BETA)
n = length(D1x); I = eye(n);
ETA = q(1:n); ETA_x = D1x*ETA; Q = q(n+1 : 2*n);
L = ( BETA*diag((d+ETA).^2)*D2x + diag((d+ETA).*ETA_x)*D1x - I );
U = L \ (-Q);
rhs1 = -D1x*((d+ETA).*U);
rhs2 = -D1x*(Q.*U - 0.5*(U).^2 + g*ETA - 0.5*(d+ETA).^2.*((D1x*U).^2));
f = [rhs1; rhs2];
end

References (1) Bollig, E.F., Flyer, N., Erlebacher, G.: Solution to PDEs using radial basis function finite-differences (RBF-FD) on multiple GPUs. Journal of Computational Physics 231(21), 7133–7151 (2012) (2) Bonneton, P., Chazel, F., Lannes, D., Marche, F., Tissier, M.: A splitting approach for the fully nonlinear and weakly dispersive Green-Naghdi model. Journal of Computational Physics 230, 1479–1498 (2011). DOI 10.1016/j.jcp.2010.11.015 (3) Driscoll, T.A., Fornberg, B.: Interpolation in the limit of increasingly flat radial basis functions. Computers & Mathematics with Applications 43(3), 413–422 (2002) (4) Dutykh, D., Clamond, D., Milewski, P., Mitsotakis, D.: Finite volume and pseudo-spectral schemes for the fully nonlinear 1D Serre equations (2011) (5) Fasshauer, G.E.: Meshfree Approximation Methods with MATLAB. Interdisciplinary Mathematical Sciences. World Scientific (2007) (6) Flyer, N., Lehto, E., Blaise, S., Wright, G.B., St-Cyr, A.: A guide to RBF-generated finite differences for nonlinear transport: Shallow water simulations on a sphere. Journal of Computational Physics 231(11), 4078–4095 (2012) (7) Fornberg, B., Flyer, N.: Accuracy of radial basis function interpolation and derivative approximations on 1-D infinite grids (8) Fornberg, B., Larsson, E., Flyer, N.: Stable computations with Gaussian radial basis functions. SIAM Journal on Scientific Computing 33(2), 869–892 (2011) (9) Fornberg, B., Lehto, E.: Stabilization of RBF-generated finite difference methods for convective PDEs. Journal of Computational Physics 230(6), 2270–2285 (2011) (10) Fornberg, B., Wright, G., Larsson, E.: Some observations regarding interpolants in the limit of flat radial basis functions. Computers & Mathematics with Applications 47(1), 37–55 (2004) (11) Kim, J.: Finite volume methods for tsunamis generated by submarine landslides.
PhD thesis, University of Washington (2014) (12) Micchelli, C.A.: Interpolation of scattered data: Distance matrices and conditionally positive definite functions. Constructive Approximation 2(1), 11–22 (1986) (13) Reddy, S.C., Trefethen, L.N.: Stability of the method of lines. Numerische Mathematik 62(1), 235–267 (1992) (14) Sarra, S.A.: Multiquadric radial basis function approximation methods for the numerical solution of partial differential equations (2009) (15) Schiesser, W.E.: The Numerical Method of Lines: Integration of Partial Differential Equations. Academic Press (1991) (16) Wright, G., Fornberg, B.: Scattered node compact finite difference-type formulas generated from radial basis functions, pp. 1391–1395 (2006)
First Measurement of the Neutral Current Excitation of the $\Delta$ Resonance on a Proton Target D. Androić D. S. Armstrong J. Arvieux (deceased) S. L. Bailey D. H. Beck E. J. Beise J. Benesch F. Benmokhtar L. Bimbot J. Birchall P. Bosted H. Breuer C. L. Capuano [email protected] Y.-C. Chao A. Coppens C. A. Davis C. Ellis G. Flores G. Franklin C. Furget D. Gaskell J. Grames M. T. W. Gericke G. Guillard J. Hansknecht T. Horn M. K. Jones P. M. King W. Korsch S. Kox L. Lee J. Liu A. Lung J. Mammei J. W. Martin R. D. McKeown A. Micherdzinska M. Mihovilovic H. Mkrtchyan M. Muether S. A. Page V. Papavassiliou S. F. Pate S. K. Phillips P. Pillot M. L. Pitt M. Poelker B. Quinn W. D. Ramsay J.-S. Real J. Roche P. Roos J. Schaub T. Seva N. Simicevic G. R. Smith D. T. Spayde M. Stutzman R. Suleiman V. Tadevosyan W. T. H. van Oers M. Versteegen E. Voutier W. Vulcan S. P. Wells S. E. Williamson S. A. Wood Department of Physics, University of Zagreb, Zagreb HR-41001 Croatia Department of Physics, College of William and Mary, Williamsburg, VA 23187 USA Institut de Physique Nucléaire d’Orsay, Université Paris-Sud, F-91406 Orsay Cedex FRANCE Loomis Laboratory of Physics, University of Illinois, Urbana, IL 61801 USA Department of Physics, University of Maryland, College Park, MD 20742 USA Thomas Jefferson National Accelerator Facility, Newport News, VA 23606 USA Department of Physics, Carnegie Mellon University, Pittsburgh, PA 15213 USA Department of Physics, University of Manitoba, Winnipeg, MB R3T 2N2 CANADA TRIUMF, Vancouver, BC V6T 2A3 CANADA Department of Physics, New Mexico State University, Las Cruces, NM 88003 USA LPSC, Université Joseph Fourier Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Grenoble, FRANCE Department of Physics and Astronomy, Ohio University, Athens, OH 45701 USA Department of Physics and Astronomy, University of Kentucky, Lexington, KY 40506 USA Kellogg Radiation Laboratory, California Institute of Technology, Pasadena, CA 91125 USA
Department of Physics, Virginia Tech, Blacksburg, VA 24061 USA Department of Physics, University of Winnipeg, Winnipeg, MB R3B 2E9 CANADA Department of Physics, The George Washington University, Washington, DC 20052 USA Jožef Stefan Institute, 1000 Ljubljana, SLOVENIA Yerevan Physics Institute, Yerevan 375036 ARMENIA Department of Physics, Louisiana Tech University, Ruston, LA 71272 USA Department of Physics, Hendrix College, Conway, AR 72032 USA (January 17, 2021) Abstract The parity-violating asymmetry arising from inelastic electron-nucleon scattering at backward angle ($\sim 95^{\circ}$) near the $\Delta(1232)$ resonance has been measured using a hydrogen target. From this asymmetry, we extracted the axial transition form factor $G^{A}_{N\Delta}$, a function of the axial Adler form factors $C^{A}_{i}$. Though $G^{A}_{N\Delta}$ has been previously studied using charged current reactions, this is the first measurement of the weak neutral current excitation of the $\Delta$ using a proton target. For $Q^{2}$ = 0.34 (GeV/c)${}^{2}$ and $W$ = 1.18 GeV, the asymmetry was measured to be $-33.4\pm(5.3)_{stat}\pm(5.1)_{sys}$ ppm. The value of $G^{A}_{N\Delta}$ determined from the hydrogen asymmetry was $-0.05\pm(0.35)_{stat}\pm(0.34)_{sys}\pm(0.06)_{theory}$. These findings agree within errors with theoretical predictions for both the total asymmetry and the form factor. In addition to the hydrogen measurement, the asymmetry was measured at the same kinematics using a deuterium target. The asymmetry for deuterium was determined to be $-43.6\pm(14.6)_{stat}\pm(6.2)_{sys}$ ppm. keywords: electroproduction, Delta, axial, form factor, inelastic, asymmetry journal: Physics Letters B 1 Introduction A measurement of the parity-violating (PV) asymmetry in inelastic electron-nucleon scattering near the $\Delta$ resonance has been performed as part of the $G^{0}$ experiment Wells et al. (2001).
The study of the PV asymmetry in $\Delta$ electroproduction using a neutral weak probe was first considered by Cahn and Gilman Cahn and Gilman (1978) as a test of the Standard Model, due to the strong dependence of the predicted asymmetry on the weak mixing angle, $\theta_{W}$. However, it is now clear that uncertainties in the theoretical predictions of the asymmetry arising from hadron structure and radiative corrections prevent a precision Standard Model test using this reaction. Instead, the inelastic asymmetry can be used to study other physics topics, including the axial $N\rightarrow\Delta$ transition. In this paper we report the first measurement of this neutral current excitation of the $\Delta$ with hydrogen and deuterium targets. The axial transition form factor, $G^{A}_{N\Delta}$, a linear combination of the axial Adler form factors, was determined from the results on hydrogen. $\Delta$ production occurs in both the charged current (CC) and neutral current (NC) channels of the weak interaction. In the constituent-quark model for the nucleon, two of the quark spins are aligned while the third is anti-aligned. The $\Delta$ results from interactions that flip the spin of the anti-aligned quark, leading to a total $\Delta$ spin of $J=\frac{3}{2}$. In CC reactions, there is a quark flavor change in addition to the spin flip. Theoretical predictions for the PV asymmetry rely on isospin symmetry and the assumption that the CC form factors will not differ significantly from their NC analogues. A precise measurement of the inelastic PV asymmetry could confirm that these assumptions are valid. The axial vector response in the $N\rightarrow\Delta$ transition has been studied theoretically using chiral constituent quark models Barquilla-Cano et al. (2007), chiral perturbation theory Procura (2008); Geng et al. (2008), light cone QCD sum rules Aliev et al. (2008), and lattice gauge theory Alexandrou et al. (2007a, b, 2011). 
Available data are sparse, however, and the only data from nucleon targets come from CC neutrino experiments using bubble chambers Kitagaki et al. (1990); Radecky et al. (1982); Barish et al. (1979). This subject is also topical because lack of knowledge of $G^{A}_{N\Delta}$ is one of the dominant uncertainties in understanding neutral current pion production Alvarez-Ruso et al. (2007), an important background in $\nu_{e}$ appearance neutrino oscillation experiments. 2 Theoretical Background In the scattering of longitudinally polarized electrons, the interference between $\gamma$ and $Z^{0}$ exchange amplitudes leads to a parity-violating dependence of the cross section on the electron helicity. One can form an experimental asymmetry $$A=\frac{d\sigma_{R}-d\sigma_{L}}{d\sigma_{R}+d\sigma_{L}}~{},$$ (1) where $d\sigma_{R}$ and $d\sigma_{L}$ are the differential cross sections for scattering of right- and left-handed electrons, respectively. A measurement of this asymmetry then allows study of the neutral-current contribution to the reaction. The PV asymmetry in electron-proton scattering near the $\Delta$ resonance can be written Musolf et al. (1994) $$\displaystyle A_{inel}$$ $$\displaystyle=\frac{1}{2}A^{0}\left[\Delta^{\pi}_{(1)}+\Delta^{\pi}_{(2)}+% \Delta^{\pi}_{(3)}\right]$$ (2) $$\displaystyle=A_{1}+A_{2}+A_{3},$$ where the use of the $\pi$ superscript indicates single pion production and $A^{0}$ is defined as $$A^{0}=-\frac{G_{F}Q^{2}}{2\pi\alpha\sqrt{2}}~{},$$ (3) where $G_{F}$ is the Fermi constant, $\alpha$ is the fine structure constant, and $Q^{2}$ is the negative four-momentum transfer squared ($Q^{2}=-q^{2}$). $A^{0}$ evaluates to $-61$ ppm at the experimental kinematics (see Section 3). The three $\Delta^{\pi}_{(i)}$ terms represent a decomposition of the hadronic response into resonant vector, non-resonant vector, and axial vector contributions, respectively. 
Here, “vector” and “axial vector” refer to the Lorentz structure of the hadronic current in the scattering. In the discussion that follows, we present our formalism for $A_{inel}$ which combines formalisms available in the literature to develop our method for the determination of the axial response. The present formalism was primarily derived from the work in Refs Musolf et al. (1994), Nath et al. (1982) and Zhu et al. (2001). The resonant vector hadron term, $\Delta_{(1)}^{\pi}$, is the dominant term in the asymmetry and the only one that is not dependent on hadronic structure. It depends only on the weak mixing angle, $\theta_{W}$. $\Delta_{(1)}^{\pi}$ contains the full contribution of the resonant vector current at the hadronic vertex to the asymmetry and can be written Musolf et al. (1994) $$\displaystyle\Delta_{(1)}^{\pi}$$ $$\displaystyle=g^{e}_{A}\xi_{V}^{T=1}$$ $$\displaystyle=2(1-2\sin^{2}\theta_{W})~{},$$ (4) where $g^{e}_{A}$ is the axial vector coupling of the electron to the $Z$ boson, which is equal to 1 in the Standard Model, and $\xi_{V}^{T=1}$ is the isovector hadron coupling to the vector $Z$, which is $2(1-2\sin^{2}\theta_{W})$ in the Standard Model. Using the world value $\sin^{2}\theta_{W}$ = 0.2353 $\pm$ 0.00013 K. Nakamura, et al. (Particle Data Group) (2010), $\Delta_{(1)}^{\pi}$ is computed to be 1.06, leading to an $A_{1}=-32.2$ ppm contribution to the asymmetry. Two different formalisms were applied to the terms $\Delta^{\pi}_{(2)}$ and $\Delta^{\pi}_{(3)}$. The first is a phenomenological approach, outlined by Musolf et al. Musolf et al. (1994). The second uses the dynamical model of Matsui, Sato, and Lee Matsui et al. (2005), and is described later. In the phenomenological approach, the non-resonant vector term, $\Delta^{\pi}_{(2)}$, is written as a sum of longitudinal, transverse magnetic, and transverse electric multipoles using an isospin decomposition Musolf et al. (1994). 
We computed the sum through $l=2$ using multipoles from MAID2007 D. Drechsel, S.S. Kamalov, L. Tiator (1999). Further details on our implementation are available elsewhere Capuano (2011). The theoretical value of $\Delta^{\pi}_{(2)}$ was determined to be 0.018, leading to an asymmetry of $A_{2}=-0.55$ ppm. An uncertainty of $\sigma_{2}^{th}$ = 0.72 ppm is applied to account for estimates made in the model. The axial term, $\Delta^{\pi}_{(3)}$, can be written in terms of $F(Q^{2},s)$, which contains both axial and electromagnetic form factors, $$\displaystyle\Delta_{(3)}^{\pi}$$ $$\displaystyle\approx g^{e}_{V}\xi_{A}^{T=1}F(Q^{2},s)$$ $$\displaystyle\approx 2(1-4\sin^{2}\theta_{W})F(Q^{2},s)~{},$$ (5) where $s$ is the Mandelstam invariant and $g^{e}_{V}$ and $\xi_{A}^{T=1}$ have been replaced with their respective Standard Model tree level values of $g^{e}_{V}$ = $(-1+4\sin^{2}\theta_{W})$ and $\xi_{A}^{T=1}$= $-$2. In Ref. Musolf et al. (1994), $\Delta^{\pi}_{(3)}$ is defined as the total axial contribution, including both resonant and non-resonant terms. Here, we neglect any non-resonant axial contributions (hence the use of “$\approx$” in Equation 5), because theoretical studies Hammer and Drechsel (1995); Mukhopadhyay et al. (1998) indicate that these are small. The function $F(Q^{2},s)$ can be written as a product of two functions of form factors, $$F(Q^{2},s)=\frac{E+E^{\prime}}{M}H^{EM}(Q^{2},\theta)G^{A}_{N\Delta}(Q^{2})~{},$$ (6) where $H^{EM}(Q^{2},\theta)$ and $G^{A}_{N\Delta}(Q^{2})$ are linear combinations of electromagnetic ($C^{\gamma}_{i}$) and axial ($C^{A}_{i}$) form factors, respectively, and $M$ is the nucleon mass. The explicit functional form of $G^{A}_{N\Delta}(Q^{2})$ will be presented below, while $H^{EM}(Q^{2},\theta)$ is described in Ref. Capuano (2011). In order to calculate a theoretical asymmetry, it is necessary to compute the form factors $C^{\gamma}_{i}$ and $C^{A}_{i}$.
One convenient way to express the $Q^{2}$ dependence of the form factors is through the use of dipole forms. In this notation, referred to as the Adler parameterization Adler (1968)Adler (1975), the form factors are expressed as $$\displaystyle C^{\gamma}_{i}(Q^{2})$$ $$\displaystyle=C^{\gamma}_{i}(0)G^{V}_{D}(Q^{2})~{},$$ (7) $$\displaystyle C^{A}_{i}(Q^{2})$$ $$\displaystyle=C^{A}_{i}(0)G^{A}_{D}(Q^{2})\xi^{A}(Q^{2})~{},$$ (8) where the functions $G^{V,A}_{D}(Q^{2})$ are dipole form factors defined as $$\displaystyle G^{V,A}_{D}(Q^{2})=\bigg{[}1+\frac{Q^{2}}{M^{2}_{V,A}}\bigg{]}^{% -2}~{}.$$ (9) The parameters $M_{V,A}$ are the vector (V) and axial (A) dipole masses, which have been determined from fits to existing data. The current world values for these masses are $M_{V}$ = 0.84 GeV Arrington et al. (2011) and $M_{A}$ = 1.03 $\pm$ 0.02 GeV Bernard et al. (2002). The function $\xi^{A}$ is used to give additional structure to the $Q^{2}$ dependence of the axial response and is written $$\xi^{A}(Q^{2})=1+\Bigg{(}\frac{a^{\prime}Q^{2}}{b^{\prime}+Q^{2}}\Bigg{)}~{},$$ (10) with the parameters $a^{\prime}$ and $b^{\prime}$ determined from a fit to CC neutrino data performed by Schreiner and von Hippel Schreiner and von Hippel (1973). For the Adler model, $a^{\prime}$ was found to be $-$1.2 and $b^{\prime}$ was 2 (GeV/c)${}^{2}$. These results hold only for $Q^{2}~{}<$ 0.5 (GeV/c)${}^{2}$, which covers the present experimental kinematics. The values for $C_{i}(0)$ are determined from fits to charged current data and are fit-dependent. In this work, the Adler values of these coefficients, as quoted by Nath Nath et al. (1982), were used. 
They are $$\displaystyle C^{A}_{3}(0)$$ $$\displaystyle=0~{},$$ $$\displaystyle C^{\gamma}_{3}(0)$$ $$\displaystyle=1.85~{},$$ $$\displaystyle C^{A}_{4}(0)$$ $$\displaystyle=-0.35~{},$$ $$\displaystyle C^{\gamma}_{4}(0)$$ $$\displaystyle=-0.89~{},$$ (11) $$\displaystyle C^{A}_{5}(0)$$ $$\displaystyle=1.20~{}.$$ Note that not all the $C_{i}$’s contribute to the asymmetry; $C^{\gamma}_{6}$ vanishes due to CVC and $C^{A}_{6}$ vanishes if we neglect the electron mass. From the coefficients above, we can also neglect $C^{A}_{3}$. Further, the photo- and electroproduction data can be fit using the assumption that $C^{\gamma}_{5}$ = 0 and that $C^{\gamma}_{4}$ = $-\frac{M}{M+M_{\Delta}}C^{\gamma}_{3}$ Jones and Petcov (1980). Thus, only the $i$ = 3,4 terms of the electromagnetic and the i = 4,5 terms of the axial form factors contribute to the asymmetry. The axial transition form factor expressed in terms of the Adler form factors is then $$\displaystyle G^{A}_{N\Delta}(Q^{2})=\frac{1}{2}[M^{2}-M^{2}_{\Delta}+Q^{2}]C^% {A}_{4}(Q^{2})-M^{2}C^{A}_{5}(Q^{2})~{},$$ (12) where $M_{\Delta}$ is the mass of the $\Delta(1232)$. Using this formalism leads to the value $A_{3}=-1.8$ ppm for the resonant axial-vector hadron contribution to the asymmetry. We assigned an uncertainty of $\sigma^{th}_{3}=0.65$ ppm to this value to account for the neglect of non-resonant contributions and uncertainties related to the model and fits used to compute $H^{EM}$. Summing the three individual components leads to a total theoretical asymmetry of $A^{th}=-34.6\pm 1.0$ ppm at the present kinematics. Electroweak radiative corrections (EWRC) have been applied to the theoretical asymmetry. We distinguish between ‘one-quark’ and ‘multi-quark’ corrections, adopting the terminology of Zhu et al. Zhu et al. (2001). One-quark radiative corrections are those in which the electron interacts only with a single quark in the nucleon, and are calculable with sufficient precision. 
In contrast, multi-quark corrections for the axial response are largely model dependent. One-quark EWRC were applied to each $\Delta_{(i)}^{\pi}$ term independently using the corrections reported by the Particle Data Group K. Nakamura, et al. (Particle Data Group) (2010) for the $\overline{MS}$ renormalization scheme. These corrections have been included in the theoretical asymmetries quoted previously. The one-quark EWRC are small ($<1.5\%$) for both of the vector hadron terms. For the axial term, however, these EWRC are large, leading to a 58% reduction in $A_{3}$. Zhu et al. Zhu et al. (2001) have studied the multi-quark axial EWRC and their possible impact on the PV asymmetries. Specifically, they highlighted two effects present at the PV $\gamma N\Delta$ vertex that may contribute to the present measurement. The first effect is an anapole moment analogous to that previously seen in elastic scattering Zhu et al. (2000). The second, referred to as the Siegert term, is of interest because it could lead to a non-zero asymmetry at $Q^{2}$ = 0. In a separate analysis, as part of the $G^{0}$ experiment, we have taken pion photoproduction data on a deuterium target at very low $Q^{2}$ Androić et al. (2012). These data allow us to bound the impact of the Siegert term at the present kinematics to $|A_{Siegert}|\leq$ 0.15 ppm. Given the large overall theoretical uncertainty in these effects, multi-quark EWRC are not applied in the present analysis. The second asymmetry formalism is that of Matsui, Sato, and Lee who have developed a dynamical model of pion electroproduction near the $\Delta$ resonance Matsui et al. (2005) and performed a calculation of the inelastic asymmetry at the present kinematics. As in the phenomenological approach, Matsui et al. write the asymmetry as a sum of resonant vector, non-resonant vector, and axial vector hadron pieces. 
In their notation, $A_{inel}$ is given by $$A_{inel}=\frac{1}{2}A^{0}\left[(2-4\sin^{2}\theta_{W})+\Delta_{V}+\Delta_{A}\right],$$ (13) where $(2-4\sin^{2}\theta_{W})$ is identical to $\Delta^{\pi}_{(1)}$ as defined in Equation 4. The two remaining terms, $\Delta_{V}$ and $\Delta_{A}$, are equivalent to $\Delta^{\pi}_{(2)}$ and $\Delta^{\pi}_{(3)}$, respectively. However, the formalism used to calculate these terms differs from the one presented previously. The differences between these two formalisms are discussed here; more detailed information on the dynamical model is available in Refs. Matsui et al. (2005) and Hemmert et al. (1995). For the term $\Delta_{V}$, Matsui et al. derive an expression in terms of structure functions analogous to that used to determine the resonant form of $\Delta^{\pi}_{(3)}$ as presented in Ref. Nath et al. (1982). This allows them to determine the contribution of the non-resonant vector hadron reactions to the asymmetry using their dynamical model rather than through the use of phenomenological multipoles. For the axial term, the definitions of $\Delta_{A}$ and $\Delta^{\pi}_{(3)}$ in terms of structure functions are the same. Where Matsui et al. differ is in their parameterization of the form factors. As before, a dipole form is used for both the vector and the axial vector form factors. However, the additional $Q^{2}$ parameterization present in the function $\xi^{A}$ takes an exponential form rather than that of Equation 10. Explicitly, $\xi^{A}$ is given by $$\xi^{A}(Q^{2})=(1+aQ^{2})e^{-bQ^{2}},$$ (14) where $a$ = 0.154 (GeV/c)${}^{-2}$ and $b$ = 0.166 (GeV/c)${}^{-2}$ were determined by fits to CC neutrino data. The resulting theoretical values for $A_{2}$ and $A_{3}$ computed with the phenomenological approach and the dynamical model each differ by $<1$ ppm.
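To make the size of this correction factor concrete, the exponential parameterization of Eq. (14) can be evaluated at the average momentum transfer of this measurement, $Q^{2}=0.34$ (GeV/c)$^{2}$. The short Python sketch below is illustrative only.

```python
import math

# Fit parameters quoted after Eq. (14), in (GeV/c)^-2
a, b = 0.154, 0.166

def xi_A(Q2):
    """Exponential Q^2 parameterization of Eq. (14)."""
    return (1.0 + a * Q2) * math.exp(-b * Q2)

# At Q^2 = 0.34 (GeV/c)^2 the factor stays within about 1% of unity.
print(xi_A(0.34))
```

The smallness of the deviation from unity at these kinematics is consistent with the phenomenological and dynamical-model asymmetries agreeing to better than 1 ppm.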
3 Experiment The $G^{0}$ experiment was performed using a beam of longitudinally polarized electrons provided by the accelerator at the Thomas Jefferson National Accelerator Facility. The electrons were scattered from a 20 cm long unpolarized liquid hydrogen or deuterium target. The $G^{0}$ experiment ran in two phases, collecting data at both forward and backward scattering angles. The inelastic measurement was performed in the backward angle configuration. Since the primary concern of the $G^{0}$ collaboration was the elastic electron scattering measurement, the experimental apparatus was optimized for elastic kinematics. For the backward angle measurement, the $G^{0}$ spectrometer consisted of a toroidal magnet, scintillation detectors and aerogel Cherenkov counters. Measurements were taken at beam energies of 687 MeV and 362 MeV, though only the higher of these energies is relevant to the inelastic measurement. Detailed descriptions of each of the components of the $G^{0}$ experimental apparatus, and their use in both configurations, are given elsewhere Androić et al. (2011). Electrons that scattered from the target were bent in a magnetic field and passed through a collimator system before entering the detector system. The collimators defined the experimental acceptance, leading to an effective scattering angle of $\sim 95^{\circ}$ for inelastic events and $\sim 108^{\circ}$ for elastic events. The detector system was segmented into octants arranged symmetrically around the beamline with each detector octant covering one of the eight gaps between adjacent magnet coils. Each detector octant consisted of two sets of plastic scintillators and an aerogel Cherenkov detector. The scintillators, labeled Focal Plane Detectors (FPDs) and Cryostat Exit Detectors (CEDs), were used for a rough tracking of the scattered electron’s path.
This led to a two-dimensional CED$\cdot$FPD detector space which allowed for a kinematic separation between elastically and inelastically scattered electrons. The Cherenkov detector (CER) was used to distinguish between electrons and pions. A DAQ system consisting of specially-designed logic boards counted coincidences of these detectors, with CED$\cdot$FPD$\cdot$CER coincidences counted as electron events and CED$\cdot$FPD$\cdot\mathrm{\overline{CER}}$ counted as pion events. Thus, electrons and pions were counted side-by-side and their rates recorded separately by scalers. The helicity of the beam was flipped at a rate of 30 Hz, resulting in a series of 1/30 s segments of common helicity called macropulses (MPSs). The helicity pattern was generated as a collection of four MPSs, referred to as a quartet. The use of quartets, coupled with the fast helicity reversal, cancels linear drifts that can affect the asymmetry. The sequence of the helicity reversal for each quartet was chosen to be either $+--+$ or $-++-$ depending on a randomly generated initial MPS. The coincidence count for each CED$\cdot$FPD pair was recorded for a single MPS, then normalized to the beam current to create an MPS yield. The asymmetry was then computed for each quartet by combining the detector yields for the positive and negative helicity MPSs according to $$A_{qrt}=\frac{\sum Y_{i}^{+}-\sum Y_{i}^{-}}{\sum Y_{i}^{+}+\sum Y_{i}^{-}}.$$ (15) Figure 1 shows the octant-averaged electron yields for the hydrogen data for each CED$\cdot$FPD coincidence cell. Coincidence cells with common kinematic acceptance were grouped into loci. The inelastic and elastic loci are indicated on the figure. The measured asymmetry for a given locus is taken as the average of the asymmetries in each locus cell, weighted by statistics. The average kinematics for the inelastic measurement were determined through the use of a GEANT3 simulation CERN Applications Software Group (1993).
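A minimal sketch of the quartet asymmetry of Eq. (15), summing the current-normalized yields over the two positive- and two negative-helicity MPSs of a quartet. The yield values below are made up for illustration, not experimental data.

```python
def quartet_asymmetry(y_plus, y_minus):
    """A_qrt = (sum Y+ - sum Y-) / (sum Y+ + sum Y-), Eq. (15)."""
    sp, sm = sum(y_plus), sum(y_minus)
    return (sp - sm) / (sp + sm)

y_plus = [100.002, 99.998]    # normalized yields, + helicity MPSs (illustrative)
y_minus = [100.005, 100.003]  # normalized yields, - helicity MPSs (illustrative)
print(quartet_asymmetry(y_plus, y_minus))  # about -2e-05 (-20 ppm) here
```

Because each quartet gives one asymmetry value, slow linear drifts in the yields cancel between the $+--+$ (or $-++-$) halves of the pattern.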
The detector acceptance is defined by the collimators that are part of the spectrometer. For events in the inelastic locus, the range of accepted scattering angles was $85^{\circ}<\theta<105^{\circ}$ and the average $\theta$ = 95${}^{\circ}$. This results in a $Q^{2}$ range of $0.25<Q^{2}<0.5$ (GeV/c)${}^{2}$ with an average of $Q^{2}$ = 0.34 (GeV/c)${}^{2}$. The range of invariant mass, $W$, covers the entire region from the pion threshold to just past the peak of the $\Delta$, with a range of $1.07<W<1.26$ GeV. Most of the events were on the low $W$ side of the peak, leading to an average just below the resonance at $W$ = 1.18 GeV. 4 Data Analysis In order to determine the physics asymmetry, several corrections were applied to the raw asymmetry. An overview of the corrections will be provided in this section, while more detailed descriptions of the data analysis related to beam and instrumentation $G^{0}$ Collaboration (tbd), and the inelastic result Capuano (2011) are available elsewhere. A summary of the corrections applied and their contributions to the systematic uncertainty is given in Table 2. There were three rate-dependent effects that needed to be considered. Corrections were applied to the measured yield to account for two of these: dead time in the detectors and electronics, and randomly triggered Cherenkov events. Typical dead times were $\sim$30 ns. Applying these corrections resulted in an increase in the magnitude of the asymmetry for both targets. For hydrogen, there was a 2 ppm, or 10%, increase, while for the deuterium the change was much larger at 12.6 ppm, or nearly 90% of the uncorrected value. This large shift in the asymmetry for the deuterium data was due to the Cherenkov corrections, which were more significant because of high rates of $\pi^{-}$. Error analysis showed that for each data set the two corrections applied provided a negligible contribution to the uncertainty. 
The third rate-dependent effect, random CED$\cdot$FPD coincidences, was not included in the corrections applied. This effect was instead treated as an uncertainty. For each data set, an upper bound on the residual false asymmetry due to the presence of CED$\cdot$FPD randoms was estimated using information obtained from the elastic electron locus, where rate-correlated effects were more pronounced. The bounds determined from this analysis, 0.16 ppm for hydrogen and 1.2 ppm for deuterium, were assigned as uncertainties for the rate corrections. Corrections were also applied to account for any false asymmetry arising from helicity-correlated beam properties, including the angle, position and energy of the electron beam. Because of the high quality of the beam provided by the Jefferson Lab accelerator, helicity-correlated effects were negligible. As a result, the impact of these effects was also small ($A_{false}<$ 0.3 ppm). A conservative 100% uncertainty was assigned for this correction. The beam polarization was determined using a Møller polarimeter located upstream of the target Hauger et al. (2001). Measurements taken periodically throughout the experimental run found the polarization to be steady at $P=(85.8\pm(0.07)_{stat}\pm(1.4)_{sys})\%$. Though the beam polarization was nominally longitudinal, there was a small transverse component present. However, the symmetrical nature of the $G^{0}$ spectrometer meant this component’s impact on the asymmetry was negligible and no correction was necessary. Instead, using information determined through a measurement of the asymmetry with transverse beam and an estimate of detector misalignment, an upper bound on the false asymmetry was determined for each target and treated as an uncertainty (see Table 2). The most significant correction applied to the inelastic asymmetry was the background correction.
Since the spectrometer was optimized for elastic, not inelastic, scattering, the yield in the inelastic region of the matrix contained a high percentage of background. In order to correct for backgrounds, a procedure was developed Capuano (2011) that made use of both background measurements and simulated yields to determine the fractional contribution, or dilution factor ($f^{bg}_{i}$), of each process. The dilution factors were then used to subtract the background asymmetries ($A^{bg}_{i}$) from the cell average asymmetry ($A_{meas}$) according to $$A_{inel}=\frac{A_{meas}-\sum f^{bg}_{i}A^{bg}_{i}}{1-\sum f^{bg}_{i}}.$$ (16) In any given CED$\cdot$FPD cell, there were up to five major processes contributing to the total yield and average asymmetry: electrons scattered elastically from the target liquid, electrons scattered inelastically from the target liquid, electrons scattered elastically or inelastically from the aluminum windows of the target cell, $\pi^{0}$ decay, and misidentified $\pi^{-}$. Note that the $\pi^{-}$ contribution here differs from that which is removed by the rate corrections, with the contamination resulting both from rate-independent effects causing false Cherenkov triggers and from $\delta$-ray production. The $\pi^{0}$ contribution to the background is due to electrons that are emitted through secondary processes following pion decays, primarily electron-positron pairs from decay photons interacting in the shielding. The yields due to scattering from the target windows and from pion contamination were determined using data from special measurements made during the experimental run. For the three remaining processes, simulation was used to model the individual yield distributions across the detector acceptance while a fitting routine determined the appropriate normalizations.
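The background subtraction of Eq. (16) can be sketched as follows; the dilution factors and background asymmetries below are hypothetical placeholders, not the measured values of Table 1.

```python
# Sketch of the background correction of Eq. (16). All numbers are
# placeholder inputs in ppm, chosen only to exercise the formula.
def correct_background(A_meas, f_bg, A_bg):
    """A_inel = (A_meas - sum f_i * A_i) / (1 - sum f_i), Eq. (16)."""
    assert len(f_bg) == len(A_bg)
    num = A_meas - sum(f * A for f, A in zip(f_bg, A_bg))
    return num / (1.0 - sum(f_bg))

A_meas = -30.0                 # ppm, hypothetical cell-average asymmetry
f_bg = [0.30, 0.15, 0.05]      # elastic tail, target windows, pion decays
A_bg = [-40.0, -25.0, 0.0]     # ppm, hypothetical background asymmetries
print(correct_background(A_meas, f_bg, A_bg))  # -28.5 ppm for these inputs
```

Note how the $1-\sum f^{bg}_{i}$ denominator inflates both the corrected asymmetry and its statistical uncertainty as the background fraction grows, which is why the $\sim$50-65% dilutions quoted below dominate the error budget.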
Before applying the fit, the target window and $\pi^{-}$ yields were subtracted from the total yield leading to a reduced yield, $Y_{R}$, consisting of elastic, inelastic and $\pi^{0}$ decay yields. $Y_{R}$ was then fit as a function of FPD for each CED, or across each row in the yield matrix shown in Fig. 1, by assigning scale factors to the simulated yields and allowing them to vary independently. Since our symmetrical spectrometer implies constant yields across the detector octants, all octants were fit simultaneously to find a single scale factor that represented an octant average for the CED. Once the scale factors were determined, a second fit of scale factor as a function of CED was performed for each process to ensure that the scale factors varied smoothly across the matrix. The yield contribution for a given process in a given cell was then defined as the simulated cell yield multiplied by the fitted scale factor for that cell. Details on the methods used to determine the cell-by-cell yield contribution from each process are given in Capuano (2011). Figure 2 shows the full background separation for a typical CED in a typical octant for the hydrogen data. Measured asymmetries determined from $G^{0}$ data were used for each of the background processes. An asymmetry of 0 $\pm$ 3 ppm, based on our measured $\pi^{-}$ asymmetry and experimental uncertainty, was used for both the $\pi^{0}$ decay and $\pi^{-}$ backgrounds. Our measured elastic electron asymmetry Androić et al. (2010) was used with a scaling, determined through simulation, to account for the fact that the background in the inelastic locus is mainly from the elastic radiative tail, which has a lower average $Q^{2}$ than our elastic data. The scaling was computed cell-by-cell, with those closest to the elastic locus having the smallest scale factors. The locus average scaling was about 30% for hydrogen and over 50% for deuterium. 
For the empty target background, the dominant process was $\Delta$-electroproduction from aluminum. Although we were able to use data to determine the yields, the data did not have sufficient precision for an asymmetry measurement. Instead, the aluminum asymmetry was approximated using the measured deuterium asymmetry. This asymmetry differs slightly from the true aluminum asymmetry due to the differing proton-to-neutron ratio between the two nuclei; however, this difference is small compared to the error in the measurement. Note that it was not necessary to remove the empty target background from the deuterium data since the asymmetries were considered equal. Table 1 summarizes the dilution factors and their corresponding background asymmetries. The total background fraction in the inelastic locus was (52.5 $\pm$ 4)% for the hydrogen data and (64.9 $\pm$ 5)% for the deuterium data. For both targets, the radiative tail of the elastic electrons was the dominant background, contributing 25% of the hydrogen yield and 30% of the deuterium yield. The higher total background fraction in deuterium was due both to the presence of $\pi^{-}$ contamination and to the widening of the elastic peak due to Fermi motion in the deuterium nucleus. Subtraction of the backgrounds results in a $-7.3$ ppm change in the hydrogen asymmetry and a $-12.6$ ppm change in the deuterium asymmetry. The background correction also results in an approximately 5 ppm systematic error due to the uncertainties from both the measured asymmetries and the dilution factors. For the hydrogen asymmetry, two additional corrections were needed before comparing our results to theory. The first of these accounts for electromagnetic radiative effects by using simulation to compute corrections according to the procedure of Mo and Tsai Mo and Tsai (1969) Tsai (1971).
The total correction was determined by comparing the simulated locus-average value of the asymmetry with and without radiative corrections included and was found to be (1.17 $\pm$ 0.6)%. The uncertainty arises from the model chosen for the inelastic cross section. We then accounted for bias due to acceptance by comparing the locus average asymmetry, $\langle A(Q^{2},W)\rangle$, to the asymmetry at the average kinematics, $A(\langle Q^{2}\rangle,\langle W\rangle)$, and found this to be a (-1.6 $\pm$ 0.6)% effect. The uncertainty here was determined by comparing the asymmetries computed with the two formalisms discussed in Section 2. Table 2 summarizes the corrections applied, the systematic uncertainty assigned for each, and their impact on the asymmetry. Backgrounds constituted the largest correction to the asymmetry and the largest contribution to the uncertainty. Note that, in addition to its impact on systematic uncertainty, the background correction led to an increase in the statistical uncertainty as background events were removed. Additionally, since the hydrogen result made use of the deuterium result for the aluminum asymmetry, the systematic uncertainty and the large statistical error present in the deuterium asymmetry were factored in to the systematic uncertainty for hydrogen. The second largest impact was that of the rate corrections, with the larger contribution to the deuterium asymmetry due to generally higher scattering rates and greater contamination caused by high pion rates. The total uncertainty, including both systematic and statistical errors, is 7.4 ppm for the hydrogen asymmetry and 16 ppm for deuterium. 5 Results and Conclusion Upon applying the corrections outlined in the previous section, we arrive at the final measured asymmetries for the hydrogen and deuterium data. 
These asymmetries are $$A_{inel}=-33.4\pm(5.3)_{stat}\pm(5.1)_{sys}\ {\rm ppm},$$ (17) $$A^{D}_{inel}=-43.6\pm(14.6)_{stat}\pm(6.2)_{sys}\ {\rm ppm}.$$ (18) We found the deuterium asymmetry to be consistent with the hydrogen asymmetry within errors, as would be expected. As discussed previously, due to the lack of a model for the deuterium asymmetry, no further information was extracted from $A^{D}_{inel}$. For the hydrogen asymmetry, however, theoretical input can be used to determine the form factor of interest in this measurement. The measured $G^{A}_{N\Delta}$ can be extracted from the asymmetry by first using the theoretical values of $A_{1}$ and $A_{2}$ discussed in Section 2 to isolate the axial response, $A_{3}$. Subtracting off the vector portions of $A_{inel}$ leads to $A_{3}=-0.69\pm(5.3)_{stat}\pm(5.1)_{sys}\pm(0.7)_{th}$ ppm, where the subscript “th” denotes the theoretical uncertainty. The theoretical value for the resonant contribution calculated using the formalism of Musolf et al. is $A_{3}=-1.8$ ppm, while the Matsui et al. formalism leads to $A_{3}=-1.7$ ppm. The measured value for $A_{3}$ is consistent within errors with both of these values. While these results indicate that there are no major deficiencies in the theory, or in the assumption that the non-resonant contribution can be neglected, there is insufficient precision in the present measurement to select one prediction over the other. From this value of $A_{3}$, $G^{A}_{N\Delta}$ was found to be $-0.05\pm(0.35)_{stat}\pm(0.34)_{sys}\pm(0.06)_{th}$. Here the full theoretical uncertainty of 1.0 ppm has been included. The resulting $G^{A}_{N\Delta}$ is consistent, within errors, with the theoretical value of $-0.196$. Figure 3 shows a comparison of the measured hydrogen asymmetry to the theoretical values computed using the definitions in Section 2.
The statistical, systematic and theoretical uncertainties have been summed in quadrature to yield a single total uncertainty. The measured asymmetry, $A_{inel}$ is consistent with the total theoretical asymmetry, $A_{th}$ = $A_{1}+A_{2}+A_{3}$, within errors. In summary, we have measured the parity-violating asymmetry in inelastic electron-proton scattering and found excellent agreement with theoretical expectations based on the formalisms of Musolf et al. Musolf et al. (1994) and Matsui et al. Matsui et al. (2005). In addition, the axial transition form factor, $G^{A}_{N\Delta}$, has been determined to be consistent with the theoretical formalism presented by Musolf et al. Musolf et al. (1994) using the Adler parameterization. This represents the first neutral current measurement of the axial N$\rightarrow\Delta$ response from a nucleon target. Acknowledgements We gratefully acknowledge the strong technical contributions to this experiment from many groups: Caltech, Illinois, LPSC-Grenoble, IPN-Orsay, TRIUMF, and particularly the Accelerator and Hall C groups at Jefferson Lab. CNRS (France), DOE (U.S.), NSERC (Canada), and NSF (U.S.) supported this work in part. We also thank Harry Lee and collaborators for providing their calculations at our kinematics. References Wells et al. (2001) S. P. Wells, N. Simicevic, the $G^{0}$ Collaboration, JLab experiment E01-115, 2001. Cahn and Gilman (1978) R. N. Cahn, F. J. Gilman, Phys. Rev. D 17 (1978) 1313. Barquilla-Cano et al. (2007) D. Barquilla-Cano, A. J. Buchmann, E. Hernández, Phys. Rev. C 75 (2007) 065203. Procura (2008) M. Procura, Phys. Rev. D 78 (2008) 094021. Geng et al. (2008) L. S. Geng, et al., Phys. Rev. D 78 (2008) 014011. Aliev et al. (2008) T. Aliev, K. Azizi, A. Ozpineci, Nuclear Physics A 799 (2008) 105 – 126. Alexandrou et al. (2007a) C. Alexandrou, et al., Phys. Rev. Lett. 98 (2007a) 052003. Alexandrou et al. (2007b) C. Alexandrou, et al., Phys. Rev. D 76 (2007b) 094511. Alexandrou et al. (2011) C. 
Alexandrou, et al., Phys. Rev. D 83 (2011) 014501. Kitagaki et al. (1990) T. Kitagaki, et al., Phys. Rev. D 42 (1990) 1331–1338. Radecky et al. (1982) G. M. Radecky, et al., Phys. Rev. D 25 (1982) 1161–1173. Barish et al. (1979) S. J. Barish, et al., Phys. Rev. D 19 (1979) 2521–2542. Alvarez-Ruso et al. (2007) L. Alvarez-Ruso, L. S. Geng, M. J. V. Vacas, Phys. Rev. C 76 (2007) 068501. Musolf et al. (1994) M. J. Musolf, et al., Phys. Rept. 239 (1994). Nath et al. (1982) L. M. Nath, K. Schilcher, M. Kretzschmar, Phys. Rev. D 25 (1982) 2300. Zhu et al. (2001) S.-L. Zhu, C. M. Maekawa, G. Sacco, B. R. Holstein, M. J. Ramsey-Musolf, Phys. Rev. D 65 (2001) 033001. K. Nakamura, et al. (Particle Data Group) (2010) K. Nakamura, et al. (Particle Data Group), J. Phys. G 37 (2010) 075021. Matsui et al. (2005) K. Matsui, T. Sato, T. S. H. Lee, Phys. Rev. C 72 (2005) 025204. D. Drechsel, S.S. Kamalov, L. Tiator (1999) D. Drechsel, S.S. Kamalov, L. Tiator, Nucl. Phys. A645 (1999) 145–174. Capuano (2011) C. L. Capuano, Parity-Violating Asymmetry in the Nucleon to $\Delta$ Transition: A Study of Inelastic Electron Scattering in the $G^{0}$ Experiment, Ph.D. thesis, The College of William and Mary, 2011. JLAB-PHY-12-1484. Hammer and Dreschel (1995) H. W. Hammer, D. Dreschel, Z. Phys A. 353 (1995) 321. Mukhopadhyay et al. (1998) N. C. Mukhopadhyay, et al., Nucl. Phys. A 633 (1998) 481. Adler (1968) S. L. Adler, Ann. Phys. 50 (1968) 189 – 311. Adler (1975) S. L. Adler, Phys. Rev. D 12 (1975) 2644–2665. Arrington et al. (2011) J. Arrington, K. de Jager, C. F. Perdrisat, J. Phys. : Conf. Ser. 299 (2011) 012002. Bernard et al. (2002) V. Bernard, L. Elouadrhiri, U.-G. Meissner, J. Phys. G 28 (2002) R1. Schreiner and von Hippel (1973) P. A. Schreiner, F. von Hippel, Nucl. Phys. B 58 (1973) 333–362. Jones and Petcov (1980) D. Jones, S. Petcov, Phys. Lett. B 91 (1980) 137 – 141. Zhu et al. (2000) S.-L. Zhu, S. J. Puglia, B. R. Holstein, M. J. Ramsey-Musolf, Phys. Rev. D 62 (2000) 033008. 
Androić et al. (2012) D. Androić, et al., Phys. Rev. Lett. 108 (2012) 122002. Hemmert et al. (1995) T. R. Hemmert, B. R. Holstein, N. C. Mukhopadhyay, Phys. Rev. D 51 (1995) 158–167. Androić et al. (2011) D. Androić, et al., Nucl. Instrum. Meth. A646 (2011) 59. CERN Applications Software Group (1993) CERN Applications Software Group, GEANT, Detector Description and Simulation Tool, 1993. CERN Program Library Long Writeup W5013, unpublished. $G^{0}$ Collaboration (tbd) $G^{0}$ Collaboration (tbd). In preparation. Hauger et al. (2001) M. Hauger, et al., Nucl. Instrum. Meth. A462 (2001) 382–392. Androić et al. (2010) D. Androić, et al., Phys. Rev. Lett. 104 (2010) 012001. Mo and Tsai (1969) L. Mo, Y. Tsai, Rev. Mod. Phys 41 (1969) 205. Tsai (1971) Y. Tsai, Radiative corrections to electron scatterings, 1971. SLAC-PUB-848.
The Hypothesis About Expansion of Multiple Stratonovich Stochastic Integrals of Arbitrary Multiplicity Dmitriy F. Kuznetsov Dmitriy Feliksovich Kuznetsov Peter the Great Saint-Petersburg Polytechnic University, Polytechnicheskaya ul., 29, 195251, Saint-Petersburg, Russia [email protected] Mathematics Subject Classification: 60H05, 60H10, 42B05, 42C10 Keywords: Multiple It$\hat{\rm o}$ stochastic integral, Multiple Stratonovich stochastic integral, Multiple Fourier series, Legendre Polynomial, Approximation, Expansion. Abstract. In this review article we collect more than ten theorems about expansions of multiple It$\hat{\rm o}$ and Stratonovich stochastic integrals, which were formulated and proved by the author. These theorems open a new direction for the study of the properties of multiple stochastic integrals. The expansions, based on multiple and repeated Fourier-Legendre series as well as on multiple and repeated trigonometric Fourier series, converging in the mean and pointwise, are presented. Some of these theorems are connected with multiple stochastic integrals of second, third and fourth multiplicity. We also consider two theorems about expansions of multiple It$\hat{\rm o}$ stochastic integrals of arbitrary multiplicity $k$, based on generalized multiple Fourier series converging in $L_{2}([t,T]^{k})$, and two theorems about expansions of multiple Stratonovich stochastic integrals of arbitrary multiplicity $k$, based on generalized repeated Fourier series converging pointwise. On the basis of the presented theorems we formulate the hypothesis about expansion of multiple Stratonovich stochastic integrals of arbitrary multiplicity $k$, based on generalized multiple Fourier series converging in $L_{2}([t,T]^{k}).$ The results of the article can be useful for the numerical integration of It$\hat{\rm o}$ stochastic differential equations. 1.
Introduction Let $(\Omega,$ ${\rm F},$ ${\sf P})$ be a complete probability space, let $\{{\rm F}_{t},t\in[0,T]\}$ be a nondecreasing right-continuous family of $\sigma$-subfields of ${\rm F},$ and let ${\bf f}_{t}$ be a standard $m$-dimensional Wiener stochastic process, which is ${\rm F}_{t}$-measurable for any $t\in[0,T].$ We assume that the components ${\bf f}_{t}^{(i)}$ $(i=1,\ldots,m)$ of this process are independent. As is well known [1] - [4], one of the effective approaches to the numerical integration of It$\hat{\rm o}$ stochastic differential equations (SDEs) is the approach based on Taylor-It$\hat{\rm o}$ and Taylor-Stratonovich expansions [1] - [6]. The most important feature of such expansions is the presence in them of so-called multiple It$\hat{\rm o}$ and Stratonovich stochastic integrals, which play the key role in solving the problem of numerical integration of It$\hat{\rm o}$ SDEs and have the following form: (1) $$J[\psi^{(k)}]_{T,t}=\int\limits_{t}^{T}\psi_{k}(t_{k})\ldots\int\limits_{t}^{t_{2}}\psi_{1}(t_{1})d{\bf w}_{t_{1}}^{(i_{1})}\ldots d{\bf w}_{t_{k}}^{(i_{k})},$$ (2) $$J^{*}[\psi^{(k)}]_{T,t}=\int\limits_{t}^{*T}\psi_{k}(t_{k})\ldots\int\limits_{t}^{*t_{2}}\psi_{1}(t_{1})d{\bf w}_{t_{1}}^{(i_{1})}\ldots d{\bf w}_{t_{k}}^{(i_{k})},$$ where every $\psi_{l}(\tau)$ $(l=1,\ldots,k)$ is a continuous non-random function on $[t,T];$ ${\bf w}_{\tau}^{(i)}={\bf f}_{\tau}^{(i)}$ for $i=1,\ldots,m$ and ${\bf w}_{\tau}^{(0)}=\tau;$ $i_{1},\ldots,i_{k}=0,\ 1,\ldots,m;$ and $$\int\limits\ \hbox{and}\ \int\limits^{*}$$ denote It$\hat{\rm o}$ and Stratonovich integrals, respectively. Note that $\psi_{l}(\tau)\equiv 1$ $(l=1,\ldots,k);$ $i_{1},\ldots,i_{k}=0,\ 1,\ldots,m$ in [1] - [4] and $\psi_{l}(\tau)\equiv(t-\tau)^{q_{l}}$ ($l=1,\ldots,k$; $q_{1},\ldots,q_{k}=0,1,2,\ldots$); $i_{1},\ldots,i_{k}=1,\ldots,m$ in [5], [6].
The effective solution of the problem of combined mean-square approximation for collections of multiple It$\hat{\rm o}$ and Stratonovich stochastic integrals (1) and (2) constitutes the subject of this article. In this context, let us say a few words about one approach. In [3] (see also [1], [2], [4], [7] - [14]), Milstein proposed to expand (2) into a repeated series in terms of products of standard Gaussian random variables by representing the Wiener process as a trigonometric Fourier series with random coefficients (the so-called Karhunen-Loeve expansion). To obtain the Milstein expansion of (2), the truncated Fourier expansions of the components of the Wiener process ${\bf f}_{s}$ must be iteratively substituted into the single integrals, and the integrals must be calculated, starting from the innermost integral. This is a complicated procedure that does not lead to a general expansion of (2) valid for an arbitrary multiplicity $k.$ For this reason, only expansions of the simplest single, double, and triple integrals (2) were presented in [1], [2], [7] ($k=1,2,3$) and in [3], [4], [9] ($k=1,2$) for the case $\psi_{1}(s),\psi_{2}(s),\psi_{3}(s)\equiv 1;$ $i_{1},i_{2},i_{3}=0,1,\ldots,m.$ Moreover, generally speaking, the approximations of the triple integrals in [1], [2], [7] may not converge in the mean-square sense to the appropriate triple integrals due to the iterated limit transitions in the Milstein method [3]. In this review article we consider another approach to the expansion of multiple It$\hat{\rm o}$ and Stratonovich stochastic integrals, based on generalized multiple Fourier series [15] - [34]. This approach (as we will see below) has some advantages and new possibilities in comparison with the Milstein expansion [3]. We collect more than 10 theorems [15] - [34] which were formulated and proved by the author and which develop the mentioned direction of investigations.
Moreover, on the basis of the presented theorems, we formulate the hypothesis [23] - [26] about the expansion of multiple Stratonovich stochastic integrals of arbitrary multiplicity $k.$ The results of the article prove the hypothesis in the cases $k=1,2,3,4$ and can be useful for the numerical integration of It$\hat{\rm o}$ SDEs. 2. The Hypothesis About Expansion of Multiple Stratonovich Stochastic Integrals of Arbitrary Multiplicity $k$ Taking into account Theorems 1-12 (see below), let us formulate the following hypothesis about the expansion of multiple Stratonovich stochastic integrals of arbitrary multiplicity $k$. Hypothesis 1 (see [23] - [26]). Assume that $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or trigonometric functions in the space $L_{2}([t,T])$. Then, for the multiple Stratonovich stochastic integral of $k$th multiplicity (3) $$I_{(\lambda_{1}\ldots\lambda_{k})T,t}^{*(i_{1}\ldots i_{k})}={\int\limits_{t}^{*}}^{T}\ldots{\int\limits_{t}^{*}}^{t_{3}}{\int\limits_{t}^{*}}^{t_{2}}d{\bf w}_{t_{1}}^{(i_{1})}d{\bf w}_{t_{2}}^{(i_{2})}\ldots d{\bf w}_{t_{k}}^{(i_{k})}\ $$ $$(i_{1},i_{2},\ldots,i_{k}=0,1,\ldots,m)$$ the following expansion, converging in the mean-square sense, (4) $$I_{(\lambda_{1}\ldots\lambda_{k})T,t}^{*(i_{1}\ldots i_{k})}={\rm l.i.m.}_{p\to\infty}\sum\limits_{j_{1},\ldots,j_{k}=0}^{p}C_{j_{k}\ldots j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\ldots\zeta_{j_{k}}^{(i_{k})}$$ is reasonable, where the Fourier coefficient $C_{j_{k}\ldots j_{2}j_{1}}$ has the form $$C_{j_{k}\ldots j_{2}j_{1}}=\int\limits_{t}^{T}\phi_{j_{k}}(t_{k})\ldots\int\limits_{t}^{t_{3}}\phi_{j_{2}}(t_{2})\int\limits_{t}^{t_{2}}\phi_{j_{1}}(t_{1})dt_{1}dt_{2}\ldots dt_{k};$$ ${\rm l.i.m.}$ is a limit in the mean-square sense; every $$\zeta_{j_{l}}^{(i_{l})}=\int\limits_{t}^{T}\phi_{j_{l}}(s)d{\bf w}_{s}^{(i_{l})}$$ is a standard
Gaussian random variable for various $i_{l}$ or $j_{l}$ (if $i_{l}\neq 0$); ${\bf w}_{\tau}^{(i)}={\bf f}_{\tau}^{(i)}$ $(i=1,\ldots,m)$ are independent standard Wiener processes and ${\bf w}_{\tau}^{(0)}=\tau;$ $\lambda_{l}=0$ if $i_{l}=0$ and $\lambda_{l}=1$ if $i_{l}=1,\ldots,m.$ Hypothesis 1 allows us to approximate the multiple Stratonovich stochastic integral $I_{(\lambda_{1}\ldots\lambda_{k})T,t}^{*(i_{1}\ldots i_{k})}$ by the sum: (5) $$I_{(\lambda_{1}\ldots\lambda_{k})T,t}^{*(i_{1}\ldots i_{k})p}=\sum\limits_{j_{1},\ldots,j_{k}=0}^{p}C_{j_{k}\ldots j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\ldots\zeta_{j_{k}}^{(i_{k})},$$ where $$\lim_{p\to\infty}{\sf M}\left\{\left(I_{(\lambda_{1}\ldots\lambda_{k})T,t}^{*(i_{1}\ldots i_{k})}-I_{(\lambda_{1}\ldots\lambda_{k})T,t}^{*(i_{1}\ldots i_{k})p}\right)^{2}\right\}=0.$$ The integrals (3) are integrals from the Taylor–Stratonovich expansion [1] - [4]. This means that the approximations (5) may be very useful for the numerical integration of It$\hat{\rm o}$ SDEs. The expansion (4) involves only one operation of limit transition and for this reason is convenient for the approximation of multiple Stratonovich stochastic integrals. In the next section we consider two theorems about the expansion of multiple Stratonovich stochastic integrals of arbitrary multiplicity. On the one hand, the integrals presented in Theorems 1 and 2 (see below) have a more general form in comparison with (3). On the other hand, Theorems 1 and 2 contain an iterated operation of limit transition in comparison with Hypothesis 1. This feature creates some difficulties for the estimation of the mean-square error of approximation of multiple Stratonovich stochastic integrals. Nevertheless, Theorems 1 and 2 contain the same terms of the expansions of multiple Stratonovich stochastic integrals as in Hypothesis 1. 3.
Expansion of Multiple Stratonovich Stochastic Integrals of Arbitrary Multiplicity, Based on Generalized Repeated Fourier Series, Converging Pointwise Let us formulate the following statement. Theorem 1 (see [15], [20] - [26], [29]). Suppose that every $\psi_{l}(\tau)$ $(l=1,\ldots,k)$ is a continuously differentiable function on the interval $[t,T]$ and $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or trigonometric functions in the space $L_{2}([t,T])$. Then the multiple Stratonovich stochastic integral $$J^{*}[\psi^{(k)}]_{T,t}=\int\limits_{t}^{*T}\psi_{k}(t_{k})\ldots\int\limits_{t}^{*t_{2}}\psi_{1}(t_{1})d{\bf w}_{t_{1}}^{(i_{1})}\ldots d{\bf w}_{t_{k}}^{(i_{k})}$$ $$(i_{1},i_{2},\ldots,i_{k}=0,1,\ldots,m)$$ of the type (2) is expanded into the repeated series (6) $$J^{*}[\psi^{(k)}]_{T,t}=\sum_{j_{1}=0}^{\infty}\ldots\sum_{j_{k}=0}^{\infty}C_{j_{k}\ldots j_{1}}\prod_{l=1}^{k}\zeta^{(i_{l})}_{j_{l}},$$ converging in the mean of degree $2n$, $n\in N$, where every $$\zeta_{j_{l}}^{(i_{l})}=\int\limits_{t}^{T}\phi_{j_{l}}(s)d{\bf w}_{s}^{(i_{l})}$$ is a standard Gaussian random variable for different $i_{l}$ or $j_{l}$ (if $i_{l}\neq 0$); the Fourier coefficient $C_{j_{k}\ldots j_{1}}$ has the form $$C_{j_{k}\ldots j_{1}}=\int\limits_{[t,T]^{k}}K^{*}(t_{1},\ldots,t_{k})\prod_{l=1}^{k}\phi_{j_{l}}(t_{l})dt_{1}\ldots dt_{k}=$$ (7) $$=\int\limits_{t}^{T}\psi_{k}(t_{k})\phi_{j_{k}}(t_{k})\ldots\int\limits_{t}^{t_{3}}\psi_{2}(t_{2})\phi_{j_{2}}(t_{2})\int\limits_{t}^{t_{2}}\psi_{1}(t_{1})\phi_{j_{1}}(t_{1})dt_{1}dt_{2}\ldots dt_{k},$$ where $$K^{*}(t_{1},\ldots,t_{k})=\prod\limits_{l=1}^{k}\psi_{l}(t_{l})\prod_{l=1}^{k-1}\left({\bf 1}_{\{t_{l}<t_{l+1}\}}+\frac{1}{2}{\bf 1}_{\{t_{l}=t_{l+1}\}}\right);\ t_{1},\ldots,t_{k}\in[t,T];\ k\geq 2,$$ and $K^{*}(t_{1})=\psi_{1}(t_{1});\ t_{1}\in[t,T],$ where ${\bf 1}_{A}$ is the indicator of the set $A$.
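For $k=2$ and $\psi_{1}=\psi_{2}\equiv 1$ the coefficients (7) satisfy the elementary identity $C_{j_{2}j_{1}}+C_{j_{1}j_{2}}=\int_{t}^{T}\phi_{j_{1}}(s)ds\cdot\int_{t}^{T}\phi_{j_{2}}(s)ds$, which gives a convenient numerical check. The following Python sketch (our illustration, not part of the original exposition; the interval and grid size are arbitrary choices) computes $C_{j_{2}j_{1}}$ for orthonormal Legendre polynomials by trapezoidal quadrature:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

t, T = 0.0, 1.0
s = np.linspace(t, T, 20001)  # quadrature grid on [t, T]

def phi(j, x):
    # orthonormal Legendre polynomial phi_j on [t, T]
    c = np.zeros(j + 1); c[j] = 1.0
    z = 2.0 * (x - t) / (T - t) - 1.0
    return np.sqrt((2 * j + 1) / (T - t)) * Legendre(c)(z)

def trap(y, x):
    # trapezoidal rule for int y dx on the grid x
    return float(np.sum((y[1:] + y[:-1]) / 2.0 * np.diff(x)))

def C(j2, j1):
    # C_{j2 j1} = int_t^T phi_{j2}(t2) int_t^{t2} phi_{j1}(t1) dt1 dt2  (psi = 1)
    f1 = phi(j1, s)
    inner = np.zeros_like(s)
    inner[1:] = np.cumsum((f1[1:] + f1[:-1]) / 2.0 * np.diff(s))
    return trap(phi(j2, s) * inner, s)

print(C(0, 0))  # analytically (T-t)/2 = 0.5
print(C(0, 1) + C(1, 0))  # analytically 0 for j1 != j2 with Legendre
```

For orthonormal Legendre polynomials only $\phi_{0}$ has a nonzero integral over $[t,T]$, so the right-hand side of the identity vanishes unless $j_{1}=j_{2}=0$; the quadrature values reproduce this to grid accuracy.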
Note that (6) means the following: (8) $$\lim\limits_{p_{1}\to\infty}\ldots\lim\limits_{p_{k}\to\infty}{\sf M}\biggl\{\biggl(J^{*}[\psi^{(k)}]_{T,t}-\sum_{j_{1}=0}^{p_{1}}\ldots\sum_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\prod_{l=1}^{k}\zeta^{(i_{l})}_{j_{l}}\biggr)^{2n}\biggr\}=0,\ n\in N.$$ Let us consider a variant of Theorem 1. Theorem 2 (see [20] - [26], [29]). Suppose that every $\psi_{l}(\tau)$ $(l=1,\ldots,k)$ is a continuously differentiable function on the interval $[t,T]$ and $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or trigonometric functions in the space $L_{2}([t,T])$. Then the multiple Stratonovich stochastic integral $$J^{*}[\psi^{(k)}]_{T,t}=\int\limits_{t}^{*T}\psi_{k}(t_{k})\ldots\int\limits_{t}^{*t_{2}}\psi_{1}(t_{1})d{\bf w}_{t_{1}}^{(i_{1})}\ldots d{\bf w}_{t_{k}}^{(i_{k})}$$ $$(i_{1},i_{2},\ldots,i_{k}=0,1,\ldots,m)$$ of the type (2) is expanded into the repeated series $$J^{*}[\psi^{(k)}]_{T,t}=\sum_{j_{k}=0}^{\infty}\ldots\sum_{j_{1}=0}^{\infty}C_{j_{k}\ldots j_{1}}\prod_{l=1}^{k}\zeta^{(i_{l})}_{j_{l}},$$ converging in the mean of degree $2n$, $n\in N$; for the remaining notation see Theorem 1. It is not difficult to see that if $\psi_{l}(s)\equiv 1$ $(l=1,\ldots,k)$, then the terms of the expansions (4) and (6) coincide. However, as mentioned before, the expansion (4) contains only one operation of passage to the limit, while the expansion (6) contains an iterated operation of passage to the limit. The approach considered in the next section, as it turned out, gives the key to the proof of Hypothesis 1, at least for the cases $k=1,\ 2,\ 3,\ 4.$ 4. Expansion of Multiple It$\hat{\rm o}$ Stochastic Integrals of Arbitrary Multiplicity, Based on Generalized Multiple Fourier Series, Converging in the Mean Suppose that every $\psi_{l}(\tau)$ $(l=1,\ldots,k)$ is a continuous non-random function on $[t,T]$.
Define the following function on the hypercube $[t,T]^{k}:$ (9) $$K(t_{1},\ldots,t_{k})=\prod\limits_{l=1}^{k}\psi_{l}(t_{l})\prod\limits_{l=1}^{k-1}{\bf 1}_{\{t_{l}<t_{l+1}\}};\ t_{1},\ldots,t_{k}\in[t,T];\ k\geq 2,$$ and $K(t_{1})=\psi_{1}(t_{1});\ t_{1}\in[t,T],$ where ${\bf 1}_{A}$ is the indicator of the set $A$. Suppose that $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of functions in $L_{2}([t,T])$. The function $K(t_{1},\ldots,t_{k})$ is piecewise continuous in the hypercube $[t,T]^{k}.$ In this situation it is well known that the multiple Fourier series of $K(t_{1},\ldots,t_{k})\in L_{2}([t,T]^{k})$ converges to $K(t_{1},\ldots,t_{k})$ in the hypercube $[t,T]^{k}$ in the mean-square sense, i.e. (10) $$\lim\limits_{p_{1},\ldots,p_{k}\to\infty}\biggl\|K(t_{1},\ldots,t_{k})-\sum_{j_{1}=0}^{p_{1}}\ldots\sum_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\prod_{l=1}^{k}\phi_{j_{l}}(t_{l})\biggr\|=0,$$ where (11) $$C_{j_{k}\ldots j_{1}}=\int\limits_{[t,T]^{k}}K(t_{1},\ldots,t_{k})\prod_{l=1}^{k}\phi_{j_{l}}(t_{l})dt_{1}\ldots dt_{k}$$ and $$\left\|f\right\|^{2}=\int\limits_{[t,T]^{k}}f^{2}(t_{1},\ldots,t_{k})dt_{1}\ldots dt_{k}.$$ Consider the partition $\{\tau_{j}\}_{j=0}^{N}$ of $[t,T]$ such that (12) $$t=\tau_{0}<\ldots<\tau_{N}=T,\ \Delta_{N}=\max\limits_{0\leq j\leq N-1}\Delta\tau_{j}\to 0\ \hbox{if}\ N\to\infty,\ \Delta\tau_{j}=\tau_{j+1}-\tau_{j}.$$ Theorem 3 (see [16] - [26], [31]).
Suppose that every $\psi_{l}(\tau)$ $(l=1,\ldots,k)$ is a continuous function on $[t,T]$ and $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of continuous functions in $L_{2}([t,T]).$ Then (13) $$J[\psi^{(k)}]_{T,t}=\mathop{{\rm l.i.m.}}\limits_{p_{1},\ldots,p_{k}\to\infty}\sum_{j_{1}=0}^{p_{1}}\ldots\sum_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\biggl(\prod_{g=1}^{k}\zeta_{j_{g}}^{(i_{g})}-\mathop{{\rm l.i.m.}}\limits_{N\to\infty}\sum_{(l_{1},\ldots,l_{k})\in{\rm G}_{k}}\prod_{g=1}^{k}\phi_{j_{g}}(\tau_{l_{g}})\Delta{\bf w}_{\tau_{l_{g}}}^{(i_{g})}\biggr),$$ where $${\rm G}_{k}={\rm H}_{k}\backslash{\rm L}_{k};\ {\rm H}_{k}=\{(l_{1},\ldots,l_{k}):\ l_{1},\ldots,l_{k}=0,\ 1,\ldots,N-1\};$$ $${\rm L}_{k}=\{(l_{1},\ldots,l_{k}):\ l_{1},\ldots,l_{k}=0,\ 1,\ldots,N-1;\ l_{g}\neq l_{r}\ (g\neq r);\ g,r=1,\ldots,k\};$$ ${\rm l.i.m.}$ is a limit in the mean-square sense; $i_{1},\ldots,i_{k}=0,1,\ldots,m;$ every (14) $$\zeta_{j}^{(i)}=\int\limits_{t}^{T}\phi_{j}(s)d{\bf w}_{s}^{(i)}$$ is a standard Gaussian random variable for various $i$ or $j$ (if $i\neq 0$); $C_{j_{k}\ldots j_{1}}$ is the Fourier coefficient (11); $\Delta{\bf w}_{\tau_{j}}^{(i)}={\bf w}_{\tau_{j+1}}^{(i)}-{\bf w}_{\tau_{j}}^{(i)}$ $(i=0,\ 1,\ldots,m);$ $\left\{\tau_{j}\right\}_{j=0}^{N}$ is a partition of $[t,T]$ which satisfies the condition (12).
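The random variables (14) are straightforward to realize from the same Wiener increments that drive a numerical scheme. A minimal Python sketch (our illustration; the interval, step count and path count are arbitrary assumptions) builds $\zeta_{j}^{(i)}$ from simulated increments and checks that they are approximately uncorrelated with unit variance:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(0)
t, T = 0.0, 1.0
N, M = 2000, 20000                 # time steps, simulated paths
tau = np.linspace(t, T, N + 1)
dtau = np.diff(tau)

def phi(j, x):
    # orthonormal Legendre polynomial phi_j on [t, T]
    c = np.zeros(j + 1); c[j] = 1.0
    z = 2.0 * (x - t) / (T - t) - 1.0
    return np.sqrt((2 * j + 1) / (T - t)) * Legendre(c)(z)

# Wiener increments Delta w_{tau_l}, one row per simulated path
dW = rng.normal(0.0, np.sqrt(dtau), size=(M, N))

# zeta_j ~ int_t^T phi_j(s) dW_s, approximated by a left-point Riemann sum
zeta = np.stack([(phi(j, tau[:-1]) * dW).sum(axis=1) for j in range(3)])

print(zeta.var(axis=1))            # each component close to 1
print(np.mean(zeta[0] * zeta[1]))  # close to 0
```

The sample statistics match the theoretical ones up to Monte Carlo and discretization error; the same left-point sums appear in the correction term of (13).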
Let’s consider transformed particular cases of the theorem 3 for $k=1,\ldots,5$ [16] - [26] (the cases $k=6$ and 7 see in [19] - [21]): (15) $${J[\psi^{(1)}]_{T,t}=\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$% \stackrel{{\scriptstyle}}{{{}_{p_{1}\to\infty}}}$}} }\sum_{j_{1}=0}^{p_{1}}C_{% j_{1}}\zeta_{j_{1}}^{(i_{1})},$$ (16) $${J[\psi^{(2)}]_{T,t}=\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$% \stackrel{{\scriptstyle}}{{{}_{p_{1},p_{2}\to\infty}}}$}} }\sum_{j_{1}=0}^{p_{% 1}}\sum_{j_{2}=0}^{p_{2}}C_{j_{2}j_{1}}\biggl{(}\zeta_{j_{1}}^{(i_{1})}\zeta_{% j_{2}}^{(i_{2})}-{\bf 1}_{\{i_{1}=i_{2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}% \biggr{)},$$ $${J[\psi^{(3)}]_{T,t}=\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$% \stackrel{{\scriptstyle}}{{{}_{p_{1},\ldots,p_{3}\to\infty}}}$}} }\sum_{j_{1}=% 0}^{p_{1}}\sum_{j_{2}=0}^{p_{2}}\sum_{j_{3}=0}^{p_{3}}C_{j_{3}j_{2}j_{1}}% \biggl{(}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{3}}^{(i_{3})}% -\biggr{.}$$ (17) $$-{\bf 1}_{\{i_{1}=i_{2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}\zeta_{j_{3}}^{(i_{3}% )}-{\bf 1}_{\{i_{2}=i_{3}\neq 0\}}{\bf 1}_{\{j_{2}=j_{3}\}}\zeta_{j_{1}}^{(i_{% 1})}-{\bf 1}_{\{i_{1}=i_{3}\neq 0\}}{\bf 1}_{\{j_{1}=j_{3}\}}\zeta_{j_{2}}^{(i% _{2})}\biggr{)},$$ $${J[\psi^{(4)}]_{T,t}=\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$% \stackrel{{\scriptstyle}}{{{}_{p_{1},\ldots,p_{4}\to\infty}}}$}} }\sum_{j_{1}=% 0}^{p_{1}}\ldots\sum_{j_{4}=0}^{p_{4}}C_{j_{4}\ldots j_{1}}\biggl{(}\prod_{l=1% }^{4}\zeta_{j_{l}}^{(i_{l})}\biggr{.}-$$ $$-{\bf 1}_{\{i_{1}=i_{2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}\zeta_{j_{3}}^{(i_{3}% )}\zeta_{j_{4}}^{(i_{4})}-{\bf 1}_{\{i_{1}=i_{3}\neq 0\}}{\bf 1}_{\{j_{1}=j_{3% }\}}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{4}}^{(i_{4})}-$$ $$-{\bf 1}_{\{i_{1}=i_{4}\neq 0\}}{\bf 1}_{\{j_{1}=j_{4}\}}\zeta_{j_{2}}^{(i_{2}% )}\zeta_{j_{3}}^{(i_{3})}-{\bf 1}_{\{i_{2}=i_{3}\neq 0\}}{\bf 1}_{\{j_{2}=j_{3% }\}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{4}}^{(i_{4})}-$$ $$-{\bf 
1}_{\{i_{2}=i_{4}\neq 0\}}{\bf 1}_{\{j_{2}=j_{4}\}}\zeta_{j_{1}}^{(i_{1}% )}\zeta_{j_{3}}^{(i_{3})}-{\bf 1}_{\{i_{3}=i_{4}\neq 0\}}{\bf 1}_{\{j_{3}=j_{4% }\}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}+$$ $$+{\bf 1}_{\{i_{1}=i_{2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}{\bf 1}_{\{i_{3}=i_{4% }\neq 0\}}{\bf 1}_{\{j_{3}=j_{4}\}}+{\bf 1}_{\{i_{1}=i_{3}\neq 0\}}{\bf 1}_{\{% j_{1}=j_{3}\}}{\bf 1}_{\{i_{2}=i_{4}\neq 0\}}{\bf 1}_{\{j_{2}=j_{4}\}}+$$ (18) $$+\biggl{.}{\bf 1}_{\{i_{1}=i_{4}\neq 0\}}{\bf 1}_{\{j_{1}=j_{4}\}}{\bf 1}_{\{i% _{2}=i_{3}\neq 0\}}{\bf 1}_{\{j_{2}=j_{3}\}}\biggr{)},$$ $${J[\psi^{(5)}]_{T,t}=\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$% \stackrel{{\scriptstyle}}{{{}_{p_{1},\ldots,p_{5}\to\infty}}}$}} }\sum_{j_{1}=% 0}^{p_{1}}\ldots\sum_{j_{5}=0}^{p_{5}}C_{j_{5}\ldots j_{1}}\Biggl{(}\prod_{l=1% }^{5}\zeta_{j_{l}}^{(i_{l})}-\Biggr{.}$$ $$-{\bf 1}_{\{i_{1}=i_{2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}\zeta_{j_{3}}^{(i_{3}% )}\zeta_{j_{4}}^{(i_{4})}\zeta_{j_{5}}^{(i_{5})}-{\bf 1}_{\{i_{1}=i_{3}\neq 0% \}}{\bf 1}_{\{j_{1}=j_{3}\}}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{4}}^{(i_{4})}% \zeta_{j_{5}}^{(i_{5})}-$$ $$-{\bf 1}_{\{i_{1}=i_{4}\neq 0\}}{\bf 1}_{\{j_{1}=j_{4}\}}\zeta_{j_{2}}^{(i_{2}% )}\zeta_{j_{3}}^{(i_{3})}\zeta_{j_{5}}^{(i_{5})}-{\bf 1}_{\{i_{1}=i_{5}\neq 0% \}}{\bf 1}_{\{j_{1}=j_{5}\}}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{3}}^{(i_{3})}% \zeta_{j_{4}}^{(i_{4})}-$$ $$-{\bf 1}_{\{i_{2}=i_{3}\neq 0\}}{\bf 1}_{\{j_{2}=j_{3}\}}\zeta_{j_{1}}^{(i_{1}% )}\zeta_{j_{4}}^{(i_{4})}\zeta_{j_{5}}^{(i_{5})}-{\bf 1}_{\{i_{2}=i_{4}\neq 0% \}}{\bf 1}_{\{j_{2}=j_{4}\}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{3}}^{(i_{3})}% \zeta_{j_{5}}^{(i_{5})}-$$ $$-{\bf 1}_{\{i_{2}=i_{5}\neq 0\}}{\bf 1}_{\{j_{2}=j_{5}\}}\zeta_{j_{1}}^{(i_{1}% )}\zeta_{j_{3}}^{(i_{3})}\zeta_{j_{4}}^{(i_{4})}-{\bf 1}_{\{i_{3}=i_{4}\neq 0% \}}{\bf 1}_{\{j_{3}=j_{4}\}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}% \zeta_{j_{5}}^{(i_{5})}-$$ $$-{\bf 1}_{\{i_{3}=i_{5}\neq 0\}}{\bf 
1}_{\{j_{3}=j_{5}\}}\zeta_{j_{1}}^{(i_{1}% )}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{4}}^{(i_{4})}-{\bf 1}_{\{i_{4}=i_{5}\neq 0% \}}{\bf 1}_{\{j_{4}=j_{5}\}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}% \zeta_{j_{3}}^{(i_{3})}+$$ $$+{\bf 1}_{\{i_{1}=i_{2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}{\bf 1}_{\{i_{3}=i_{4% }\neq 0\}}{\bf 1}_{\{j_{3}=j_{4}\}}\zeta_{j_{5}}^{(i_{5})}+{\bf 1}_{\{i_{1}=i_% {2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}{\bf 1}_{\{i_{3}=i_{5}\neq 0\}}{\bf 1}_{% \{j_{3}=j_{5}\}}\zeta_{j_{4}}^{(i_{4})}+$$ $$+{\bf 1}_{\{i_{1}=i_{2}\neq 0\}}{\bf 1}_{\{j_{1}=j_{2}\}}{\bf 1}_{\{i_{4}=i_{5% }\neq 0\}}{\bf 1}_{\{j_{4}=j_{5}\}}\zeta_{j_{3}}^{(i_{3})}+{\bf 1}_{\{i_{1}=i_% {3}\neq 0\}}{\bf 1}_{\{j_{1}=j_{3}\}}{\bf 1}_{\{i_{2}=i_{4}\neq 0\}}{\bf 1}_{% \{j_{2}=j_{4}\}}\zeta_{j_{5}}^{(i_{5})}+$$ $$+{\bf 1}_{\{i_{1}=i_{3}\neq 0\}}{\bf 1}_{\{j_{1}=j_{3}\}}{\bf 1}_{\{i_{2}=i_{5% }\neq 0\}}{\bf 1}_{\{j_{2}=j_{5}\}}\zeta_{j_{4}}^{(i_{4})}+{\bf 1}_{\{i_{1}=i_% {3}\neq 0\}}{\bf 1}_{\{j_{1}=j_{3}\}}{\bf 1}_{\{i_{4}=i_{5}\neq 0\}}{\bf 1}_{% \{j_{4}=j_{5}\}}\zeta_{j_{2}}^{(i_{2})}+$$ $$+{\bf 1}_{\{i_{1}=i_{4}\neq 0\}}{\bf 1}_{\{j_{1}=j_{4}\}}{\bf 1}_{\{i_{2}=i_{3% }\neq 0\}}{\bf 1}_{\{j_{2}=j_{3}\}}\zeta_{j_{5}}^{(i_{5})}+{\bf 1}_{\{i_{1}=i_% {4}\neq 0\}}{\bf 1}_{\{j_{1}=j_{4}\}}{\bf 1}_{\{i_{2}=i_{5}\neq 0\}}{\bf 1}_{% \{j_{2}=j_{5}\}}\zeta_{j_{3}}^{(i_{3})}+$$ $$+{\bf 1}_{\{i_{1}=i_{4}\neq 0\}}{\bf 1}_{\{j_{1}=j_{4}\}}{\bf 1}_{\{i_{3}=i_{5% }\neq 0\}}{\bf 1}_{\{j_{3}=j_{5}\}}\zeta_{j_{2}}^{(i_{2})}+{\bf 1}_{\{i_{1}=i_% {5}\neq 0\}}{\bf 1}_{\{j_{1}=j_{5}\}}{\bf 1}_{\{i_{2}=i_{3}\neq 0\}}{\bf 1}_{% \{j_{2}=j_{3}\}}\zeta_{j_{4}}^{(i_{4})}+$$ $$+{\bf 1}_{\{i_{1}=i_{5}\neq 0\}}{\bf 1}_{\{j_{1}=j_{5}\}}{\bf 1}_{\{i_{2}=i_{4% }\neq 0\}}{\bf 1}_{\{j_{2}=j_{4}\}}\zeta_{j_{3}}^{(i_{3})}+{\bf 1}_{\{i_{1}=i_% {5}\neq 0\}}{\bf 1}_{\{j_{1}=j_{5}\}}{\bf 1}_{\{i_{3}=i_{4}\neq 0\}}{\bf 1}_{% \{j_{3}=j_{4}\}}\zeta_{j_{2}}^{(i_{2})}+$$ $$+{\bf 1}_{\{i_{2}=i_{3}\neq 0\}}{\bf 1}_{\{j_{2}=j_{3}\}}{\bf 
1}_{\{i_{4}=i_{5}\neq 0\}}{\bf 1}_{\{j_{4}=j_{5}\}}\zeta_{j_{1}}^{(i_{1})}+{\bf 1}_{\{i_{2}=i_{4}\neq 0\}}{\bf 1}_{\{j_{2}=j_{4}\}}{\bf 1}_{\{i_{3}=i_{5}\neq 0\}}{\bf 1}_{\{j_{3}=j_{5}\}}\zeta_{j_{1}}^{(i_{1})}+$$ (19) $$+\Biggl{.}{\bf 1}_{\{i_{2}=i_{5}\neq 0\}}{\bf 1}_{\{j_{2}=j_{5}\}}{\bf 1}_{\{i_{3}=i_{4}\neq 0\}}{\bf 1}_{\{j_{3}=j_{4}\}}\zeta_{j_{1}}^{(i_{1})}\Biggr{)},$$ where ${\bf 1}_{A}$ is the indicator of the set $A$. The convergence in the mean of degree $2n,$ $n\in N,$ in Theorem 3 is proven in [16] - [26], [29]. The convergence with probability 1 (w. p. 1) of some approximations from Theorem 3 for multiple It$\hat{\rm o}$ stochastic integrals of multiplicities 1 and 2 (the case of binomial weight functions and Legendre polynomials) is proven in [22] - [26], [30]. As it turned out, Theorem 3 remains valid for some complete orthonormal systems of discontinuous functions in $L_{2}([t,T]).$ For example, Theorem 3 is valid for the system of Haar functions as well as for the system of Rademacher-Walsh functions [16] - [26]. In [16] - [26] we demonstrate that the approach to the expansion of multiple It$\hat{\rm o}$ stochastic integrals considered in Theorem 3 is essentially general and admits some modifications for other types of multiple stochastic integrals. In [16] - [26] we considered the versions of Theorem 3 for multiple stochastic integrals with respect to martingale Poisson measures and for multiple stochastic integrals with respect to martingales. The theorems obtained are quite natural in view of the general properties of martingales. Moreover, Theorem 3 allows one to calculate exactly the mean-square error of approximation of multiple It$\hat{\rm o}$ stochastic integrals of arbitrary multiplicity (see [26], [28]). Here an approximation is understood as a prelimit expression in (13). Let us generalize formulas (15) – (19) to the case of an arbitrary multiplicity of $J[\psi^{(k)}]_{T,t}$.
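Before turning to the general case, the simplest nontrivial particular case admits a direct numerical check: for $\psi_{1}=\psi_{2}\equiv 1$ and $i_{1}\neq i_{2}$, formula (16) has no correction term, and the truncated series can be compared against a direct discretization of the double It$\hat{\rm o}$ integral. The Python sketch below (our illustration only; the interval, truncation orders, grid and path counts are arbitrary assumptions) estimates the mean-square error of the truncated expansion and shows that it decreases as the truncation order grows:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

rng = np.random.default_rng(1)
t, T = 0.0, 1.0
N, M = 1000, 5000                       # time steps, simulated paths
tau = np.linspace(t, T, N + 1)
dtau = np.diff(tau)

def phi(j, x):
    # orthonormal Legendre polynomial phi_j on [t, T]
    c = np.zeros(j + 1); c[j] = 1.0
    z = 2.0 * (x - t) / (T - t) - 1.0
    return np.sqrt((2 * j + 1) / (T - t)) * Legendre(c)(z)

# Fourier coefficients C_{j2 j1} for psi = 1, computed by quadrature
q = np.linspace(t, T, 4001)
def C(j2, j1):
    f1 = phi(j1, q)
    inner = np.zeros_like(q)
    inner[1:] = np.cumsum((f1[1:] + f1[:-1]) / 2.0 * np.diff(q))
    g = phi(j2, q) * inner
    return float(np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(q)))

dW1 = rng.normal(0.0, np.sqrt(dtau), size=(M, N))
dW2 = rng.normal(0.0, np.sqrt(dtau), size=(M, N))

# reference: direct discretization of the double Ito integral (i1 != i2)
W1 = np.hstack([np.zeros((M, 1)), np.cumsum(dW1, axis=1)])
J = (W1[:, :-1] * dW2).sum(axis=1)

def J_trunc(p):
    # truncated expansion (16); no correction term since i1 != i2
    z1 = np.stack([(phi(j, tau[:-1]) * dW1).sum(axis=1) for j in range(p)])
    z2 = np.stack([(phi(j, tau[:-1]) * dW2).sum(axis=1) for j in range(p)])
    Cm = np.array([[C(j2, j1) for j1 in range(p)] for j2 in range(p)])
    return np.einsum('ba,am,bm->m', Cm, z1, z2)

mse2 = np.mean((J - J_trunc(2)) ** 2)   # about (T-t)^2/12 for Legendre
mse8 = np.mean((J - J_trunc(8)) ** 2)
print(mse2, mse8)                        # the error decreases with p
```

Since the expansion terms are orthogonal projections, the residual mean-square error equals $(T-t)^{2}/2-\sum_{j_{1},j_{2}<p}C_{j_{2}j_{1}}^{2}$, which the Monte Carlo estimates reproduce up to sampling noise.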
In order to do this we introduce some notation. Consider the unordered set $\{1,2,\ldots,k\}$ and separate it into two parts: the first part consists of $r$ unordered pairs (the sequence order of these pairs is also unimportant) and the second one consists of the remaining $k-2r$ numbers. So, we have: (20) $$(\{\underbrace{\{g_{1},g_{2}\},\ldots,\{g_{2r-1},g_{2r}\}}_{\small{\hbox{part 1}}}\},\{\underbrace{q_{1},\ldots,q_{k-2r}}_{\small{\hbox{part 2}}}\}),$$ where $\{g_{1},g_{2},\ldots,g_{2r-1},g_{2r},q_{1},\ldots,q_{k-2r}\}=\{1,2,\ldots,k\},$ curly braces denote an unordered set, and round braces denote an ordered set. We call (20) a partition and consider the sum over all possible partitions: (21) $$\sum_{\stackrel{\scriptstyle(\{\{g_{1},g_{2}\},\ldots,\{g_{2r-1},g_{2r}\}\},\{q_{1},\ldots,q_{k-2r}\})}{{}_{\{g_{1},g_{2},\ldots,g_{2r-1},g_{2r},q_{1},\ldots,q_{k-2r}\}=\{1,2,\ldots,k\}}}}a_{g_{1}g_{2},\ldots,g_{2r-1}g_{2r},q_{1}\ldots q_{k-2r}}.$$ Let us consider some examples of sums of the form (21): $$\sum_{\stackrel{\scriptstyle(\{g_{1},g_{2}\},\{q_{1},q_{2},q_{3}\})}{{}_{\{g_{1},g_{2},q_{1},q_{2},q_{3}\}=\{1,2,3,4,5\}}}}a_{g_{1}g_{2},q_{1}q_{2}q_{3}}=a_{12,345}+a_{13,245}+a_{14,235}+a_{15,234}+a_{23,145}+a_{24,135}+$$ $$+a_{25,134}+a_{34,125}+a_{35,124}+a_{45,123},$$ $$\sum_{\stackrel{\scriptstyle(\{\{g_{1},g_{2}\},\{g_{3},g_{4}\}\},\{q_{1}\})}{{}_{\{g_{1},g_{2},g_{3},g_{4},q_{1}\}=\{1,2,3,4,5\}}}}a_{g_{1}g_{2},g_{3}g_{4},q_{1}}=a_{12,34,5}+a_{13,24,5}+a_{14,23,5}+a_{12,35,4}+a_{13,25,4}+a_{15,23,4}+$$ $$+a_{12,54,3}+a_{15,24,3}+a_{14,25,3}+a_{15,34,2}+a_{13,54,2}+a_{14,53,2}+a_{52,34,1}+a_{53,24,1}+a_{54,23,1}.$$ Now we can reformulate Theorem 3 (formula (13)) in an alternative and more convenient form. Theorem 4 (see [20] - [26], [31]).
Under the conditions of Theorem 3 the following expansion, converging in the mean-square sense, is valid: $$J[\psi^{(k)}]_{T,t}=\mathop{{\rm l.i.m.}}\limits_{p_{1},\ldots,p_{k}\to\infty}\sum\limits_{j_{1}=0}^{p_{1}}\ldots\sum\limits_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\Biggl{(}\prod_{l=1}^{k}\zeta_{j_{l}}^{(i_{l})}+\Biggr{.}$$ (22) $$+\sum\limits_{r=1}^{[k/2]}(-1)^{r}\cdot\sum_{\stackrel{\scriptstyle(\{\{g_{1},g_{2}\},\ldots,\{g_{2r-1},g_{2r}\}\},\{q_{1},\ldots,q_{k-2r}\})}{{}_{\{g_{1},g_{2},\ldots,g_{2r-1},g_{2r},q_{1},\ldots,q_{k-2r}\}=\{1,2,\ldots,k\}}}}\prod\limits_{s=1}^{r}{\bf 1}_{\{i_{g_{2s-1}}=i_{g_{2s}}\neq 0\}}\Biggl{.}{\bf 1}_{\{j_{g_{2s-1}}=j_{g_{2s}}\}}\prod_{l=1}^{k-2r}\zeta_{j_{q_{l}}}^{(i_{q_{l}})}\Biggr{)}.$$ In particular, from (22) for $k=5$ we obtain: $$J[\psi^{(5)}]_{T,t}=\sum_{j_{1},\ldots,j_{5}=0}^{\infty}C_{j_{5}\ldots j_{1}}\Biggl{(}\prod_{l=1}^{5}\zeta_{j_{l}}^{(i_{l})}-\Biggr{.}\sum\limits_{\stackrel{\scriptstyle(\{g_{1},g_{2}\},\{q_{1},q_{2},q_{3}\})}{{}_{\{g_{1},g_{2},q_{1},q_{2},q_{3}\}=\{1,2,3,4,5\}}}}{\bf 1}_{\{i_{g_{1}}=i_{g_{2}}\neq 0\}}{\bf 1}_{\{j_{g_{1}}=j_{g_{2}}\}}\prod_{l=1}^{3}\zeta_{j_{q_{l}}}^{(i_{q_{l}})}+$$ $$+\sum_{\stackrel{\scriptstyle(\{\{g_{1},g_{2}\},\{g_{3},g_{4}\}\},\{q_{1}\})}{{}_{\{g_{1},g_{2},g_{3},g_{4},q_{1}\}=\{1,2,3,4,5\}}}}{\bf 1}_{\{i_{g_{1}}=i_{g_{2}}\neq 0\}}{\bf 1}_{\{j_{g_{1}}=j_{g_{2}}\}}\Biggl{.}{\bf 1}_{\{i_{g_{3}}=i_{g_{4}}\neq 0\}}{\bf 1}_{\{j_{g_{3}}=j_{g_{4}}\}}\zeta_{j_{q_{1}}}^{(i_{q_{1}})}\Biggr{)}.$$ The last equality obviously agrees with (19). 5.
The Idea of the Proof of the Hypothesis 1 Let’s introduce the following denotations: $$J[\psi^{(k)}]_{T,t}^{s_{l},\ldots,s_{1}}\stackrel{{\scriptstyle\rm def}}{{=}}% \prod_{p=1}^{l}{\bf 1}_{\{i_{s_{p}}=i_{s_{p}+1}\neq 0\}}\int\limits_{t}^{T}% \psi_{k}(t_{k})\ldots\int\limits_{t}^{t_{s_{l}+3}}\psi_{s_{l}+2}(t_{s_{l}+2})% \int\limits_{t}^{t_{s_{l}+2}}\psi_{s_{l}}(t_{s_{l}+1})\psi_{s_{l}+1}(t_{s_{l}+% 1})\times$$ $$\times\int\limits_{t}^{t_{s_{l}+1}}\psi_{s_{l}-1}(t_{s_{l}-1})\ldots\int% \limits_{t}^{t_{s_{1}+3}}\psi_{s_{1}+2}(t_{s_{1}+2})\int\limits_{t}^{t_{s_{1}+% 2}}\psi_{s_{1}}(t_{s_{1}+1})\psi_{s_{1}+1}(t_{s_{1}+1})\times$$ $$\times\int\limits_{t}^{t_{s_{1}+1}}\psi_{s_{1}-1}(t_{s_{1}-1})\ldots\int% \limits_{t}^{t_{2}}\psi_{1}(t_{1})d{\bf w}_{t_{1}}^{(i_{1})}\ldots d{\bf w}_{t% _{s_{1}-1}}^{(i_{s_{1}-1})}dt_{s_{1}+1}d{\bf w}_{t_{s_{1}+2}}^{(i_{s_{1}+2})}\ldots$$ (23) $$\ldots d{\bf w}_{t_{s_{l}-1}}^{(i_{s_{l}-1})}dt_{s_{l}+1}d{\bf w}_{t_{s_{l}+2}% }^{(i_{s_{l}+2})}\ldots d{\bf w}_{t_{k}}^{(i_{k})},$$ where in (23), (2), (1): $\psi^{(k)}\stackrel{{\scriptstyle\rm def}}{{=}}(\psi_{k},\ldots,\psi_{1}),$ $\psi^{(1)}\stackrel{{\scriptstyle\rm def}}{{=}}\psi_{1},$ (24) $${\rm A}_{k,l}=\left\{(s_{l},\ldots,s_{1}):\ s_{l}>s_{l-1}+1,\ldots,s_{2}>s_{1}% +1;\ s_{l},\ldots,s_{1}=1,\ldots,k-1\right\},$$ $(s_{l},\ldots,s_{1})\in{\rm A}_{k,l};$ $l=1,\ldots,\left[k/2\right];$ $i_{s}=0,\ 1,\ldots,m;$ $s=1,\ldots,k;$ $[x]$ — is an integer part of a number $x;$ ${\bf 1}_{A}$ — is an indicator of the set $A$. Let’s formulate the statement about connection between multiple It$\hat{\rm o}$ and Stratonovich stochastic integrals $J[\psi^{(k)}]_{T,t},$ $J^{*}[\psi^{(k)}]_{T,t}$ of arbitrary multiplicity $k$. Theorem 5 (see [16] - [26]). Suppose that every $\psi_{l}(\tau)(l=1,\ldots,k)$ is a continuously differentiable function at the interval $[t,T]$. 
Then, the following relation between multiple It$\hat{\rm o}$ and Stratonovich stochastic integrals is correct: (25) $$J^{*}[\psi^{(k)}]_{T,t}=J[\psi^{(k)}]_{T,t}+\sum_{r=1}^{\left[k/2\right]}\frac% {1}{2^{r}}\sum_{(s_{r},\ldots,s_{1})\in{\rm A}_{k,r}}J[\psi^{(k)}]_{T,t}^{s_{r% },\ldots,s_{1}}\ \hbox{{\rm w.\ p.\ 1}},$$ where $\sum\limits_{\emptyset}$ is supposed to be equal to zero. According to (13) we have: $${\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$\stackrel{{% \scriptstyle}}{{{}_{p_{1},\ldots,p_{k}\to\infty}}}$}} }\sum_{j_{1}=0}^{p_{1}}% \ldots\sum_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\prod_{g=1}^{k}\zeta_{j_{g}}^% {(i_{g})}=J[\psi^{(k)}]_{T,t}+$$ (26) $${{+\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$\stackrel{{% \scriptstyle}}{{{}_{p_{1},\ldots,p_{k}\to\infty}}}$}} }\sum_{j_{1}=0}^{p_{1}}% \ldots\sum_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\leavevmode\nobreak\ \hbox{% \vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$\stackrel{{\scriptstyle}}{{% {}_{N\to\infty}}}$}} }\sum_{(l_{1},\ldots,l_{k})\in{G}_{k}}\prod_{g=1}^{k}\phi% _{j_{g}}(\tau_{l_{g}})\Delta{\bf w}_{\tau_{l_{g}}}^{(i_{g})}.$$ From (5) and (25) it follows that (27) $${J^{*}[\psi^{(k)}]_{T,t}=\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}% \cr$\stackrel{{\scriptstyle}}{{{}_{p_{1},\ldots,p_{k}\to\infty}}}$}} }\sum_{j_% {1}=0}^{p_{1}}\ldots\sum_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\prod_{g=1}^{k}% \zeta_{j_{g}}^{(i_{g})}$$ if $$\sum_{r=1}^{\left[k/2\right]}\frac{1}{2^{r}}\sum_{(s_{r},\ldots,s_{1})\in{\rm A% }_{k,r}}J[\psi^{(k)}]_{T,t}^{s_{r},\ldots,s_{1}}=$$ $${{=\hbox{\vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$\stackrel{{% \scriptstyle}}{{{}_{p_{1},\ldots,p_{k}\to\infty}}}$}} }\sum_{j_{1}=0}^{p_{1}}% \ldots\sum_{j_{k}=0}^{p_{k}}C_{j_{k}\ldots j_{1}}\leavevmode\nobreak\ \hbox{% \vtop{\offinterlineskip\halign{\cr}{\rm l.i.m.}\cr$\stackrel{{\scriptstyle}}{{% {}_{N\to\infty}}}$}} }\sum_{(l_{1},\ldots,l_{k})\in{G}_{k}}\prod_{g=1}^{k}\phi% 
_{j_{g}}(\tau_{l_{g}})\Delta{\bf w}_{\tau_{l_{g}}}^{(i_{g})}\ \hbox{{\rm w.\ p.\ 1.}}$$ In the case $p_{1}=\ldots=p_{k}=p$ and $\psi_{l}(s)\equiv 1$ $(l=1,\ldots,k)$ from (27) we obtain the statement of Hypothesis 1 (see (4)). In the following sections we consider the theorems proving Hypothesis 1 for the cases $k=2,\ 3,\ 4.$ The case $k=1$ follows directly from Theorem 3 with $k=1$ (see (15)). 6. Expansions of Multiple Stratonovich Stochastic Integrals of Multiplicity 2 As it turned out, approximations of multiple Stratonovich stochastic integrals based on Theorem 3 are essentially simpler than the approximations of multiple It$\hat{\rm o}$ stochastic integrals given by Theorem 3 itself. We begin the consideration with multiplicity $k=2$, since according to (15) the expansions of multiple It$\hat{\rm o}$ and Stratonovich stochastic integrals of multiplicity 1 are identical. The following theorems adapt Theorem 3 to the integrals (2) of multiplicity 2. Theorem 6 (see [23] - [26], [32]). Assume that the following conditions are met: 1. The function $\psi_{2}(\tau)$ is continuously differentiable on the interval $[t,T]$ and the function $\psi_{1}(\tau)$ is two times continuously differentiable on the interval $[t,T]$. 2.
$\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or the system of trigonometric functions in the space $L_{2}([t,T]).$ Then, the multiple Stratonovich stochastic integral of second multiplicity $$J^{*}[\psi^{(2)}]_{T,t}=\int\limits_{t}^{*T}\psi_{2}(t_{2})\int\limits_{t}^{*t_{2}}\psi_{1}(t_{1})d{\bf f}_{t_{1}}^{(i_{1})}d{\bf f}_{t_{2}}^{(i_{2})}\ (i_{1},i_{2}=1,\ldots,m)$$ is expanded into the multiple series $$J^{*}[\psi^{(2)}]_{T,t}=\mathop{{\rm l.i.m.}}\limits_{p_{1},p_{2}\to\infty}\sum_{j_{1}=0}^{p_{1}}\sum_{j_{2}=0}^{p_{2}}C_{j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})},$$ converging in the mean-square sense, where the notation introduced in the formulation of Theorem 3 is retained. In proving Theorem 6 [23] - [26], [32] we used Theorem 3 and double integration by parts. This procedure leads to the condition of twofold continuous differentiability of the function $\psi_{1}(\tau)$ on the interval $[t,T]$. This condition can be weakened. As a result we have the following theorem. Theorem 7 (see [34]). Assume that the following conditions are met: 1. Every function $\psi_{l}(\tau)$ $(l=1,2)$ is continuously differentiable on the interval $[t,T]$. 2.
$\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or the system of trigonometric functions in the space $L_{2}([t,T]).$ Then, the multiple Stratonovich stochastic integral of second multiplicity $$J^{*}[\psi^{(2)}]_{T,t}=\int\limits_{t}^{*T}\psi_{2}(t_{2})\int\limits_{t}^{*t_{2}}\psi_{1}(t_{1})d{\bf f}_{t_{1}}^{(i_{1})}d{\bf f}_{t_{2}}^{(i_{2})}\ (i_{1},i_{2}=1,\ldots,m)$$ is expanded into the multiple series $$J^{*}[\psi^{(2)}]_{T,t}=\mathop{{\rm l.i.m.}}\limits_{p_{1},p_{2}\to\infty}\sum_{j_{1}=0}^{p_{1}}\sum_{j_{2}=0}^{p_{2}}C_{j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})},$$ converging in the mean-square sense, where the notation introduced in the formulation of Theorem 3 is retained. The proof of Theorem 7 [34] is based on Theorem 3 and double Fourier-Legendre series as well as double trigonometric Fourier series on the square $[t,T]^{2}.$ The series are summed by the Pringsheim method. 7. Expansions of Multiple Stratonovich Stochastic Integrals of Multiplicity 3 The following theorems adapt Theorem 3 to the integrals (2) of multiplicity 3. Theorem 8 (see [22] - [26], [27]). Assume that $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or trigonometric functions in the space $L_{2}([t,T])$.
Then, for the multiple Stratonovich stochastic integral of third multiplicity $$\int\limits_{t}^{*T}\int\limits_{t}^{*t_{3}}\int\limits_{t}^{*t_{2}}d{\bf f}_{t_{1}}^{(i_{1})}d{\bf f}_{t_{2}}^{(i_{2})}d{\bf f}_{t_{3}}^{(i_{3})}\ $$ $(i_{1},i_{2},i_{3}=1,\ldots,m)$ the following expansion, converging in the mean-square sense, $$\int\limits_{t}^{*T}\int\limits_{t}^{*t_{3}}\int\limits_{t}^{*t_{2}}d{\bf f}_{t_{1}}^{(i_{1})}d{\bf f}_{t_{2}}^{(i_{2})}d{\bf f}_{t_{3}}^{(i_{3})}=\mathop{{\rm l.i.m.}}\limits_{p_{1},p_{2},p_{3}\to\infty}\sum_{j_{1}=0}^{p_{1}}\sum_{j_{2}=0}^{p_{2}}\sum_{j_{3}=0}^{p_{3}}C_{j_{3}j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{3}}^{(i_{3})}$$ is valid, where $$C_{j_{3}j_{2}j_{1}}=\int\limits_{t}^{T}\phi_{j_{3}}(s)\int\limits_{t}^{s}\phi_{j_{2}}(s_{1})\int\limits_{t}^{s_{1}}\phi_{j_{1}}(s_{2})ds_{2}ds_{1}ds;$$ for the remaining notation see Theorem 1. Let us consider the generalization of Theorem 8 (the case of Legendre polynomials) to binomial weight functions. Theorem 9 (see [22] - [26], [27]). Assume that $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials in the space $L_{2}([t,T])$.
Then, for the multiple Stratonovich stochastic integral of third multiplicity $$I_{l_{1}l_{2}l_{3}\,T,t}^{*(i_{1}i_{2}i_{3})}=\int\limits_{t}^{*T}(t-t_{3})^{l_{3}}\int\limits_{t}^{*t_{3}}(t-t_{2})^{l_{2}}\int\limits_{t}^{*t_{2}}(t-t_{1})^{l_{1}}d{\bf f}_{t_{1}}^{(i_{1})}d{\bf f}_{t_{2}}^{(i_{2})}d{\bf f}_{t_{3}}^{(i_{3})}\ $$ $(i_{1},i_{2},i_{3}=1,\ldots,m)$ the following expansion, converging in the mean-square sense, $$I_{l_{1}l_{2}l_{3}\,T,t}^{*(i_{1}i_{2}i_{3})}=\mathop{{\rm l.i.m.}}\limits_{p_{1},p_{2},p_{3}\to\infty}\sum_{j_{1}=0}^{p_{1}}\sum_{j_{2}=0}^{p_{2}}\sum_{j_{3}=0}^{p_{3}}C_{j_{3}j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{3}}^{(i_{3})}$$ is valid in each of the following cases: 1. $i_{1}\neq i_{2},\ i_{2}\neq i_{3},\ i_{1}\neq i_{3}$ and $l_{1},\ l_{2},\ l_{3}=0,\ 1,\ 2,\ldots;$ 2. $i_{1}=i_{2}\neq i_{3}$ and $l_{1}=l_{2}\neq l_{3}$ and $l_{1},\ l_{2},\ l_{3}=0,\ 1,\ 2,\ldots;$ 3. $i_{1}\neq i_{2}=i_{3}$ and $l_{1}\neq l_{2}=l_{3}$ and $l_{1},\ l_{2},\ l_{3}=0,\ 1,\ 2,\ldots;$ 4. $i_{1},i_{2},i_{3}=1,\ldots,m;$ $l_{1}=l_{2}=l_{3}=l$ and $l=0,\ 1,\ 2,\ldots$, where $$C_{j_{3}j_{2}j_{1}}=\int\limits_{t}^{T}(t-s)^{l_{3}}\phi_{j_{3}}(s)\int\limits_{t}^{s}(t-s_{1})^{l_{2}}\phi_{j_{2}}(s_{1})\int\limits_{t}^{s_{1}}(t-s_{2})^{l_{1}}\phi_{j_{1}}(s_{2})ds_{2}ds_{1}ds;$$ for the remaining notation see Theorem 3. We can admit weight functions $\psi_{l}(s)$ $(l=1,2,3)$ with certain smoothness properties; however, in this case we consider a more specific method of series summation ($p_{1}=p_{2}=p_{3}=p\to\infty$). Theorem 10 (see [23] - [26]). Assume that $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or trigonometric functions in the space $L_{2}([t,T])$ and $\psi_{1}(s),$ $\psi_{2}(s),$ $\psi_{3}(s)$ are continuously differentiable functions on the interval $[t,T]$.
Then, for the multiple Stratonovich stochastic integral of third multiplicity $$J^{*}[\psi^{(3)}]_{T,t}=\int\limits_{t}^{*T}\psi_{3}(t_{3})\int\limits_{t}^{*t_{3}}\psi_{2}(t_{2})\int\limits_{t}^{*t_{2}}\psi_{1}(t_{1})d{\bf f}_{t_{1}}^{(i_{1})}d{\bf f}_{t_{2}}^{(i_{2})}d{\bf f}_{t_{3}}^{(i_{3})}\ $$ $(i_{1},i_{2},i_{3}=1,\ldots,m)$ the following expansion, converging in the mean-square sense, (28) $$J^{*}[\psi^{(3)}]_{T,t}=\mathop{{\rm l.i.m.}}\limits_{p\to\infty}\sum\limits_{j_{1},j_{2},j_{3}=0}^{p}C_{j_{3}j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{3}}^{(i_{3})}$$ is valid in each of the following cases: 1. $i_{1}\neq i_{2},\ i_{2}\neq i_{3},\ i_{1}\neq i_{3};$ 2. $i_{1}=i_{2}\neq i_{3}$ and $\psi_{1}(s)\equiv\psi_{2}(s)$; 3. $i_{1}\neq i_{2}=i_{3}$ and $\psi_{2}(s)\equiv\psi_{3}(s)$; 4. $i_{1},i_{2},i_{3}=1,\ldots,m$ and $\psi_{1}(s)\equiv\psi_{2}(s)\equiv\psi_{3}(s)$; here $$C_{j_{3}j_{2}j_{1}}=\int\limits_{t}^{T}\psi_{3}(s)\phi_{j_{3}}(s)\int\limits_{t}^{s}\psi_{2}(s_{1})\phi_{j_{2}}(s_{1})\int\limits_{t}^{s_{1}}\psi_{1}(s_{2})\phi_{j_{1}}(s_{2})ds_{2}ds_{1}ds,$$ and for the remaining notation see Theorem 3. The restriction to cases 1-4 of Theorem 10 can be removed for two times continuously differentiable functions $\psi_{1}(s)$ and $\psi_{3}(s).$ Theorem 11 (see [24] - [26], [32]). Assume that $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or trigonometric functions in the space $L_{2}([t,T])$, the function $\psi_{2}(s)$ is continuously differentiable on the interval $[t,T]$ and the functions $\psi_{1}(s),\psi_{3}(s)$ are two times continuously differentiable on the interval $[t,T]$.
Then, for the multiple Stratonovich stochastic integral of third multiplicity $$J^{*}[\psi^{(3)}]_{T,t}=\int\limits_{t}^{*T}\psi_{3}(t_{3})\int\limits_{t}^{*t_{3}}\psi_{2}(t_{2})\int\limits_{t}^{*t_{2}}\psi_{1}(t_{1})d{\bf f}_{t_{1}}^{(i_{1})}d{\bf f}_{t_{2}}^{(i_{2})}d{\bf f}_{t_{3}}^{(i_{3})}\ $$ $(i_{1},i_{2},i_{3}=1,\ldots,m)$ the following expansion, converging in the mean-square sense, (29) $$J^{*}[\psi^{(3)}]_{T,t}=\mathop{{\rm l.i.m.}}\limits_{p\to\infty}\sum\limits_{j_{1},j_{2},j_{3}=0}^{p}C_{j_{3}j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{3}}^{(i_{3})}$$ is valid, where $$C_{j_{3}j_{2}j_{1}}=\int\limits_{t}^{T}\psi_{3}(s)\phi_{j_{3}}(s)\int\limits_{t}^{s}\psi_{2}(s_{1})\phi_{j_{2}}(s_{1})\int\limits_{t}^{s_{1}}\psi_{1}(s_{2})\phi_{j_{1}}(s_{2})ds_{2}ds_{1}ds;$$ for the remaining notation see Theorem 3. 8. Expansions of Multiple Stratonovich Stochastic Integrals of Multiplicity 4 The following theorem adapts Theorem 3 to the integrals (2) of multiplicity 4. Theorem 12 (see [23] - [26], [32]). Assume that $\{\phi_{j}(x)\}_{j=0}^{\infty}$ is a complete orthonormal system of Legendre polynomials or trigonometric functions in the space $L_{2}([t,T])$.
Then, for the multiple Stratonovich stochastic integral of 4th multiplicity $$I_{(\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4})T,t}^{*(i_{1}i_{2}i_{3}i_{4})}={\int\limits_{t}^{*}}^{T}{\int\limits_{t}^{*}}^{t_{4}}{\int\limits_{t}^{*}}^{t_{3}}{\int\limits_{t}^{*}}^{t_{2}}d{\bf w}_{t_{1}}^{(i_{1})}d{\bf w}_{t_{2}}^{(i_{2})}d{\bf w}_{t_{3}}^{(i_{3})}d{\bf w}_{t_{4}}^{(i_{4})}$$ $(i_{1},i_{2},i_{3},i_{4}=0,1,\ldots,m)$ the following expansion, converging in the mean-square sense, $$I_{(\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4})T,t}^{*(i_{1}i_{2}i_{3}i_{4})}=\operatorname*{l.i.m.}_{p\to\infty}\sum\limits_{j_{1},j_{2},j_{3},j_{4}=0}^{p}C_{j_{4}j_{3}j_{2}j_{1}}\zeta_{j_{1}}^{(i_{1})}\zeta_{j_{2}}^{(i_{2})}\zeta_{j_{3}}^{(i_{3})}\zeta_{j_{4}}^{(i_{4})}\eqno(30)$$ is valid, where $$C_{j_{4}j_{3}j_{2}j_{1}}=\int\limits_{t}^{T}\phi_{j_{4}}(s)\int\limits_{t}^{s}\phi_{j_{3}}(s_{1})\int\limits_{t}^{s_{1}}\phi_{j_{2}}(s_{2})\int\limits_{t}^{s_{2}}\phi_{j_{1}}(s_{3})ds_{3}ds_{2}ds_{1}ds;$$ ${\bf w}_{\tau}^{(i)}={\bf f}_{\tau}^{(i)}$ $(i=1,\ldots,m)$ are independent standard Wiener processes, and ${\bf w}_{\tau}^{(0)}=\tau$; $\lambda_{l}=0$ if $i_{l}=0$ and $\lambda_{l}=1$ if $i_{l}=1,\ldots,m$. In principle, the analogue of Theorem 12 for multiple Stratonovich stochastic integrals of multiplicity 5 can be proven with the method used to prove Theorem 12 [23] - [26], [32]; however, this is essentially more difficult. References [1] Kloeden P.E., Platen E. Numerical Solution of Stochastic Differential Equations. Springer, Berlin, 1995. 632 pp. [2] Kloeden P.E., Platen E., Schurz H. Numerical Solution of SDE Through Computer Experiments. Springer, Berlin, 2003. [3] Milstein G.N. Numerical Integration of Stochastic Differential Equations. Ural University Press, Sverdlovsk, 1988. 225 pp. [4] Milstein G.N., Tretyakov M.V. Stochastic Numerics for Mathematical Physics. Springer, Berlin, 2004.
616 pp. [5] Kulchitskiy O.Yu., Kuznetsov D.F. The Unified Taylor-It$\hat{\rm o}$ Expansion. Journal of Mathematical Sciences (N. Y.) 2000;99(2):1130-1140. [6] Kuznetsov D.F. New Representations of the Taylor-Stratonovich Expansions. Journal of Mathematical Sciences (N. Y.) 2003;118(6):5586-5596. [7] Kloeden P.E., Platen E., Wright I.W. The approximation of multiple stochastic integrals. Stochastic Analysis and Applications 1992;10(4):431-441. [8] Gilsing H., Shardlow T. SDELab: A package for solving stochastic differential equations in MATLAB. Journal of Computational and Applied Mathematics 2007;205(2):1002-1018. [9] Zahri M. Multidimensional Milstein scheme for solving a stochastic model for prebiotic evolution. Journal of Taibah University for Science 2014;8(2):186-198. [10] Reshniak V., Khaliq A.Q.M., Voss D.A., Zhang G. Split-step Milstein methods for multi-channel stiff stochastic differential systems. Applied Numerical Mathematics 2015;89:1-23. [11] Kiesewetter S., Polkinghorne R., Opanchuk B., Drummond P.D. xSPDE: Extensible software for stochastic equations. SoftwareX 2016;5:12-15. [12] Gevorkyan M.N., Velieva T.R., Korolkova A.V., Kulyabov D.S., Sevastyanov L.A. Stochastic Runge-Kutta software package for stochastic differential equations. arXiv:1606.06604v1 [physics.comp-ph], 2016;1-31. Preprint. [13] Leonhard C., Rosler A. Iterated Stochastic Integrals in Infinite Dimensions - Approximation and Error Estimates. arXiv:1709.06961v1 [math.PR], 2017;1-22. Preprint. [14] Kamrani M., Jamshidi N. Implicit Milstein method for stochastic differential equations via the Wong-Zakai approximation. Numerical Algorithms 2017;1-18. [15] Kuznetsov D.F. The Method of the Expansion and Approximation of Repeated Stochastic Stratonovich Integrals, Based on Multiple Fourier Series in Complete Orthonormal Systems of Functions. [In Russian]. Electronic Journal Differential Equations and Control Processes, no. 1, 1997, 18-77. (Accepted 15.12.1996).
Available at: http://www.math.spbu.ru/diffjournal/pdf/j002.pdf [16] Kuznetsov D.F. Numerical Integration of Stochastic Differential Equations. 2. [In Russian]. Polytechnical University Publishing House: St.-Petersburg, 2006, 764 pp. (DOI: 10.18720/SPBPU/2/s17-227). Available at: http://www.sde-kuznetsov.spb.ru/downloads/kuz_2006.pdf [17] Kuznetsov D.F. Stochastic Differential Equations: Theory and Practice of Numerical Solution. With MatLab Programs. 1st Ed. [In Russian]. Polytechnical University Publishing House: St.-Petersburg, 2007, 778 pp. (DOI: 10.18720/SPBPU/2/s17-228). Available at: http://www.sde-kuznetsov.spb.ru/downloads/1_ed_kuz.pdf [18] Kuznetsov D.F. Stochastic Differential Equations: Theory and Practice of Numerical Solution. With MatLab Programs. 2nd Ed. [In Russian]. Polytechnical University Publishing House: St.-Petersburg, 2007, XXXII+770 pp. (DOI: 10.18720/SPBPU/2/s17-229). Available at: http://www.sde-kuznetsov.spb.ru/downloads/2_ed_kuz.pdf [19] Kuznetsov D.F. Stochastic Differential Equations: Theory and Practice of Numerical Solution. With MatLab Programs. 3rd Ed. [In Russian]. Polytechnical University Publishing House: St.-Petersburg, 2009, XXXIV+768 pp. (DOI: 10.18720/SPBPU/2/s17-230). Available at: http://www.sde-kuznetsov.spb.ru/downloads/3_ed_kuz.pdf [20] Kuznetsov D.F. Stochastic Differential Equations: Theory and Practice of Numerical Solution. With MatLab Programs. 4th Ed. [In Russian]. Polytechnical University Publishing House: St.-Petersburg, 2010, XXX+786 pp. (DOI: 10.18720/SPBPU/2/s17-231). Available at: http://www.sde-kuznetsov.spb.ru/downloads/4_ed_kuz.pdf [21] Kuznetsov D.F. Multiple It$\hat{\rm o}$ and Stratonovich Stochastic Integrals and Multiple Fourier Series. [In Russian]. Electronic Journal Differential Equations and Control Processes, no. 3, 2010, 257 (A.1 - A.257) pp. (DOI: 10.18720/SPBPU/2/z17-7). Available at: http://www.math.spbu.ru/diffjournal/pdf/kuznetsov_book.pdf [22] Kuznetsov D.F.
Strong Approximation of Multiple It$\hat{\rm o}$ and Stratonovich Stochastic Integrals: Multiple Fourier Series Approach. 1st Ed. [In English]. Polytechnical University Publishing House, St.-Petersburg, 2011, 250 pp. (DOI: 10.18720/SPBPU/2/s17-232). Available at: http://www.sde-kuznetsov.spb.ru/downloads/kuz_2011_1_ed.pdf [23] Kuznetsov D.F. Strong Approximation of Multiple It$\hat{\rm o}$ and Stratonovich Stochastic Integrals: Multiple Fourier Series Approach. 2nd Ed. [In English]. Polytechnical University Publishing House, St.-Petersburg, 2011, 284 pp. (DOI: 10.18720/SPBPU/2/s17-233). Available at: http://www.sde-kuznetsov.spb.ru/downloads/kuz_2011_2_ed.pdf [24] Kuznetsov D.F. Multiple It$\hat{\rm o}$ and Stratonovich Stochastic Integrals: Approximations, Properties, Formulas. [In English]. Polytechnical University Publishing House, St.-Petersburg, 2013, 382 pp. (DOI: 10.18720/SPBPU/2/s17-234). Available at: http://www.sde-kuznetsov.spb.ru/downloads/kuz_2013.pdf [25] Kuznetsov D.F. Multiple It$\hat{\rm o}$ and Stratonovich Stochastic Integrals: Fourier-Legendre and Trigonometric Expansions, Approximations, Formulas. [In English]. Electronic Journal Differential Equations and Control Processes, no. 1, 2017, 385 (A.1 - A.385) pp. (DOI: 10.18720/SPBPU/2/z17-3). Available at: http://www.math.spbu.ru/diffjournal/pdf/kuznetsov_book2.pdf [26] Kuznetsov D.F. Stochastic Differential Equations: Theory and Practice of Numerical Solution. With MatLab Programs. 5th Ed. [In Russian]. Electronic Journal Differential Equations and Control Processes, no. 2, 2017, 1000 (A.1 - A.1000) pp. (DOI: 10.18720/SPBPU/2/z17-4). Available at: http://www.math.spbu.ru/diffjournal/pdf/kuznetsov_book3.pdf [27] Kuznetsov D.F. Expansion of Triple Stratonovich Stochastic Integrals, Based on Generalized Multiple Fourier Series, Converging in the Mean: General Case of Series Summation. arXiv:1801.01564 [math.PR]. 2018, 26 pp. [In English]. [28] Kuznetsov D.F.
Exact Calculation of Mean-Square Error of Approximation of Multiple It$\hat{\rm o}$ Stochastic Integrals for the Method, Based on the Multiple Fourier Series. arXiv:1801.01079 [math.PR]. 2018, 19 pp. [In English]. [29] Kuznetsov D.F. Expansion of Multiple Stratonovich Stochastic Integrals of Arbitrary Multiplicity, Based on Generalized Repeated Fourier Series, Converging Pointwise. arXiv:1801.00784 [math.PR]. 2018, 23 pp. [In English]. [30] Kuznetsov D.F. Mean-Square Approximation of Multiple It$\hat{\rm o}$ and Stratonovich Stochastic Integrals from the Taylor-It$\hat{\rm o}$ and Taylor-Stratonovich Expansions, Using Legendre Polynomials. arXiv:1801.00231 [math.PR]. 2017, 26 pp. [In English]. [31] Kuznetsov D.F. Expansion of Multiple It$\hat{\rm o}$ Stochastic Integrals of Arbitrary Multiplicity, Based on Generalized Multiple Fourier Series, Converging in the Mean. arXiv:1712.09746 [math.PR]. 2017, 22 pp. [In English]. [32] Kuznetsov D.F. Expansions of Multiple Stratonovich Stochastic Integrals, Based on Generalized Multiple Fourier Series. arXiv:1712.09516 [math.PR]. 2017, 25 pp. [In English]. [33] Kuznetsov D.F. Application of the Fourier Method to the Mean-Square Approximation of Multiple It$\hat{\rm o}$ and Stratonovich Stochastic Integrals. arXiv:1712.08991 [math.PR]. 2017, 15 pp. [In English]. [34] Kuznetsov D.F. Expansion of Multiple Stratonovich Stochastic Integrals of Multiplicity 2, Based on Double Fourier-Legendre Series, Summarized by Pringsheim Method. arXiv:1801.01962 [math.PR]. 2018, 21 pp. [In Russian].
Hidden geometric correlations in real multiplex networks Kaj-Kolja Kleineberg [email protected]    Marián Boguñá Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona, Spain    M. Ángeles Serrano Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, E-08010 Barcelona, Spain Departament de Física Fonamental, Universitat de Barcelona, Martí i Franquès 1, 08028 Barcelona, Spain    Fragkiskos Papadopoulos [email protected] Department of Electrical Engineering, Computer Engineering and Informatics, Cyprus University of Technology, 33 Saripolou Street, 3036 Limassol, Cyprus Abstract Real networks often form interacting parts of larger and more complex systems. Examples can be found in different domains, ranging from the Internet to structural and functional brain networks. Here, we show that these multiplex systems are not random combinations of single network layers. Instead, they are organized in specific ways dictated by hidden geometric correlations interweaving the layers. We find that these correlations are significant in different real multiplexes, and form a key framework for answering many important questions. Specifically, we show that these geometric correlations facilitate: (i) the definition and detection of multidimensional communities, which are sets of nodes that are simultaneously similar in multiple layers; (ii) accurate trans-layer link prediction, where connections in one layer can be predicted by observing the hidden geometric space of another layer; and (iii) efficient targeted navigation in the multilayer system using only local knowledge, which outperforms navigation in the single layers only if the geometric correlations are sufficiently strong. Our findings uncover fundamental organizing principles behind real multiplexes and can have important applications in diverse domains. 
Real networks are often not isolated entities but instead can be considered constituents of larger systems, called multiplexes or multilayer networks Kivelä et al. (2014); Radicchi and Arenas (2013); Bianconi (2014); Radicchi (2015); De Domenico et al. (2013); Szell et al. (2010); Menichetti et al. (2014); Halu et al. (2014). Examples can be found everywhere. One is the multiplex consisting of the different social networks that a person may belong to. Other examples include the Internet’s IPv4 and IPv6 topologies, or the structural and functional networks in the brain. Understanding the relations among the networks comprising a multiplex is crucial for understanding the behavior of a wide range of real world systems Kleineberg and Boguñá (2014, 2015, 2016); De Domenico et al. (2014); Simas et al. (2015). However, despite the burst of recent research in studying the properties of multiplex networks, e.g., De Domenico et al. (2013); Kivelä et al. (2014); Boccaletti et al. (2014), a universal framework describing the relations among the single networks comprising a multiplex and what implications these relations may have when it comes to applications remains elusive. Here, we show that real multiplexes are not random combinations of single network layers. Instead, we find that their constituent networks exhibit significant hidden geometric correlations. These correlations are called “hidden” as they are not directly observable by looking at each individual network topology. Specifically, each single network can be mapped (or embedded) into a separate hyperbolic space Krioukov et al. (2010); Boguñá et al. (2010); Papadopoulos et al. (2015a, b), where node coordinates abstract the popularity and similarity of nodes Serrano et al. (2008); Papadopoulos et al. (2012). 
We find that node coordinates are significantly correlated across layers of real multiplexes, meaning that distances between nodes in the underlying hyperbolic spaces of the constituent networks are also significantly correlated. The discovered geometric correlations yield a very powerful framework for answering important questions related to real multiplexes. Specifically, we first show that these correlations suggest the existence of multidimensional communities, which are sets of nodes that are similar (close in the underlying space) in multiple layers. Further, we show that strong geometric correlations imply accurate trans-layer link prediction, where connections in one layer can be predicted by knowing the hyperbolic distances among nodes in another layer. This is important for applications where we only know the connections among nodes in one context, e.g., structural connections between brain regions, and we want to predict connections between the same nodes in some other context, e.g., likelihood of functional connections between the same brain regions. Finally, to study the effects of geometric correlations on dynamical processes, we consider targeted navigation. Targeted navigation is a key function of many real networks, where either goods, people, or information is transferred from a source to a destination using the connections of the network. It has been shown that single complex networks, like the IPv4 Internet or the network of airport connections, can be navigated efficiently by performing greedy routing in their underlying geometric spaces Boguñá et al. (2008, 2010); Papadopoulos et al. (2010); Krioukov et al. (2010). In greedy routing, nodes forward incoming messages to their neighbors that are closest to the destination in the geometric space so that they only need local knowledge about the coordinates of their neighbors. 
In multilayer systems, messages can switch between layers, and so a node forwards a message to its neighbor that is closest to the destination in any of the layers comprising the system. We call this process mutual greedy routing (MGR). Mutual greedy routing follows the same line of reasoning as greedy routing in Milgram’s experiment Travers and Milgram (1969), which was indeed performed using multiple domains. For example, in the case of a single network, to reach a lawyer in Boston one might want to forward a message to a judge in Los Angeles (greedy routing). However, in the case of two network layers, it might be known that the lawyer in Boston is also a passionate vintage model train collector. An individual who knows a judge in Los Angeles and the owner of a vintage model train shop in New York would probably choose to forward the message to the latter (MGR). Similarly, air travel networks can be supported by train networks to enhance the possibilities to navigate the physical world, individuals can use different online social networks to increase their outreach, and so on. In this work, we consider the real Internet, which is used to navigate the digital world, and show that mutual greedy routing in the multiplex consisting of the IPv4 and IPv6 Internet topologies outperforms greedy routing in the single IPv4 and IPv6 networks. We also use synthetic model networks to show that geometric correlations improve the navigation of multilayer systems, which outperforms navigation in the single layers if these correlations are sufficiently strong. As we will show, if optimal correlations are present, the fraction of failed deliveries is mitigated superlinearly with the number of layers, suggesting that more layers with the right correlations quickly make multiplex systems almost perfectly navigable.
Geometric organization of real multiplexes

It has been shown that many real (single layer) complex networks have an effective or hidden geometry underneath their observed topologies, which is hyperbolic rather than Euclidean Krioukov et al. (2009, 2010); Boguñá et al. (2010); Papadopoulos et al. (2012); Serrano et al. (2012). In this work, we extend the hidden geometry paradigm to real multiplexes and prove that the coordinates of nodes in the different underlying spaces of layers are correlated.

Geometry of single layer networks

Nodes of real single-layered networks can be mapped to points in the hyperbolic plane, such that each node $i$ has the polar coordinates, or hidden variables, $r_{i},\theta_{i}$. The radial coordinate $r_{i}$ abstracts the node popularity. The smaller the radial coordinate of a node, the more popular the node is, and the more likely it attracts connections. The angular distance between two nodes, $\Delta\theta_{ij}=\pi-|\pi-|\theta_{i}-\theta_{j}||$, abstracts their similarity. The smaller this distance, the more similar two nodes are, and the more likely they are connected. The hyperbolic distance between two nodes, very well approximated by $x_{ij}\approx r_{i}+r_{j}+2\ln{\sin{(\Delta\theta_{ij}/2)}}$ Krioukov et al. (2010), is then a single-metric representation of a combination of the two attractiveness attributes, radial popularity and angular similarity. The smaller the hyperbolic distance between two nodes, the higher is the probability that the nodes are connected, meaning that connections take place by optimizing trade-offs between popularity and similarity Papadopoulos et al. (2012). Techniques based on Maximum Likelihood Estimation for inferring the popularity and similarity node coordinates in a real network have been derived in Boguñá et al. (2010) and recently optimized in Papadopoulos et al. (2015a, b).
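Concretely, the popularity-similarity distance just described can be sketched as follows (the function names are ours; the formula is the approximation quoted in the text, which breaks down only at vanishing angular separations):

```python
import math

def angular_distance(theta_i, theta_j):
    # similarity distance on the circle: pi - |pi - |theta_i - theta_j||
    return math.pi - abs(math.pi - abs(theta_i - theta_j))

def hyperbolic_distance(r_i, theta_i, r_j, theta_j):
    # x_ij ~ r_i + r_j + 2 ln sin(dtheta / 2), the approximation quoted in
    # the text; valid when the angular distance is not too small
    dtheta = angular_distance(theta_i, theta_j)
    return r_i + r_j + 2.0 * math.log(math.sin(dtheta / 2.0))
```

For diametrically opposite nodes ($\Delta\theta_{ij}=\pi$) the logarithmic term vanishes and the distance reduces to $r_i+r_j$, i.e., to pure popularity.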
It has been shown that through the constructed hyperbolic maps one can identify soft communities of nodes, which are groups of nodes located close to each other in the angular similarity space Boguñá et al. (2010); Serrano et al. (2012); Papadopoulos et al. (2012); Zuev et al. (2015); predict missing links with high precision Serrano et al. (2012); Papadopoulos et al. (2015a, b); and facilitate efficient greedy routing in the Internet, which can reach destinations with more than $90$% success rate, following almost shortest network paths Boguñá et al. (2010); Papadopoulos et al. (2015a, b).

Geometry of real multiplexes

We now consider nine different real-world multiplex networks from diverse domains. An overview of the considered datasets is given in Table 1—see Supplementary Information section I for a detailed description. For each multiplex, we map each network layer independently to an underlying hyperbolic space using the HyperMap method Papadopoulos et al. (2015a, b) (see Supplementary Information section II), thus inferring the popularity and similarity coordinates $r,\theta$ of all of its nodes. A visualization of the mapped IPv4 and IPv6 Internet layers is shown in Fig. 1. For each of our real multiplexes, we find that both the radial and the angular coordinates of nodes that exist in different layers are correlated. The radial popularity coordinate of a node $i$ depends on its observed degree $k_{i}$ in the network via $r_{i}\sim\ln{N}-\ln{k_{i}}$, where $N$ is the total number of nodes Boguñá et al. (2010); Papadopoulos et al. (2015a, b). Therefore, radial correlations are equivalent to correlations among node degrees, which have been recently found and studied Nicosia and Latora (2015); Min et al. (2014); Kim and Goh (2013); Gemmetto and Garlaschelli (2015); Ángeles Serrano et al. (2015).
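Since $r_{i}\sim\ln{N}-\ln{k_{i}}$, radial correlations between two layers can be probed directly from the degree sequences of their common nodes. A rough sketch (the helper names are ours, and the ranking does no tie averaging):

```python
import math

def radial_coordinates(degrees):
    """Map observed degrees to radial coordinates via r_i ~ ln N - ln k_i
    (up to an additive constant that does not affect correlations).
    degrees : dict node -> degree k_i."""
    n = len(degrees)
    return {node: math.log(n) - math.log(k) for node, k in degrees.items()}

def _ranks(xs):
    # position of each entry in ascending order; ties broken by index
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for rank, i in enumerate(order):
        ranks[i] = float(rank)
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation, computed as the Pearson correlation of ranks."""
    rx, ry = _ranks(xs), _ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / math.sqrt(var_x * var_y)
```

Applying `spearman` to the radial-coordinate (equivalently, degree) sequences of the nodes common to two layers gives one number summarizing the radial correlation strength.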
Consistent with these findings, radial correlations are present in our real multiplexes and are encoded in the conditional probability $P(r_{2}|r_{1})$ that a node has radial coordinate $r_{2}$ in layer 2 given its radial coordinate $r_{1}$ in layer 1. $P(r_{2}|r_{1})$ for different real multiplexes is shown in Supplementary Fig. 1, where we observe significant radial correlations. The correlation among the angular similarity coordinates, on the other hand, is a fundamentally new result that has important practical implications. Fig. 2 shows the distribution of nodes that have angular coordinates $(\theta_{1},\theta_{2})$ in layers $1$ and $2$ of the real Internet, Drosophila, and arXiv multiplexes. The figure also shows the corresponding distributions in the reshuffled counterparts of the real systems, where we have destroyed the trans-layer coordinate correlations by randomly reshuffling node ids (see Supplementary Information section III). We observe an overabundance of two-dimensional similarity clusters in the real multiplexes. These clusters consist of nodes that are similar, i.e., are located at small angular distances, in both layers of the multiplex. These similarity clusters do not exist in the reshuffled counterparts of the real systems, and are evidence of significant angular correlations. Similar results hold for the rest of the real multiplexes that we consider (see Supplementary Information section IV). In Supplementary Information section X, we quantify both the radial and the angular correlations present in all the considered real multiplexes. The generalization of community definition and detection techniques from single layer networks to multiplex systems has recently gained attention Mucha et al. (2010); Zhu and Li (2014); Loe and Jensen (2015); De Domenico et al. (2015b). The two-dimensional similarity histograms in Fig. 
2 allow one to naturally observe two-dimensional soft communities, defined as sets of nodes that are similar—close in the angular similarity space—in both layers of the multiplex simultaneously. They also provide a measure of distance between the different communities. We note that these communities are called “soft” as they are defined by the geometric proximity of nodes Boguñá et al. (2010); Serrano et al. (2012); Papadopoulos et al. (2012); Zuev et al. (2015) rather than by the network topology as is traditionally the case Fortunato (2010). We also note that we do not develop an automated multidimensional community detection algorithm here. However, our findings could serve as a starting point for the development of such an algorithm (see discussion in Supplementary Information section IV). In Supplementary Information section V, we show how certain geographic regions are associated to two-dimensional communities in the IPv4/IPv6 Internet. The problem of link prediction has been extensively studied in the context of predicting missing and future links in single layer networks Lü and Zhou (2011); Clauset et al. (2008) and its generalization to real-world multilayer systems is recently gaining attention Hristova et al. (2015). In our case, the radial and angular correlations across different layers suggest that by knowing the hyperbolic distance between a pair of nodes in one layer, we can predict the likelihood that the same pair is connected in another layer. Fig. 3 (top) validates that this is indeed the case. The figure shows the empirical trans-layer connection probability, $P(1|2)$ ($P(2|1)$), that two nodes are connected in one of the layers of the multiplex given their hyperbolic distance in the other layer. In Fig. 3 (top), we observe that this probability decreases with the hyperbolic distance between nodes in the considered real multiplexes (see Supplementary Information section IV for the results for the rest of the multiplexes). 
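The empirical trans-layer connection probability used here (binning node pairs by their hyperbolic distance in one layer and counting links in the other, as detailed in the Methods section) can be sketched as follows; the input format is an illustrative assumption:

```python
def trans_layer_connection_probability(dist_1, edges_2, n_bins=20):
    """Empirical P(2|1): probability that a node pair common to both layers
    is linked in layer 2, given the pair's hyperbolic distance in layer 1.
    dist_1  : dict, frozenset({i, j}) -> hyperbolic distance in layer 1
    edges_2 : set of frozenset({i, j}) pairs linked in layer 2
    Returns (bin_center, probability) pairs for the non-empty bins."""
    x_max = max(dist_1.values())
    width = x_max / n_bins
    totals, linked = [0] * n_bins, [0] * n_bins
    for pair, x in dist_1.items():
        b = min(int(x / width), n_bins - 1)  # clamp x == x_max into last bin
        totals[b] += 1
        linked[b] += pair in edges_2
    return [((b + 0.5) * width, linked[b] / totals[b])
            for b in range(n_bins) if totals[b] > 0]
```

A decreasing curve returned by this function is the signature of trans-layer geometric correlations; for the reshuffled counterparts it should be roughly flat.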
By contrast, in their reshuffled counterparts, which do not exhibit geometric correlations, this probability is almost a straight line. Fig. 3 (bottom) shows that the trans-layer connection probability decreases with the angular distance between nodes, which provides an alternative empirical validation of the existence of significant similarity correlations across the layers. In Supplementary Information section VI, we quantify the quality of trans-layer link prediction in all the real multiplexes we consider, and show that our approach outperforms a binary link predictor that is based on edge overlaps.

Modeling geometric correlations and implications to mutual greedy routing

The IPv4 Internet has been found to be navigable Boguñá et al. (2010); Papadopoulos et al. (2015a, b). Specifically, it has been shown that greedy routing (GR) could reach destinations with more than 90% success rate in the constructed hyperbolic maps of the IPv4 topology in 2009. We find a similar efficiency of GR in both the IPv4 and IPv6 topologies considered here, which correspond to January 2015. Specifically, we perform GR in the hyperbolic map of each topology among $10^{5}$ randomly selected source-destination pairs that exist in both topologies. We find that GR reaches destinations with $90\%$ and $92\%$ success rates in IPv4 and IPv6, respectively. Furthermore, we also perform angular GR, which is the same as GR but uses only angular distances. We find that the success rate in this case is almost $60\%$ in both the IPv4 and IPv6 topologies. However, hyperbolic mutual greedy routing (MGR) between the same source-destination pairs, which travels to any neighbor in any layer closer to the destination, increases the success rate to $95\%$, while angular MGR, which uses only the angular distances, increases the success rate along the angular direction to $66\%$. More details about the MGR process can be found in Supplementary Information section XI.
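As a sketch, the MGR forwarding rule (send the message to the neighbor, in any layer, that is hyperbolically closest to the destination; declare failure at a local minimum of the distance) might look like this; the graph and distance interfaces are illustrative assumptions, with the actual distances coming from the layer embeddings:

```python
def mutual_greedy_route(source, target, layers, dist, max_hops=100):
    """Sketch of mutual greedy routing (MGR).
    layers : list of adjacency dicts (node -> set of neighbors), one per layer
    dist   : callable dist(layer, u, v) -> hyperbolic distance of u, v in
             that layer's embedding
    Returns the delivery path, or None if the message gets stuck (no neighbor
    in any layer is strictly closer to the target than the current node)."""
    path, current = [source], source
    while current != target and len(path) <= max_hops:
        # distance of the current node to the target, best over all layers
        d_here = min(dist(l, current, target) for l in range(len(layers)))
        best, best_d = None, d_here
        for l, adj in enumerate(layers):
            for nb in adj.get(current, ()):
                d = dist(l, nb, target)
                if d < best_d:
                    best, best_d = nb, d
        if best is None:
            return None  # local minimum of the distance: delivery failure
        current = best
        path.append(current)
    return path if current == target else None
```

Requiring each hop to be strictly closer than the current node guarantees termination without loops; with a single layer the same code reduces to ordinary GR.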
The observations above raise the following fundamental questions. (i) How do the radial and angular correlations affect the performance of MGR? (ii) Under which conditions does MGR perform better than single-layer GR? (iii) How does the performance of MGR depend on the number of layers in a multilayer system? And (iv), how close to the optimal—in terms of MGR’s performance—are the geometric correlations in the IPv4/IPv6 Internet? Answering these questions requires a framework to construct realistic synthetic topologies (layers) where correlations—both radial and angular—can be tuned without altering the topological characteristics of each individual layer. To this end, we develop the geometric multiplex model (GMM) (see Methods section B). The model generates multiplexes with layer topologies embedded in the hyperbolic plane and where correlations can be tuned independently by varying the model parameters $\nu\in[0,1]$ (radial correlations) and $g\in[0,1]$ (angular correlations). We consider two-, three- and four-layer multiplexes constructed using our framework with different values of the correlation strength parameters $\nu$ and $g$. From Figs. 4A,B and Supplementary Figs. 16–18, we observe that in general both MGR and angular MGR perform better as we increase the correlation strengths $\nu$ and $g$. When both radial and angular correlations are weak, we do not observe any significant benefits from mutual navigation. Indeed, in Figs. 4C-E we observe that in the uncorrelated case ($\nu\to 0$, $g\to 0$) MGR performs almost identically to the single-layer GR, irrespective of the number of layers. This is because when a message reaches a node in one layer after the first iteration of the MGR process, the probability that this node will have a neighbor in another layer closer to the destination is small.
That is, even though a node may have more options (neighbors in other layers) for forwarding a message, these options are basically useless and messages navigate mainly single layers. Increasing the strength of correlations makes MGR more efficient, as the probability to have a neighbor closer to the destination in another layer increases. However, increasing the strength of correlations also increases the edge overlap between the layers (see Supplementary Information section VIII), which reduces the options that a node has for forwarding a message. We observe that very strong radial and angular correlations may not be optimal at low temperatures (cf. Supplementary Figs. 16–18 for $T=0.1$). This is because when the layers have the same nodes and the same average degree and power-law exponent, $\bar{k},\gamma$, then as $\nu\to 1$, $g\to 1$, the coordinates of the nodes in the layers become identical (see Supplementary Information section VII). Further, if the temperature of the layers is low, the connection probability $p(x_{ij})$ in each layer becomes the step function, where two nodes $i,j$ are deterministically connected if their hyperbolic distance is $x_{ij}\leq R\sim\ln{N}$ (see Methods section B). That is, as $T\to 0$, $\nu\to 1$, $g\to 1$, all layers become identical, and MGR degenerates to single-layer GR. We observe (Figs. 4A,B and Supplementary Figs. 16–18) that the best MGR performance is always achieved at high angular correlations, and either high radial correlations if the temperature of the individual layers is high, or low radial correlations if the temperature of the layers is low. The best angular MGR performance is always achieved at high angular and low radial correlations. In other words, MGR performs more efficiently when the layers comprising the multiplex are similar but not “too” similar. High temperatures induce more randomness so that even with maximal geometric correlations the layers are not too similar.
However, at low temperatures, very strong geometric correlations make the layers too similar and the best performance occurs for intermediate correlations. From Figs. 4C-E we observe that, for a fixed number of layers and for optimal correlations, the failure rate ($1-$success rate) is reduced by a constant factor, which is independent of the navigation type (MGR or angular MGR) and the layer temperature. This factor, which we call the failure mitigation factor, is the inverse of the slope of the best-fit lines in Figs. 4C-E. Fig. 4F shows the failure mitigation factor for our two-, three- and four-layer multiplexes for both uncorrelated and optimally correlated coordinates. Remarkably, if optimal correlations are present, the failure mitigation factor grows superlinearly with the number of layers, suggesting that more layers with the right correlations can quickly make multiplex systems almost perfectly navigable. On the contrary, more layers without correlations do not have a significant effect on mutual navigation, which performs virtually identically to single-layer navigation. Finally, we investigate how close to the optimal—in terms of mutual navigation performance—are the radial and angular correlations in the IPv4/IPv6 Internet. To this end, we use an extension of the GMM framework that can account for different layer sizes (see Methods section C) to construct an Internet-like synthetic multiplex. For those nodes that exist in both layers of our multiplex (see Figs. 5A,B), we tune the correlations among their coordinates as before, by varying the parameters $\nu$ and $g$, and perform mutual navigation. Figs. 5C,D show the performance of angular MGR and of MGR, respectively. We observe again that increasing the correlation strengths improves performance. In angular MGR, the success rate is $63\%$ with uncorrelated coordinates, while with optimal correlations it becomes $75\%$.
In MGR, the success rate with uncorrelated coordinates is $85\%$ and with optimal correlations is $91\%$. The star in Figs. 5C,D indicates the performance achieved with the empirical correlation strengths in the IPv4/IPv6 Internet, $\nu_{E}\approx 0.4,g_{E}\approx 0.4$, which are estimated using the inferred radial and angular coordinates of nodes (see Supplementary Information section IX). At $\nu=\nu_{E},g=g_{E}$, the success rate of angular MGR is $71\%$, which is closer to the rate obtained with optimal correlations than to the uncorrelated case. For MGR, the success rate is $88\%$, which lies midway between the uncorrelated and optimally correlated cases.

Outlook

Hidden metric spaces underlying real complex networks provide a fundamental explanation of their observed topologies and, at the same time, offer practical solutions to many of the challenges that the new science of networks is facing nowadays. On the other hand, the multiplex nature of numerous real-world systems has proven to induce emergent behaviors that cannot be observed in single-layer networks. The discovery of hidden metric spaces underlying real multiplexes is thus a natural step forward towards a fundamental understanding of such systems. The task is, however, complex due to the inherent complexity of each single layer and to the fact that, in principle, the metric spaces governing the topologies of the different single networks could be unrelated. Our results here clearly indicate that this is not the case. Our findings open the door for multiplex embedding, in which all the layers of a real multiplex are embedded into hyperbolic spaces simultaneously rather than independently. This would require a reverse engineering of the geometric multiplex model proposed here, which generates multiplexes where radial and angular correlations can be tuned independently by varying its parameters.
Multiplex embeddings can have important applications in diverse domains, ranging from improving information transport and navigation or search in multilayer communication systems and decentralized data architectures Contreras and Reichman (2015); Helbing and Pournaras (2015), to understanding functional and structural brain networks and deciphering their precise relationship(s) Simas et al. (2015), to predicting links among nodes (e.g., terrorists) in a specific network by knowing their connectivity in some other network.

Methods

.1 Computation of trans-layer connection probability

To compute the trans-layer connection probability, we consider all nodes that exist in both layers. In each of the layers, we bin the range of hyperbolic distances between these nodes from zero to the maximum distance into small bins. For each bin we then find all the node pairs located at hyperbolic distances falling within the bin. The percentage of these pairs that are connected by a link in the other layer is the value of the empirical trans-layer connection probability at the bin.

.2 Geometric multiplex model (GMM)

Our framework builds on the (single-layer) network construction procedure prescribed by the Newtonian $\mathbb{S}^{1}$ Serrano et al. (2008) and hyperbolic $\mathbb{H}^{2}$ Krioukov et al. (2010) models. The two models are isomorphic, and here we present the results for the $\mathbb{H}^{2}$ version, although for calculations it is more convenient to use the $\mathbb{S}^{1}$. We recall that to construct a network of size $N$, the $\mathbb{H}^{2}$ model first assigns to each node $i=1,\ldots,N$ its popularity and similarity coordinates $r_{i},\theta_{i}$. Subsequently, it connects each pair of nodes $i,j$ with probability $p(x_{ij})=1/(1+e^{\frac{1}{2T}(x_{ij}-R)})$, where $x_{ij}$ is the hyperbolic distance between the nodes and $R\sim\ln{N}$ (see Supplementary Information section VII A).
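The connection rule of the $\mathbb{H}^{2}$ model can be sketched as follows (a toy instance; the radial density used in `h2_network` is a simplifying assumption for illustration, not the model's exact prescription):

```python
import math
import random

def fermi_dirac(x, R, T):
    """Connection probability p(x) = 1 / (1 + e^{(x - R)/(2T)})."""
    return 1.0 / (1.0 + math.exp((x - R) / (2.0 * T)))

def hyp_dist(r1, th1, r2, th2):
    """Hyperbolic distance between (r1, th1) and (r2, th2) in H^2."""
    dth = math.pi - abs(math.pi - abs(th1 - th2))
    arg = (math.cosh(r1) * math.cosh(r2)
           - math.sinh(r1) * math.sinh(r2) * math.cos(dth))
    return math.acosh(max(arg, 1.0))

def h2_network(N, R, T, seed=0):
    """Toy H^2-model network: random coordinates, Fermi-Dirac links.
    The uniform radial density below is an assumption of this sketch."""
    rng = random.Random(seed)
    coords = [(R * rng.random(), 2 * math.pi * rng.random()) for _ in range(N)]
    edges = {(i, j)
             for i in range(N) for j in range(i + 1, N)
             if rng.random() < fermi_dirac(hyp_dist(*coords[i], *coords[j]), R, T)}
    return coords, edges
```

Note that `fermi_dirac` reproduces the limits discussed in the text: $p(R)=1/2$ at any temperature, and as $T\to 0$ it approaches the step function at $x=R$.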
The connection probability $p(x_{ij})$ is nothing but the Fermi-Dirac distribution. Parameter $T$ is the temperature and controls clustering in the network, i.e., the probability that two neighbors of a node are connected Dorogovtsev (2010). The average clustering $\bar{c}$ is maximized at $T=0$, decreases to zero nearly linearly for $T\in[0,1)$, and is asymptotically zero if $T>1$. As $T\to 0$ the connection probability becomes the step function: $p(x_{ij})\to 1$ if $x_{ij}\leq R$, and $p(x_{ij})\to 0$ if $x_{ij}>R$. It has been shown that the $\mathbb{S}^{1}$ and $\mathbb{H}^{2}$ models can construct synthetic networks that resemble real networks across a wide range of structural characteristics, including power law degree distributions and strong clustering Serrano et al. (2008); Krioukov et al. (2010). Our framework constructs single-layer topologies using these models, and allows for radial and angular coordinate correlations across the different layers. The strength of these correlations can be tuned via the model parameters $\nu\in[0,1]$ and $g\in[0,1]$, without affecting the topological characteristics of the individual layers, which can have different properties and different sizes (see Supplementary Information section VII). The radial correlations increase with parameter $\nu$: at $\nu=0$ there are no radial correlations, while at $\nu=1$ radial correlations are maximized. Similarly, the angular correlations increase with parameter $g$: at $g=0$ there are no angular correlations, while at $g=1$ angular correlations are maximized.

.3 Extended GMM and the IPv4/IPv6 Internet

To construct the Internet-like synthetic multiplex of Fig. 5 we use an extension of the GMM that can construct multiplexes with different layer sizes and where correlations among the coordinates of the common nodes between the layers can be tuned as before via the parameters $\nu$ and $g$ (see Supplementary Information section VII D).
Layer 1 in the Internet-like synthetic multiplex has approximately the same number of nodes as the IPv4 topology, $N_{1}=37563$ nodes, as well as the same power law degree distribution exponent $\gamma_{1}=2.1$, average node degree $\bar{k}_{1}\approx 5$, and average clustering $\bar{c}_{1}\approx 0.63$ ($T_{1}=0.5$). Layer 2 has approximately the same number of nodes as the IPv6 topology, $N_{2}=5163$ nodes, and the same power law exponent $\gamma_{2}=2.1$, average node degree $\bar{k}_{2}\approx 5.2$, and average clustering $\bar{c}_{2}\approx 0.55$ ($T_{2}=0.5$). The IPv4 topology is significantly larger than the IPv6 topology, and there are $4819$ common nodes (Autonomous Systems) in the two topologies. We find that nodes with a higher degree in IPv4 are more likely to also exist in IPv6. Specifically, we find that the empirical probability $\psi(k)$ that a node of degree $k$ in IPv4 also exists in IPv6 can be approximated by $\psi(k)=1/(1+15.4k^{-1.05})$ (see Supplementary Fig. 10). We capture this effect in our synthetic multiplex by first constructing layer 1, and then sampling with the empirical probability $\psi(k)$ the nodes from layer 1 that will also be present in layer 2 (see Supplementary Information section VII D). A visualization illustrating the common nodes in the real Internet and in our synthetic multiplex is given in Figs. 5A,B. We note that the fact that nodes with higher degrees in the larger layer have a higher probability to also exist in the smaller layer has also been observed in several other real multiplexes Nicosia and Latora (2015). However, our model for constructing synthetic multiplexes with different layer sizes is quite general, and allows for any sampling function $\psi(k)$ to be applied (see Supplementary Information section VII D).

References

Kivelä et al. (2014) Mikko Kivelä, Alex Arenas, Marc Barthelemy, James P. Gleeson, Yamir Moreno, and Mason A.
Porter, “Multilayer networks,” Journal of Complex Networks 2, 203–271 (2014). Radicchi and Arenas (2013) Filippo Radicchi and Alex Arenas, “Abrupt transition in the structural formation of interconnected networks,” Nature Physics 9, 717–720 (2013). Bianconi (2014) Ginestra Bianconi, “Multilayer networks: Dangerous liaisons?” Nature Physics 10, 712–714 (2014). Radicchi (2015) Filippo Radicchi, “Percolation in real interdependent networks,” Nature Physics 11, 597–602 (2015). De Domenico et al. (2013) Manlio De Domenico, Albert Solé-Ribalta, Emanuele Cozzo, Mikko Kivelä, Yamir Moreno, Mason A. Porter, Sergio Gómez, and Alex Arenas, “Mathematical formulation of multilayer networks,” Phys. Rev. X 3, 041022 (2013). Szell et al. (2010) Michael Szell, Renaud Lambiotte, and Stefan Thurner, “Multirelational organization of large-scale social networks in an online world,” Proceedings of the National Academy of Sciences 107, 13636–13641 (2010). Menichetti et al. (2014) Giulia Menichetti, Daniel Remondini, and Ginestra Bianconi, “Correlations between weights and overlap in ensembles of weighted multiplex networks,” Phys. Rev. E 90, 062817 (2014). Halu et al. (2014) Arda Halu, Satyam Mukherjee, and Ginestra Bianconi, “Emergence of overlap in ensembles of spatial multiplexes and statistical mechanics of spatial interacting network ensembles,” Phys. Rev. E 89, 012806 (2014). Kleineberg and Boguñá (2014) Kaj-Kolja Kleineberg and Marián Boguñá, “Evolution of the digital society reveals balance between viral and mass media influence,” Phys. Rev. X 4, 031046 (2014). Kleineberg and Boguñá (2015) Kaj-Kolja Kleineberg and Marián Boguñá, “Digital ecology: Coexistence and domination among interacting networks,” Scientific Reports 5, 10268 (2015). Kleineberg and Boguñá (2016) Kaj-Kolja Kleineberg and Marián Boguñá, “Competition between global and local online social networks,” Scientific Reports 6, 25116 (2016). De Domenico et al.
(2014) Manlio De Domenico, Albert Solé-Ribalta, Sergio Gómez, and Alex Arenas, “Navigability of interconnected networks under random failures,” Proceedings of the National Academy of Sciences 111, 8351–8356 (2014). Simas et al. (2015) Tiago Simas, Mario Chavez, Pablo R. Rodriguez, and Albert Diaz-Guilera, “An algebraic topological method for multimodal brain networks comparison,” Frontiers in Psychology 6, 904 (2015). Boccaletti et al. (2014) S. Boccaletti, G. Bianconi, R. Criado, C.I. del Genio, J. Gómez-Gardeñes, M. Romance, I. Sendiña-Nadal, Z. Wang, and M. Zanin, “The structure and dynamics of multilayer networks,” Physics Reports 544, 1–122 (2014). Krioukov et al. (2010) Dmitri Krioukov, Fragkiskos Papadopoulos, Maksim Kitsak, Amin Vahdat, and Marián Boguñá, “Hyperbolic geometry of complex networks,” Phys. Rev. E 82, 036106 (2010). Boguñá et al. (2010) Marián Boguñá, Fragkiskos Papadopoulos, and Dmitri Krioukov, “Sustaining the Internet with hyperbolic mapping,” Nature Communications 1, 62 (2010). Papadopoulos et al. (2015a) Fragkiskos Papadopoulos, Constantinos Psomas, and Dmitri Krioukov, “Network mapping by replaying hyperbolic growth,” IEEE/ACM Transactions on Networking 23, 198–211 (2015a). Papadopoulos et al. (2015b) Fragkiskos Papadopoulos, Rodrigo Aldecoa, and Dmitri Krioukov, “Network geometry inference using common neighbors,” Phys. Rev. E 92, 022807 (2015b). Serrano et al. (2008) M. Ángeles Serrano, Dmitri Krioukov, and Marián Boguñá, “Self-Similarity of Complex Networks and Hidden Metric Spaces,” Phys. Rev. Lett. 100, 078701 (2008). Papadopoulos et al. (2012) Fragkiskos Papadopoulos, Maksim Kitsak, M. Ángeles Serrano, Marián Boguñá, and Dmitri Krioukov, “Popularity versus similarity in growing networks,” Nature 489, 537–540 (2012). Boguñá et al. (2008) Marián Boguñá, Dmitri Krioukov, and K. C. Claffy, “Navigability of complex networks,” Nature Physics 5, 74–80 (2008). Papadopoulos et al.
(2010) Fragkiskos Papadopoulos, Dmitri Krioukov, Marián Boguñá, and Amin Vahdat, “Greedy forwarding in dynamic scale-free networks embedded in hyperbolic metric spaces,” Proceedings of the 29th Conference on Information Communications, INFOCOM’10, 2973–2981 (2010). Travers and Milgram (1969) Jeffrey Travers and Stanley Milgram, “An experimental study of the small world problem,” Sociometry 32, 425–443 (1969). (2015) “The IPv4 and IPv6 Topology Datasets,” (2015), http://www.caida.org/data/active/ipv4_routed_topology_aslinks_dataset.xml and https://www.caida.org/data/active/ipv6_allpref_topology_dataset.xml. Krioukov et al. (2009) Dmitri Krioukov, Fragkiskos Papadopoulos, Amin Vahdat, and Marián Boguñá, “Curvature and temperature of complex networks,” Phys. Rev. E 80, 035101 (2009). Serrano et al. (2012) M. Ángeles Serrano, Marián Boguñá, and Francesc Sagués, “Uncovering the hidden geometry behind metabolic networks,” Mol. BioSyst. 8, 843–850 (2012). Zuev et al. (2015) Konstantin Zuev, Marián Boguñá, Ginestra Bianconi, and Dmitri Krioukov, “Emergence of Soft Communities from Geometric Preferential Attachment,” Scientific Reports 5, 9421 (2015). bio (2006) “Biogrid: a general repository for interaction datasets,” Nucleic Acids Research 34, D535–D539 (2006). De Domenico et al. (2015) M. De Domenico, V. Nicosia, A. Arenas, and V. Latora, “Structural reducibility of multilayer networks,” Nature Communications 6, 6864 (2015). Chen et al. (2006) Beth L. Chen, David H. Hall, and Dmitri B. Chklovskii, “Wiring optimization can relate neuronal structure and function,” Proceedings of the National Academy of Sciences of the United States of America 103, 4723–4728 (2006). De Domenico et al. (2015a) Manlio De Domenico, Mason A. Porter, and Alex Arenas, “Muxviz: a tool for multilayer analysis and visualization of networks,” Journal of Complex Networks 3, 159–176 (2015a). De Domenico et al.
(2015b) Manlio De Domenico, Andrea Lancichinetti, Alex Arenas,  and Martin Rosvall, “Identifying modular flows on multilayer networks reveals highly overlapping organization in interconnected systems,” Phys. Rev. X 5, 011027 (2015b). Coleman et al. (1957) J. Coleman, E. Katz,  and H. Menzel, “The diffusion of an Innovation among Physicians,” Sociometry 20, 253–270 (1957). Nicosia and Latora (2015) Vincenzo Nicosia and Vito Latora, “Measuring and modeling correlations in multiplex networks,” Phys. Rev. E 92, 032805 (2015). Min et al. (2014) Byungjoon Min, Su Do Yi, Kyu-Min Lee,  and K.-I. Goh, “Network robustness of multiplex networks with interlayer degree correlations,” Phys. Rev. E 89, 042811 (2014). Kim and Goh (2013) Jung Yeol Kim and K.-I. Goh, “Coevolution and correlated multiplexity in multiplex networks,” Phys. Rev. Lett. 111, 058702 (2013). Gemmetto and Garlaschelli (2015) V. Gemmetto and D. Garlaschelli, “Multiplexity versus correlation: the role of local constraints in real multiplexes,” Scientific Reports 5, 9120 (2015). Ángeles Serrano et al. (2015) M. Ángeles Serrano, Ľuboš Buzna,  and Marián Boguñá, “Escaping the avalanche collapse in self-similar multiplexes,” New Journal of Physics 17, 053033 (2015). Mucha et al. (2010) Peter J. Mucha, Thomas Richardson, Kevin Macon, Mason A. Porter,  and Jukka-Pekka Onnela, “Community Structure in Time-Dependent, Multiscale, and Multiplex Networks,” Science 328, 876–878 (2010). Zhu and Li (2014) Guangyao Zhu and Kan Li, “A unified model for community detection of multiplex networks,” Web Information Systems Engineering – WISE 2014,  Lecture Notes in Computer Science, 8786, 31–46 (2014). Loe and Jensen (2015) Chuan Wen Loe and Henrik Jeldtoft Jensen, “Comparison of communities detection algorithms for multiplex,” Physica A: Statistical Mechanics and its Applications 431, 29–45 (2015). Fortunato (2010) Santo Fortunato, “Community detection in graphs,” Physics Reports 486, 75–174 (2010). 
Lü and Zhou (2011) Linyuan Lü and Tao Zhou, “Link prediction in complex networks: A survey,” Physica A: Statistical Mechanics and its Applications 390, 1150–1170 (2011). Clauset et al. (2008) Aaron Clauset, Cristopher Moore, and M. E. J. Newman, “Hierarchical structure and the prediction of missing links in networks,” Nature 453, 98–101 (2008). Hristova et al. (2015) Desislava Hristova, Anastasios Noulas, Chloë Brown, Mirco Musolesi, and Cecilia Mascolo, “A Multilayer Approach to Multiplexity and Link Prediction in Online Geo-Social Networks,” Preprint at http://arxiv.org/abs/1508.07876 (2015). Contreras and Reichman (2015) Jorge L. Contreras and Jerome H. Reichman, “Sharing by design: Data and decentralized commons,” Science 350, 1312–1314 (2015). Helbing and Pournaras (2015) Dirk Helbing and Evangelos Pournaras, “Society: Build digital democracy,” Nature 527, 33–34 (2015). Dorogovtsev (2010) S. N. Dorogovtsev, Lectures on Complex Networks (Oxford University Press, Oxford, 2010).

Acknowledgements

This work was supported by: the European Commission through the Marie Curie ITN “iSocial” grant no. PITN-GA-2012-316808; a James S. McDonnell Foundation Scholar Award in Complex Systems; the ICREA Academia prize, funded by the Generalitat de Catalunya; the MINECO project no. FIS2013-47282-C2-1-P; and the Generalitat de Catalunya grant no. 2014SGR608. Furthermore, M. B. and M. A. S. acknowledge support from the European Commission FET-Proactive Project MULTIPLEX no. 317532.
(26 March 2014; 29 June 2014)

Abstract

Recently, there has been an increasing interest in the bottom-up evaluation of the semantics of logic programs with complex terms. The presence of function symbols in the program may render the ground instantiation infinite, and finiteness of models and termination of the evaluation procedure are no longer guaranteed in the general case. Since the program termination problem is undecidable in the general case, several decidable criteria (called program termination criteria) have been recently proposed. However, current conditions are not able to identify even simple programs whose bottom-up execution always terminates. This paper introduces new decidable criteria for checking termination of logic programs with function symbols under bottom-up evaluation, obtained by deeply analyzing the program structure. First, we analyze the propagation of complex terms among arguments by means of an extended version of the argument graph, called the propagation graph. The resulting criterion, called $\Gamma$-acyclicity, generalizes most of the decidable criteria proposed so far. Next, we study how rules may activate each other and define a more powerful criterion, called safety. This criterion uses the so-called safety function, which analyzes how rules may activate each other and how the presence of some arguments in a rule limits its activation. We also study the application of the proposed criteria to bound queries and show that the safety criterion is well suited to identify relevant classes of programs and bound queries. Finally, we propose a hierarchy of classes of terminating programs, called $k$-safety, where the $k$-safe class strictly includes the $(k{-}1)$-safe class.

keywords: Logic programming with function symbols, bottom-up execution, program termination, stable models.
Checking Termination of Bottom-Up Evaluation of Logic Programs with Function Symbols (This work refines and extends results from the conference paper [Greco et al. (2012)].)

MARCO CALAUTTI, SERGIO GRECO, FRANCESCA SPEZZANO and IRINA TRUBITSYNA
DIMES, Università della Calabria, 87036 Rende (CS), Italy
E-mail: {calautti,greco,fspezzano,trubitsyna}@dimes.unical.it
Submitted 7 December 2012. Note: To appear in Theory and Practice of Logic Programming (TPLP).

1 Introduction

Recently, there has been an increasing interest in the bottom-up evaluation of the semantics of logic programs with complex terms. Although logic languages under the stable model semantics have enough expressive power to express problems in the second level of the polynomial hierarchy, in some cases function symbols make languages compact and more understandable. For instance, several problems can be naturally expressed using list and set constructors, and arithmetic operators. The presence of function symbols in the program may render the ground instantiation infinite, and finiteness of models and termination of the evaluation procedure are no longer guaranteed in the general case. Since the program termination problem is undecidable in the general case, several decidable sufficient conditions (called program termination criteria) have been recently proposed. The program termination problem has received significant attention since the beginning of logic programming and deductive databases [Krishnamurthy et al. (1996)] and has recently attracted renewed interest. A considerable body of work has been done on termination of logic programs under top-down evaluation [Schreye and Decorte (1994), Marchiori (1996), Ohlebusch (2001), Codish et al. (2005), Serebrenik and De Schreye (2005), Nguyen et al. (2007), Bruynooghe et al. (2007), Nishida and Vidal (2010), Schneider-Kamp et al.
(2009a), Schneider-Kamp et al. (2009b), Schneider-Kamp et al. (2010), Ströder et al. (2010), Voets and Schreye (2010), Brockschmidt et al. (2012), Liang and Kifer (2013), Bonatti (2004), Baselice et al. (2009)]. In this context, the class of finitary programs, allowing decidable (ground) query computation using a top-down evaluation, has been proposed in [Bonatti (2004), Baselice et al. (2009)]. Moreover, there are other research areas, such as those of term rewriting [Zantema (1995), Sternagel and Middeldorp (2008), Arts and Giesl (2000), Endrullis et al. (2008), Ferreira and Zantema (1996)] and chase termination [Fagin et al. (2005), Meier et al. (2009), Marnette (2009), Greco et al. (2011), Greco and Spezzano (2010)], whose results can be of interest to the logic program termination context. In this paper, we consider logic programs with function symbols under the stable model semantics [Gelfond and Lifschitz (1988), Gelfond and Lifschitz (1991)], and thus all the excellent works mentioned above cannot be straightforwardly applied to our setting. Indeed, the goal of top-down termination analysis is to detect, for a given program and query goal, sufficient conditions guaranteeing that the resolution algorithm terminates. On the other hand, the aim of bottom-up termination analysis is to guarantee the existence of an equivalent finite ground instantiation of the input program. Furthermore, as stated in [Schreye and Decorte (1994)], even restricting our attention to the top-down approach, the termination of logic programs strictly depends on the selection and search rules used in the resolution algorithm. For the different aspects of term rewriting and termination of logic programs, we refer readers to [Schreye and Decorte (1994)] (pages 204-207). In this framework, the class of finitely ground programs (${\cal FG}$) has been proposed in [Calimeri et al. (2008)].
The key property of this class is that stable models (answer sets) are computable, as for each program ${\cal P}$ in this class there exists a finite and computable subset of its instantiation (grounding), called intelligent instantiation, having precisely the same answer sets as ${\cal P}$. Since the problem of deciding whether a program is in ${\cal FG}$ is not decidable, decidable subclasses, such as finite domain programs [Calimeri et al. (2008)], $\omega$-restricted programs [Syrjänen (2001)], $\lambda$-restricted programs [Gebser et al. (2007b)], and the most general one, argument-restricted programs [Lierler and Lifschitz (2009)], have been proposed. Current techniques analyze how values are propagated among predicate arguments to detect whether a given argument is limited, i.e., whether the set of values which can be associated with the argument, also called its active domain, is finite. However, these methods have limited ability to detect that arguments are limited when different function symbols appear in the recursive rules. Even the argument-restricted criterion, one of the most general criteria, fails in such cases. Thus, we propose a new technique, called $\Gamma$-acyclicity, whose aim is to improve on the argument-restricted criterion without changing its (polynomial) time complexity. This technique makes use of the so-called propagation graph, which represents the propagation of values among arguments and the construction of complex terms during the program evaluation. Furthermore, since many practical programs are not recognized by current termination criteria, including the $\Gamma$-acyclicity criterion, we propose an even more general technique, called safety, which also analyzes how rules activate each other. The new technique allows us to recognize as terminating many classical programs, while still guaranteeing polynomial time complexity.
Example 1 Consider the following program $P_{1}$ computing the length of a list stored in a fact of the form $\tt input(L)$: $$\begin{array}[]{l}\tt r_{0}:\ list(L)\leftarrow input(L).\\ \tt r_{1}:\ list(L)\leftarrow list([X|L]).\\ \tt r_{2}:\ count([\,],0).\\ \tt r_{3}:\ count([X|L],I+1)\leftarrow list([X|L]),\ count(L,I).\end{array}$$ where $\tt input$ is a base predicate defined by only one fact of the form $\tt input([a,b,...])$.  $\Box$ The safety technique proposed in this paper allows us to understand that $P_{1}$ is finitely ground and, therefore, terminating under bottom-up evaluation.

Contribution.

• We first refine the method proposed in [Lierler and Lifschitz (2009)] by introducing the set of restricted arguments, and we show that the complexity of finding such arguments is polynomial in the size of the given program.
• We then introduce the class of $\Gamma$-acyclic programs, which strictly extends the class of argument-restricted programs. Its definition is based on a particular graph, called the propagation graph, representing how complex terms in non-restricted arguments are created and used during the bottom-up evaluation. We also show that the complexity of checking whether a program is $\Gamma$-acyclic is polynomial in the size of the given program.
• Next we introduce the safety function, whose iterative application, starting from the set of $\Gamma$-acyclic arguments, allows us to derive a larger set of limited arguments by analyzing how rules may be activated. In particular, we define the activation graph, which represents how rules may activate each other, and design conditions detecting rules whose activation cannot cause their head arguments to be non-limited.
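To see concretely why bottom-up evaluation terminates on this example, the fixpoint computation can be mimicked in a few lines (lists encoded as Python tuples; an illustration of naive evaluation, not the machinery of the paper):

```python
def bottom_up(input_list):
    """Naive bottom-up evaluation of the example program: r0/r1 derive
    all suffixes of the input list; r2/r3 count them back up."""
    lists = {tuple(input_list)}                 # r0: list(L) <- input(L)
    changed = True
    while changed:                              # r1: list(L) <- list([X|L])
        changed = False
        for l in list(lists):
            if l and l[1:] not in lists:
                lists.add(l[1:])
                changed = True
    counts = {((), 0)}                          # r2: count([], 0)
    changed = True
    while changed:                              # r3: count([X|L], I+1) <- list([X|L]), count(L, I)
        changed = False
        for (l, i) in list(counts):
            for full in lists:
                if full and full[1:] == l and (full, i + 1) not in counts:
                    counts.add((full, i + 1))
                    changed = True
    return counts
```

On input `['a', 'b', 'c']` the computed count facts are exactly count([], 0), count([c], 1), count([b,c], 2) and count([a,b,c], 3): only finitely many ground atoms are ever produced, which is the property the safety criterion certifies statically.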
• Since the new criteria are defined for normal logic programs without negation, we extend their application to the case of disjunctive logic programs with negative literals and show that the computation of stable models can be performed using current ASP systems, by a simple rewriting of the source program.
• We propose the application of the new criteria to bound queries and show that the safety criterion is well suited to identify relevant classes of programs and bound queries.
• As a further improvement, we introduce the notion of active paths of length $k$ and show its applicability in the termination analysis. In particular, we generalize the safety criterion and show that the $k$-safety criteria define a hierarchy of termination criteria for logic programs with function symbols.
• Complexity results for the proposed techniques are also presented. More specifically, we show that the complexity of deciding whether a program ${\cal P}$ is $\Gamma$-acyclic or safe is polynomial in the size of ${\cal P}$, whereas the complexity of deciding whether a program is $k$-safe, with $k>1$, is exponential.

A preliminary version of this paper was presented at the 28th International Conference on Logic Programming [Greco et al. (2012)]. Although the concepts of $\Gamma$-acyclic program and safe program were introduced in the conference paper, the definitions contained in the current version are different. Moreover, most of the theoretical results and all complexity results contained in this paper, as well as the definition of $k$-safe program, are new.

Organization. The paper is organized as follows. Section 2 introduces basic notions on logic programming with function symbols. Section 3 presents the argument-restriction criterion. In Section 4 the propagation of complex terms among arguments is investigated and the class of $\Gamma$-acyclic programs is defined. Section 5 analyzes how rules activate each other and introduces the safety criterion.
In Section 6 the applicability of the safety criterion to (partially) ground queries is discussed. Section 7 presents further improvements extending the safety criterion. Finally, in Section 8 the application of termination criteria to general disjunctive programs with negated literals is presented.

2 Logic Programs with Function Symbols

Syntax. We assume infinite sets of constants, variables, predicate symbols, and function symbols. Each predicate and function symbol $g$ is associated with a fixed arity, denoted by $ar(g)$, which is a non-negative integer for predicate symbols and a natural number for function symbols. A term is either a constant, a variable, or an expression of the form $f(t_{1},\dots,t_{m})$, where $f$ is a function symbol of arity $m$ and the $t_{i}$'s are terms. In the first two cases we say the term is simple, while in the last case we say it is complex. The binary relation subterm over terms is recursively defined as follows: every term is a subterm of itself; if $t$ is a complex term of the form $f(t_{1},\dots,t_{m})$, then every $t_{i}$ is a subterm of $t$ for $1\leq i\leq m$; if $t_{1}$ is a subterm of $t_{2}$ and $t_{2}$ is a subterm of $t_{3}$, then $t_{1}$ is a subterm of $t_{3}$. The depth $d(u,t)$ of a simple term $u$ in a term $t$ that contains $u$ is recursively defined as follows: $$\begin{array}[]{l}d(u,u)=0,\\ d(u,f(t_{1},\dots,t_{m}))=1+\max\limits_{i\,:\,t_{i}\ \mathrm{contains}\ u}d(u,t_{i}).\end{array}$$ The depth of a term $t$, denoted by $d(t)$, is the maximal depth of all simple terms occurring in $t$. An atom is of the form $p(t_{1},\dots,t_{n})$, where $p$ is a predicate symbol of arity $n$ and the $t_{i}$'s are terms (we also say that the atom is a $p$-atom). A literal is either an atom $A$ (positive literal) or its negation $\neg A$ (negative literal).
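The subterm relation and the depth function above translate directly into code (terms encoded as nested tuples `('f', t1, ..., tm)` with simple terms as plain strings, a convention of ours):

```python
def depth(t):
    """Depth d(t) of a term: 0 for a simple term (a plain string here),
    1 + maximal argument depth for a complex term ('f', t1, ..., tm)."""
    if isinstance(t, str):            # simple term (constant or variable)
        return 0
    return 1 + max(depth(arg) for arg in t[1:])

def subterms(t):
    """All subterms of t: the reflexive-transitive closure of the
    direct-subterm relation defined in the text."""
    yield t
    if not isinstance(t, str):
        for arg in t[1:]:
            yield from subterms(arg)
```

For instance, the term $f(g(X),Y)$ has depth 2, and its subterms are itself, $g(X)$, $X$ and $Y$.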
A rule $r$ is of the form: $A_{1}\vee\dots\vee A_{m}\leftarrow B_{1},\dots,B_{k},\neg C_{1},\dots,\neg C_{n}$ where $m>0$, $k\geq 0$, $n\geq 0$, and $A_{1},\dots,A_{m},B_{1},\dots,B_{k},C_{1},\dots,C_{n}$ are atoms. The disjunction $A_{1}\vee\dots\vee A_{m}$ is called the head of $r$ and is denoted by $head(r)$; the conjunction $B_{1},\dots,B_{k},\neg C_{1},\dots,\neg C_{n}$ is called the body of $r$ and is denoted by $body(r)$. The positive (resp. negative) body of $r$ is the conjunction $B_{1},\dots,B_{k}$ (resp. $\neg C_{1},\dots,\neg C_{n}$) and is denoted by $body^{+}(r)$ (resp. $body^{-}(r)$). With a slight abuse of notation we use $head(r)$ (resp. $body(r)$, $body^{+}(r)$, $body^{-}(r)$) to also denote the set of atoms (resp. literals) appearing in the head (resp. body, positive body, negative body) of $r$. If $m=1$, then $r$ is normal; if $n=0$, then $r$ is positive. If a rule $r$ is both normal and positive, then it is standard. A program is a finite set of rules. A program is normal (resp. positive, standard) if every rule in it is normal (resp. positive, standard). A term (resp. an atom, a literal, a rule, a program) is said to be ground if no variables occur in it. A ground normal rule with an empty body is also called a fact. For any atom $A$ (resp. set of atoms, rule), $var(A)$ denotes the set of variables occurring in $A$. We assume that programs are range restricted, i.e., for each rule, the variables appearing in the head or in negative body literals also appear in some positive body literal. The definition of a predicate symbol $p$ in a program ${\cal P}$ consists of all rules in ${\cal P}$ with $p$ in the head.
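For concreteness, the rule anatomy above can be mirrored in a small data model (our own representation, not the paper's; groundness of facts is not checked in this sketch):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    head: tuple          # atoms A1, ..., Am (m > 0)
    pos: tuple = ()      # positive body atoms B1, ..., Bk
    neg: tuple = ()      # negated body atoms C1, ..., Cn

    @property
    def is_normal(self):     # m = 1
        return len(self.head) == 1

    @property
    def is_positive(self):   # n = 0
        return len(self.neg) == 0

    @property
    def is_standard(self):   # normal and positive
        return self.is_normal and self.is_positive

    @property
    def is_fact(self):       # normal rule with empty body
        return self.is_normal and not self.pos and not self.neg
```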
Predicate symbols are partitioned into two different classes: base predicate symbols, whose definition can contain only facts (called database facts), and derived predicate symbols, whose definition can contain any rule. Database facts are not shown in our examples as they are not relevant for the proposed criteria. Given a program ${\cal P}$, a predicate $p$ depends on a predicate $q$ if there is a rule $r$ in ${\cal P}$ such that $p$ appears in the head and $q$ in the body, or there is a predicate $s$ such that $p$ depends on $s$ and $s$ depends on $q$. A predicate $p$ is said to be recursive if it depends on itself, whereas two predicates $p$ and $q$ are said to be mutually recursive if $p$ depends on $q$ and $q$ depends on $p$. A rule $r$ is said to be recursive if its body contains a predicate symbol mutually recursive with a predicate symbol in the head. Given a rule $r$, $rbody(r)$ denotes the set of body atoms whose predicate symbols are mutually recursive with the predicate symbol of an atom in the head. We say that $r$ is linear if $|rbody(r)|\leq 1$. We say that a recursive rule $r$ defining a predicate $p$ is strongly linear if it is linear, the recursive predicate symbol appearing in the body is $p$ and there are no other recursive rules defining $p$. A predicate symbol $p$ is said to be linear (resp. strongly linear) if all recursive rules defining $p$ are linear (resp. strongly linear). A substitution is a finite set of pairs $\theta=\{X_{1}/t_{1},\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},X_{n}/t_{n}\}$ where $t_{1},\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},t_{n}$ are terms and $X_{1},\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},X_{n}$ are distinct variables not occurring in $t_{1},\dots,t_{n}$. 
If $\theta=\{X_{1}/t_{1},\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},X_{n}/t_{n}\}$ is a substitution and $T$ is a term or an atom, then $T\theta$ is the term or atom obtained from $T$ by simultaneously replacing each occurrence of $X_{i}$ in $T$ by $t_{i}$ ($1\leq i\leq n$) — $T\theta$ is called an instance of $T$. Given a set $S$ of terms (or atoms), then $S\theta=\{T\theta\mid T\in S\}$. A substitution $\theta$ is a unifier for a finite set of terms (or atoms) $S$ if $S\theta$ is a singleton. We say that a set of terms (or atoms) $S$ unify if there exists a unifier $\theta$ for $S$. Given two substitutions $\theta=\{X_{1}/t_{1},\dots,X_{n}/t_{n}\}$ and $\vartheta=\{Y_{1}/u_{1},\dots,Y_{m}/u_{m}\}$, their composition, denoted $\theta\circ\vartheta$, is the substitution obtained from the set $\{X_{1}/t_{1}\vartheta,\dots,X_{n}/t_{n}\vartheta,$ $Y_{1}/u_{1},\dots,Y_{m}/u_{m}\}$ by removing every $X_{i}/t_{i}\vartheta$ such that $X_{i}=t_{i}\vartheta$ and every $Y_{j}/u_{j}$ such that $Y_{j}\in\{X_{1},\dots,X_{n}\}$. A substitution $\theta$ is more general than a substitution $\vartheta$ if there exists a substitution $\eta$ such that $\vartheta=\theta\circ\eta$. A unifier $\theta$ for a set $S$ of terms (or atoms) is called a most general unifier (mgu) for $S$ if it is more general than any other unifier for $S$. The mgu is unique modulo renaming of variables. Semantics. Let ${\cal P}$ be a program. The Herbrand universe $H_{\cal P}$ of ${\cal P}$ is the possibly infinite set of ground terms which can be built using constants and function symbols appearing in ${\cal P}$. The Herbrand base $B_{\cal P}$ of ${\cal P}$ is the set of ground atoms which can be built using predicate symbols appearing in ${\cal P}$ and ground terms of $H_{\cal P}$. A rule $r^{\prime}$ is a ground instance of a rule $r$ in ${\cal P}$ if $r^{\prime}$ can be obtained from $r$ by substituting every variable in $r$ with some ground term in $H_{\mathcal{P}}$. 
We use $ground(r)$ to denote the set of all ground instances of $r$ and $ground({\cal P})$ to denote the set of all ground instances of the rules in ${\cal P}$, i.e., $ground({\cal P})=\cup_{r\in{\cal P}}ground(r)$. An interpretation of ${\cal P}$ is any subset $I$ of $B_{\cal P}$. The truth value of a ground atom $A$ w$.$r$.$t$.$ $I$, denoted $value_{I}(A)$, is true if $A\in I$, false otherwise. The truth value of $\neg A$ w$.$r$.$t$.$ $I$, denoted $value_{I}(\neg A)$, is true if $A\not\in I$, false otherwise. The truth value of a conjunction of ground literals $C=L_{1},\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},L_{n}$ w$.$r$.$t$.$ $I$ is $value_{I}(C)=\min(\{value_{I}(L_{i})\ |\ 1\leq i\leq n\})$—here the ordering false $<$ true holds—whereas the truth value of a disjunction of ground literals $D=L_{1}\vee\mbox{\rm.}\mbox{\rm.}\mbox{\rm.}\vee L_{n}$ w$.$r$.$t$.$ $I$ is $value_{I}(D)\!=\!\max(\{value_{I}(L_{i})\ |\ 1\leq i\leq n\})$; if $n=0$, then $value_{I}(C)\!=\!$ true and $value_{I}(D)\!=$ false. A ground rule $r$ is satisfied by $I$, denoted $I\models r$, if $value_{I}(head(r))\geq value_{I}(body(r))$; we write $I\not\models r$ if $r$ is not satisfied by $I$. Thus, a ground rule $r$ with empty body is satisfied by $I$ if $value_{I}(head(r))\!=$ true. An interpretation of ${\cal P}$ is a model of ${\cal P}$ if it satisfies every ground rule in $ground({\cal P})$. A model $M$ of ${\cal P}$ is minimal if no proper subset of $M$ is a model of ${\cal P}$. The set of minimal models of ${\cal P}$ is denoted by ${\cal MM}({\cal P})$. Given an interpretation $I$ of ${\cal P}$, let ${\cal P}^{I}$ denote the ground positive program derived from $ground({\cal P})$ by (i) removing every rule containing a negative literal $\neg A$ in the body with $A\in I$, and (ii) removing all negative literals from the remaining rules. An interpretation $I$ is a stable model of ${\cal P}$ if and only if $I\in{\cal MM}({\cal P}^{I})$ [Gelfond and Lifschitz (1988), Gelfond and Lifschitz (1991)]. 
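The reduct construction and the stable-model test just defined can be illustrated for small ground programs. This brute-force sketch assumes ground atoms encoded as strings and a rule as a triple (head atoms, positive body atoms, negative body atoms); it is an illustration of the definitions, not an efficient solver.

```python
from itertools import combinations

# Illustrative sketch for small ground programs: an atom is a string, a rule
# is a triple (head_atoms, positive_body_atoms, negative_body_atoms).

def reduct(rules, I):
    """The program P^I of Gelfond and Lifschitz."""
    out = []
    for head, pos, neg in rules:
        if any(a in I for a in neg):      # (i) drop rules with ¬A and A ∈ I
            continue
        out.append((head, pos, ()))       # (ii) drop remaining negative literals
    return out

def is_model(rules, I):
    """I satisfies every rule (rules assumed positive, as in a reduct)."""
    return all(any(a in I for a in head) or not all(b in I for b in pos)
               for head, pos, _ in rules)

def minimal_models(rules, atoms):
    """MM(P) by exhaustive enumeration of interpretations."""
    models = [set(s) for r in range(len(atoms) + 1)
              for s in combinations(atoms, r) if is_model(rules, set(s))]
    return [M for M in models if not any(N < M for N in models)]

def is_stable(rules, I, atoms):
    """I is stable iff I is a minimal model of the reduct P^I."""
    return set(I) in minimal_models(reduct(rules, I), atoms)
```

For the one-rule program $p\leftarrow\neg q$, the interpretation $\{p\}$ is stable, while $\{q\}$ is not.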
The set of stable models of ${\cal P}$ is denoted by ${\cal SM}({\cal P})$. It is well known that stable models are minimal models (i.e., ${\cal SM}({\cal P})\subseteq{\cal MM}({\cal P})$). Furthermore, minimal and stable model semantics coincide for positive programs (i.e., ${\cal SM}({\cal P})={\cal MM}({\cal P})$). A standard program has a unique minimal model, called minimum model. Given a set of ground atoms $S$ and a predicate $g$ (resp. an atom $A$), $S[g]$ (resp. $S[A]$) denotes the set of $g$-atoms (resp. ground atoms unifying with $A$) in $S$. Analogously, for a given set $M$ of sets of ground atoms, we shall use the following notations $M[g]=\{S[g]\ |\ S\in M\}$ and $M[A]=\{S[A]\ |\ S\in M\}$. Given a set of ground atoms $S$, and a set $G$ of predicates symbols, then $S[G]=\cup_{g\in G}S[g]$. Argument graph. Given an $n$-ary predicate $p$, $p[i]$ denotes the $i$-th argument of $p$, for $1\leq i\leq n$. If $p$ is a base (resp. derived) predicate symbol, then $p[i]$ is said to be a base (resp. derived) argument. The set of all arguments of a program ${\cal P}$ is denoted by $args({\cal P})$; analogously, $args_{b}({\cal P})$ and $args_{d}({\cal P})$ denote the sets of all base and derived arguments, respectively. For any program ${\cal P}$ and n-ary predicate $p$ occurring in ${\cal P}$, an argument $p[i]$, with $1\leq i\leq n$, is associated with the set of values it can take during the evaluation; this domain, called active domain of $p[i]$, is denoted by $AD(p[i])=\{t_{i}|p(t_{1},\dots,t_{n})\in M\wedge M\in{\cal SM}({\cal P})\}$. An argument $p[i]$ is said to be limited iff $AD(p[i])$ is finite. The argument graph of a program ${\cal P}$, denoted $G({\cal P})$, is a directed graph whose nodes are $args({\cal P})$ (i.e. the arguments of ${\cal P}$), and there is an edge from $q[j]$ to $p[i]$, denoted by $(q[j],p[i])$, iff there is a rule $r\in{\cal P}$ such that: 1. an atom $p(t_{1},\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},t_{n})$ appears in $head(r)$, 2. 
an atom $q(u_{1},\dots,u_{m})$ appears in $body^{+}(r)$, and 3. terms $t_{i}$ and $u_{j}$ have a common variable. Consider, for instance, program $P_{\ref{count-ex}}$ of Example 1. $G(P_{\ref{count-ex}})=(args(P_{\ref{count-ex}}),E)$, where $args(P_{\ref{count-ex}})=\tt\{input[1],list[1],count[1],count[2]\}$, whereas, considering the occurrences of variables in the rules of $P_{\ref{count-ex}}$, we have that $E=\tt\{(input[1],list[1]),\ (list[1],list[1]),\ (list[1],count[1]),\ (count[1],count[1]),\ (count[2],count[2])\}$. Labeled directed graphs. In the following we will also consider labeled directed graphs, i.e. directed graphs with labeled edges. In this case we represent an edge from $a$ to $b$ as a triple $(a,b,l)$, where $l$ denotes the label. A path $\pi$ from $a_{1}$ to $b_{m}$ in a possibly labeled directed graph is a non-empty sequence $(a_{1},b_{1},l_{1}),\dots,(a_{m},b_{m},l_{m})$ of its edges s.t. $b_{i}=a_{i+1}$ for all $1\leq i<m$; if the first and last nodes coincide (i.e., $a_{1}=b_{m}$), then $\pi$ is called a cyclic path. When the indication of the starting edge is not relevant, we call a cyclic path a cycle. We say that a node $a$ depends on a node $b$ in a graph iff there is a path from $b$ to $a$ in that graph. Moreover, we say that $a$ depends on a cycle $\pi$ iff it depends on a node $b$ appearing in $\pi$. Clearly, nodes belonging to a cycle $\pi$ depend on $\pi$. 3 Argument ranking The argument ranking of a program has been proposed in [Lierler and Lifschitz (2009)] to define the class ${\cal AR}$ of argument-restricted programs.
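Argument rankings are computed over the rule structure captured by the argument graph exemplified above, whose edges can be constructed mechanically. A sketch for normal rules, under an illustrative encoding of our own: an atom $p(t_{1},\dots,t_{n})$ as the tuple ("p", t1, ..., tn), a rule as a pair (head atom, list of positive body atoms), an argument $p[i]$ as the pair (p, i), and variables as uppercase-initial strings.

```python
from itertools import product

def vars_of(t):
    """Variables occurring in a term (uppercase-initial strings, by convention)."""
    if isinstance(t, tuple):
        return set().union(*(vars_of(s) for s in t[1:]))
    return {t} if t[:1].isupper() else set()

def argument_graph(rules):
    """Edges (q[j], p[i]) of G(P) for normal rules (head_atom, body_atoms)."""
    edges = set()
    for head, body in rules:
        p = head[0]
        for atom in body:
            q = atom[0]
            for i, j in product(range(1, len(head)), range(1, len(atom))):
                if vars_of(head[i]) & vars_of(atom[j]):
                    edges.add(((q, j), (p, i)))
    return edges
```

On the program of Example 2 this yields the edges $\{(b[1],p[1]),\ (p[1],p[1]),\ (p[1],t[1]),\ (t[1],s[1])\}$.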
An argument ranking for a program ${\cal P}$ is a partial function $\phi$ from $args({\cal P})$ to non-negative integers, called ranks, such that, for every rule $r$ of ${\cal P}$, every atom $p(t_{1},\dots,t_{n})$ occurring in the head of $r$, and every variable $X$ occurring in a term $t_{i}$, if $\phi(p[i])$ is defined, then $body^{+}(r)$ contains an atom $q(u_{1},\dots,u_{m})$ such that $X$ occurs in a term $u_{j}$, $\phi(q[j])$ is defined, and the following condition is satisfied $$\phi(p[i])-\phi(q[j])\geq d(X,t_{i})-d(X,u_{j}).$$ (1) A program ${\cal P}$ is said to be argument-restricted if it has an argument ranking assigning ranks to all arguments of ${\cal P}$. Example 2 Consider the following program $P_{\ref{Example-AR1}}$, where $\tt b$ is a base predicate: $$\begin{array}[]{l}r_{1}:\tt p(f(X))\leftarrow p(X),b(X)\mbox{\rm.}\\ r_{2}:\tt t(f(X))\leftarrow p(X)\mbox{\rm.}\\ r_{3}:\tt s(X)\leftarrow t(f(X))\mbox{\rm.}\\ \end{array}$$ This program has an argument ranking $\phi$, where $\phi(\tt b[1])$$=0$, $\phi(\tt p[1])$$=1$, $\phi(\tt t[1])$$=2$ and $\phi(\tt s[1])$$=1$. Consequently, $P_{\ref{Example-AR1}}$ is argument-restricted.   $\Box$ Intuitively, the rank of an argument is an estimation of the depth of terms that may occur in it. In particular, let $d_{1}$ be the rank assigned to a given argument $p[i]$ and let $d_{2}$ be the maximal depth of terms occurring in the database facts. Then $d_{1}+d_{2}$ gives an upper bound of the depth of terms that may occur in $p[i]$ during the program evaluation. Different argument rankings may satisfy condition (1). A function assigning minimum ranks to arguments is denoted by $\phi_{min}$. Minimum ranking. We define a monotone operator $\Omega$ that takes as input a function $\phi$ over arguments and gives as output a function over arguments that gives an upper bound of the depth of terms. 
More specifically, we define $\Omega(\phi)(p[i])$ as $$max(max\{D(p(t_{1},\dots,t_{n}),r,i,X)\,|\,r\in{\cal P}\wedge p(t_{1},\dots,t_{n})\in head(r)\wedge X\,occurs\,in\,t_{i}\},0)$$ where $D(p(t_{1},\dots,t_{n}),r,i,X)$ is defined as $$min\{d(X,t_{i})-d(X,u_{j})+\phi(q[j])\,|\,q(u_{1},\dots,u_{m})\in body^{+}(r)\wedge\,X\,occurs\,in\,u_{j}\}.$$ In order to compute $\phi_{min}$ we compute the fixpoint of $\Omega$ starting from the function $\phi_{0}$ that assigns $0$ to all arguments. In particular, we have: $$\begin{array}[]{l}\phi_{0}(p[i])=0;\\ \phi_{k}(p[i])=\Omega(\phi_{k-1})(p[i])=\Omega^{k}(\phi_{0})(p[i]).\end{array}$$ The function $\phi_{min}$ is defined as follows: $$\phi_{min}(p[i])=\left\{\begin{array}[]{ll}\Omega^{k}(\phi_{0})(p[i])&if\ \exists k\mbox{ (finite) s.t. }\Omega^{k}(\phi_{0})(p[i])=\Omega^{\infty}(\phi_{0})(p[i])\\ undefined&otherwise\end{array}\right.$$ We denote the set of restricted arguments of ${\cal P}$ as $AR({\cal P})=\{p[i]\ |\ p[i]\in args({\cal P})\wedge\ \phi_{min}(p[i])\mbox{ is defined}\}$. Clearly, from the definition of $\phi_{min}$, it follows that all restricted arguments are limited. Observe that ${\cal P}$ is argument-restricted iff $AR({\cal P})=args({\cal P})$. Example 3 Consider again program $P_{\ref{Example-AR1}}$ from Example 2. The following table shows the first four iterations of $\Omega$ starting from the base ranking function $\phi_{0}$: $$\begin{array}[]{c|cccc}&\tt b[1]&\tt p[1]&\tt t[1]&\tt s[1]\\ \hline\phi_{0}&0&0&0&0\\ \phi_{1}=\Omega(\phi_{0})&0&1&1&0\\ \phi_{2}=\Omega(\phi_{1})&0&1&2&0\\ \phi_{3}=\Omega(\phi_{2})&0&1&2&1\\ \phi_{4}=\Omega(\phi_{3})&0&1&2&1\end{array}$$ Since $\Omega(\phi_{3})=\Omega(\phi_{2})$, further applications of $\Omega$ provide the same result. Consequently, $\phi_{min}$ coincides with $\phi_{3}$ and defines ranks for all arguments of $P_{\ref{Example-AR1}}$.   $\Box$ Let $M=|args({\cal P})|\times d_{max}$, where $d_{max}$ is the largest depth of terms occurring in the heads of rules of ${\cal P}$.
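Using the bound $M$ just defined, the iteration of $\Omega$ up to the fixpoint can be sketched as follows (an illustrative encoding of our own: a rule as a pair of a head atom and a list of body atoms, atoms as tuples, an argument $p[i]$ as the pair (p, i), variables as uppercase-initial strings). Returning `None` signals that some rank exceeded the bound, i.e. the program is not argument-restricted.

```python
def vars_of(t):
    """Variables occurring in a term (uppercase-initial strings, by convention)."""
    if isinstance(t, tuple):
        return set().union(*(vars_of(s) for s in t[1:]))
    return {t} if t[:1].isupper() else set()

def depth_of(u, t):
    """Depth d(u, t) of simple term u in t; None if t does not contain u."""
    if t == u:
        return 0
    if isinstance(t, tuple):
        ds = [d for d in (depth_of(u, s) for s in t[1:]) if d is not None]
        if ds:
            return 1 + max(ds)
    return None

def omega(rules, phi):
    """One application of Ω to the ranking phi (a dict over arguments (p, i))."""
    new = {a: 0 for a in phi}
    for head, body in rules:
        p = head[0]
        for i, ti in enumerate(head[1:], 1):
            for X in vars_of(ti):
                # D(p(t1,...,tn), r, i, X): minimum over body occurrences of X
                cands = [depth_of(X, ti) - depth_of(X, uj) + phi[(atom[0], j)]
                         for atom in body
                         for j, uj in enumerate(atom[1:], 1)
                         if X in vars_of(uj)]
                new[(p, i)] = max(new[(p, i)], min(cands))
    return new

def phi_min(rules, args, bound):
    """Iterate Ω from phi_0 = 0; None means not argument-restricted."""
    phi = {a: 0 for a in args}
    while True:
        new = omega(rules, phi)
        if new == phi:
            return phi
        if any(v > bound for v in new.values()):
            return None
        phi = new
```

On the program of Example 2 (with bound $M=4$) this reproduces the ranking $\phi_{min}$ computed above, while on the program of Example 4 it reports that the program is not argument-restricted.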
One can determine whether ${\cal P}$ is argument-restricted by iterating $\Omega$ starting from $\phi_{0}$ until - one of the values of $\Omega^{k}(\phi_{0})$ exceeds $M$, in which case ${\cal P}$ is not argument-restricted; - $\Omega^{k+1}(\phi_{0})=\Omega^{k}(\phi_{0})$, in which case $\phi_{min}$ coincides with $\phi_{k}$, $\phi_{min}$ is total, and ${\cal P}$ is argument-restricted. Observe that if the program is not argument-restricted, the first condition holds with $k\leq M\times|args({\cal P})|\leq M^{2}$, as at each iteration the value assigned to at least one argument changes. Thus, the problem of deciding whether a given program ${\cal P}$ is argument-restricted is in $PTime$. In the following section we will show that the computation of restricted arguments can be done in polynomial time also when ${\cal P}$ is not argument-restricted (see Proposition 1). 4 $\Gamma$-acyclic programs In this section we exploit the role of function symbols for checking program termination under bottom-up evaluation. Starting from this section, we will consider standard logic programs. Only in Section 8 will we refer to general programs, as it discusses how termination criteria defined for standard programs can be applied to general disjunctive logic programs with negative literals. We also assume that if the same variable $X$ appears in two terms occurring in the head and body of a rule respectively, then at most one of the two terms is a complex term, and that the nesting level of complex terms is at most one. As we will see in Section 8, such an assumption entails no real restriction, as every program can be rewritten into an equivalent program satisfying this condition. The following example shows a program that admits a finite minimum model, although the argument-restricted criterion is not able to detect this.
Intuitively, the definition of argument-restricted programs does not take into account the possible presence of different function symbols in the program that may prevent the propagation of values in some rules and, consequently, guarantee the termination of the bottom-up computation. Example 4 Consider the following program $P_{\ref{Example-notAmmisLabel1}}$: $$\begin{array}[]{l}r_{0}:\tt s(X)\leftarrow b(X)\mbox{\rm.}\\ r_{1}:\tt r(f(X))\leftarrow s(X)\mbox{\rm.}\\ r_{2}:\tt q(f(X))\leftarrow r(X)\mbox{\rm.}\\ r_{3}:\tt s(X)\leftarrow q(g(X))\mbox{\rm.}\end{array}$$ where $\tt b$ is a base predicate symbol. The program is not argument-restricted since the argument ranking function $\phi_{min}$ cannot assign any value to $\tt r[1]$, $\tt q[1]$, and $\tt s[1]$. However, the bottom-up computation always terminates, independently of the database instance.    $\Box$ In order to represent the propagation of values among arguments, we introduce the concept of labeled argument graphs. Intuitively, a labeled argument graph is an extension of the argument graph where each edge has a label describing how the term propagated from one argument to another changes. Arguments that do not depend on a cycle can propagate only a finite number of values and, therefore, are limited. Since the active domain of limited arguments is finite, we can delete the edges ending in the corresponding nodes from the labeled argument graph. The resulting graph, called propagation graph, is then analyzed in depth to identify further limited arguments. Definition 1 (Labeled argument graph) Let ${\cal P}$ be a program. The labeled argument graph $\mbox{${\cal G}$}_{L}({\cal P})$ is a labeled directed graph $(args({\cal P}),E)$ where $E$ is a set of labeled edges defined as follows.
For each pair of nodes $p[i],q[j]\in args({\cal P})$ such that there is a rule $r$ with $head(r)=p(v_{1},\dots,v_{n})$, $q(u_{1},\dots,u_{m})\in body(r)$, and terms $u_{j}$ and $v_{i}$ have a common variable $X$, there is an edge $(q[j],p[i],\alpha)\in E$ such that • $\alpha=\epsilon$  if $u_{j}=v_{i}=X$, • $\alpha=f$  if $u_{j}=X$ and $v_{i}=f(\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},X,\mbox{\rm.}\mbox{\rm.}\mbox{\rm.})$, • $\alpha=\overline{f}$  if $u_{j}=f(\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},X,\mbox{\rm.}\mbox{\rm.}\mbox{\rm.})$ and $v_{i}=X$. $\Box$ In the definition above, the symbol $\epsilon$ denotes the empty label which concatenated to a string does not modify the string itself, that is, for any string $s$, $s\epsilon=\epsilon s=s$. The labeled argument graph of program $P_{\ref{Example-notAmmisLabel1}}$ is shown in Figure 1 (left). The edges of this graph represent how the propagation of values occurs. For instance, edge $\tt(b[1],s[1],\epsilon)$ states that a term $\tt t$ is propagated without changes from $\tt b[1]$ to $\tt s[1]$ if rule $r_{0}$ is applied; analogously, edge $\tt(s[1],r[1],f)$ states that starting from a term $\tt t$ in $\tt s[1]$ we obtain $\tt f(t)$ in $\tt r[1]$ if rule $r_{1}$ is applied, whereas edge $\tt(q[1],s[1],\overline{g})$ states that starting from a term $\tt g(t)$ in $\tt q[1]$ we obtain $\tt t$ in $\tt s[1]$ if rule $r_{3}$ is applied. Given a path $\pi$ in $\mbox{${\cal G}$}_{L}({\cal P})$ of the form $(a_{1},b_{1},\alpha_{1}),\dots,$ $(a_{m},b_{m},\alpha_{m})$, we denote with $\lambda(\pi)$ the string $\alpha_{1}\,\mbox{\rm.}\mbox{\rm.}\mbox{\rm.}\,\alpha_{m}$. We say that $\pi$ spells a string $w$ if $\lambda(\pi)=w$. Intuitively, the string $\lambda(\pi)$ describes a sequence of function symbols used to compose and decompose complex terms during the propagation of values among the arguments in $\pi$. 
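Deriving the labeled edges from the rules can be sketched as follows, under the assumption stated earlier that at most one of $u_{j}$, $v_{i}$ is complex and the nesting level is at most one. The encoding is illustrative: atoms as tuples, an argument $p[i]$ as the pair (p, i), the label $\overline{f}$ written "~f", and $\epsilon$ as the empty string.

```python
def vars_of(t):
    """Variables occurring in a term (uppercase-initial strings, by convention)."""
    if isinstance(t, tuple):
        return set().union(*(vars_of(s) for s in t[1:]))
    return {t} if t[:1].isupper() else set()

def labeled_edges(rules):
    """Edges (q[j], p[i], alpha) of the labeled argument graph; the label
    f-bar is encoded as "~f" and the empty label epsilon as ""."""
    edges = set()
    for head, body in rules:
        p = head[0]
        for i, v in enumerate(head[1:], 1):
            for atom in body:
                q = atom[0]
                for j, u in enumerate(atom[1:], 1):
                    if not (vars_of(u) & vars_of(v)):
                        continue
                    if isinstance(v, tuple) and not isinstance(u, tuple):
                        edges.add(((q, j), (p, i), v[0]))        # alpha = f
                    elif isinstance(u, tuple) and not isinstance(v, tuple):
                        edges.add(((q, j), (p, i), "~" + u[0]))  # alpha = f-bar
                    else:
                        edges.add(((q, j), (p, i), ""))          # alpha = epsilon
    return edges
```

On the program of Example 4 this yields $\tt(b[1],s[1],\epsilon)$, $\tt(s[1],r[1],f)$, $\tt(r[1],q[1],f)$, and $\tt(q[1],s[1],\overline{g})$, matching the left graph of Figure 1.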
Example 5 Consider program $P_{\ref{ex:label-path}}$ derived from program $P_{\ref{Example-notAmmisLabel1}}$ of Example 4 by replacing rule $r_{2}$ with the rule $\tt q(g(X))\leftarrow r(X)$. The labeled argument graph $\mbox{${\cal G}$}_{L}(P_{\ref{ex:label-path}})$ is reported in Figure 1 (right). Considering the cyclic path $\pi=\tt(s[1],r[1],f),$ $\tt(r[1],q[1],g),$ $\tt(q[1],s[1],\overline{g})$, $\lambda(\pi)=\tt fg\overline{g}$ represents the fact that starting from a term $\tt t$ in $\tt s[1]$ we may obtain the term $\tt f(t)$ in $\tt r[1]$, then we may obtain term $\tt g(f(t))$ in $\tt q[1]$, and term $\tt f(t)$ in $\tt s[1]$, and so on. Since we may obtain a larger term in $\tt s[1]$, the arguments depending on this cyclic path may not be limited. Consider now program $P_{\ref{Example-notAmmisLabel1}}$, whose labeled argument graph is shown in Figure 1 (left), and the cyclic path $\pi^{\prime}=\tt(s[1],r[1],f),$ $\tt(r[1],q[1],f),$ $\tt(q[1],s[1],\overline{g})$. Observe that starting from a term $\tt t$ in $\tt s[1]$ we may obtain term $\tt f(t)$ in $\tt r[1]$ (rule $r_{1}$), then we may obtain term $\tt f(f(t))$ in $\tt q[1]$ (rule $r_{2}$). At this point the propagation in this cyclic path terminates since the head atom of rule $r_{2}$ containing term $\tt f(X)$ cannot match with the body atom of rule $r_{3}$ containing term $\tt g(X)$. The string $\lambda(\pi^{\prime})=\tt ff\overline{g}$ represents the propagation described above. Observe that for this program all arguments are limited.   $\Box$ Let $\pi$ be a path from $p[i]$ to $q[j]$ in the labeled argument graph. Let $\hat{\lambda}(\pi)$ be the string obtained from $\lambda(\pi)$ by iteratively eliminating pairs of the form $\alpha\overline{\alpha}$ until the resulting string cannot be further reduced. If $\hat{\lambda}(\pi)=\epsilon$, then starting from a term $t$ in $p[i]$ we obtain the same term $t$ in $q[j]$. 
On the other hand, if $\hat{\lambda}(\pi)$ is a non-empty sequence of function symbols $f_{i_{1}},f_{i_{2}},\dots,f_{i_{k}}$, then starting from a term $t$ in $p[i]$ we may obtain a larger term in $q[j]$. For instance, if $k=2$ and $f_{i_{1}}$ and $f_{i_{2}}$ are of arity one, we may obtain $f_{i_{2}}(f_{i_{1}}(t))$ in $q[j]$. Based on this intuition, we now introduce a grammar $\Gamma_{\!\cal P}$ in order to single out the sequences of function symbols used to compose and decompose complex terms in a program ${\cal P}$ such that, starting from a given term, we obtain a larger term. Given a program ${\cal P}$, we denote with $F_{\!\cal P}=\{f_{1},\dots,f_{m}\}$ the set of function symbols occurring in ${\cal P}$, whereas $\overline{F}_{\!\cal P}=\{\overline{f}\ |\ f\in F_{\!\cal P}\}$ and $T_{\!\cal P}=F_{\!\cal P}\cup\overline{F}_{\!\cal P}$. Definition 2 Given a program ${\cal P}$, the grammar $\Gamma_{\!\cal P}$ is a 4-tuple $(N,T_{\!\cal P},R,S)$, where $N=\{S,S_{1},S_{2}\}$ is the set of nonterminal symbols, $S$ is the start symbol, and $R$ is the set of production rules defined below: 1. $S\ \rightarrow S_{1}\,f_{i}\,S_{2}$,               $\forall f_{i}\in F_{\!\cal P}$; 2. $S_{1}\rightarrow f_{i}\,S_{1}\,\overline{f}_{i}\,S_{1}\ |\ \epsilon$,        $\forall f_{i}\in F_{\!\cal P}$; 3. $S_{2}\rightarrow S_{1}\,S_{2}\ |\ f_{i}\,S_{2}\ |\ \epsilon$,     $\forall f_{i}\in F_{\!\cal P}$. $\Box$ The language ${\cal L}(\Gamma_{\!\cal P})$ is the set of strings generated by $\Gamma_{\!\cal P}$. Example 6 Let $F_{\cal P}=\tt\{f,g,h\}$ be the set of function symbols occurring in a program ${\cal P}$. Then strings $\tt f$, $\tt fg\overline{g}$, $\tt g\overline{g}f$, $\tt fg\overline{g}h\overline{h}$, $\tt fhg\overline{g}\overline{h}$ belong to ${\cal L}(\Gamma_{\!\cal P})$ and represent, assuming that $\tt f$ is a unary function symbol, different ways to obtain term $\tt f(t)$ starting from term $\tt t$. 
$\Box$ Note that only when a path $\pi$ spells a string $w\in{\cal L}(\Gamma_{\!\cal P})$ may we obtain, starting from a given term in the first node of $\pi$, a larger term in the last node of $\pi$. Moreover, if such a path is cyclic, then the arguments depending on it may not be limited. On the other hand, all arguments not depending on a cyclic path $\pi$ spelling a string $w\in{\cal L}(\Gamma_{\!\cal P})$ are limited. Given a program ${\cal P}$ and a set of arguments $\cal S$ recognized as limited by a specific criterion, the propagation graph of ${\cal P}$ with respect to $\cal S$, denoted by $\Delta({\cal P},{\cal S})$, consists of the subgraph derived from $\mbox{${\cal G}$}_{L}({\cal P})$ by deleting the edges ending in a node of $\cal S$. Although we can consider any set $\cal S$ of limited arguments, in the following we assume ${\cal S}=AR({\cal P})$ and, for simplicity of notation, we denote $\Delta({\cal P},AR({\cal P}))$ as $\Delta({\cal P})$. Even though more general termination criteria have been defined in the literature, here we consider the $AR$ criterion since it is the most general among those proposed so far having polynomial time complexity. Definition 3 ($\Gamma$-acyclic Arguments and $\Gamma$-acyclic Programs) Given a program ${\cal P}$, the set of its $\Gamma$-acyclic arguments, denoted by $\mbox{${\Gamma\!A}$}({\cal P})$, consists of all arguments of ${\cal P}$ not depending on a cyclic path in $\Delta({\cal P})$ spelling a string of ${\cal L}(\Gamma_{\!\cal P})$. A program ${\cal P}$ is called $\Gamma$-acyclic if $\mbox{${\Gamma\!A}$}({\cal P})=args({\cal P})$, i.e. if there is no cyclic path in $\Delta({\cal P})$ spelling a string of ${\cal L}(\Gamma_{\!\cal P})$. We denote the class of $\Gamma$-acyclic  programs by ${\Gamma\!\cal A}$.   $\Box$ Clearly, $AR({\cal P})\subseteq\mbox{${\Gamma\!A}$}({\cal P})$, i.e. the set of restricted arguments is contained in the set of $\Gamma$-acyclic arguments.
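Whether the label string of a path belongs to ${\cal L}(\Gamma_{\!\cal P})$ can be checked with the cancellation $\hat{\lambda}$ implemented via a stack. A sketch under an illustrative encoding of our own (a label $f$ as the string "f", its barred counterpart $\overline{f}$ as "~f"): a string is in ${\cal L}(\Gamma_{\!\cal P})$ exactly when the residue left after cancellation is non-empty and contains function symbols only.

```python
def lam_hat(word):
    """λ̂: iteratively cancel adjacent pairs f, ~f (a stack suffices)."""
    stack = []
    for sym in word:
        if sym.startswith("~") and stack and stack[-1] == sym[1:]:
            stack.pop()
        else:
            stack.append(sym)
    return stack

def in_gamma_language(word):
    """True iff the label string belongs to L(Γ_P): the reduced string is a
    non-empty sequence of (unbarred) function symbols."""
    residue = lam_hat(word)
    return bool(residue) and all(not s.startswith("~") for s in residue)
```

This reproduces the discussion of Example 5: the string $\tt fg\overline{g}$ reduces to $\tt f$ and belongs to the language, while $\tt ff\overline{g}$ and $\tt f\overline{f}$ do not.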
As a consequence, the set of argument-restricted programs is a subset of the set of $\Gamma$-acyclic  programs. Moreover, the containment is strict, as there exist programs that are $\Gamma$-acyclic but not argument-restricted. For instance, program $P_{\ref{Example-notAmmisLabel1}}$ from Example 4 is $\Gamma$-acyclic, but not argument-restricted. Indeed, no cyclic path in $\Delta(P_{\ref{Example-notAmmisLabel1}})$ spells a string belonging to the language ${\cal L}(\Gamma_{P_{\ref{Example-notAmmisLabel1}}})$. The importance of considering the propagation graph instead of the labeled argument graph in Definition 3 is shown in the following example. Example 7 Consider program $P_{\ref{Example-safe-AR}}$ below, obtained from $P_{\ref{Example-notAmmisLabel1}}$ by adding rules $r_{4}$ and $r_{5}$. $$\begin{array}[]{l}r_{0}:\tt s(X)\leftarrow b(X)\mbox{\rm.}\\ r_{1}:\tt r(f(X))\leftarrow s(X)\mbox{\rm.}\\ r_{2}:\tt q(f(X))\leftarrow r(X)\mbox{\rm.}\\ r_{3}:\tt s(X)\leftarrow q(g(X))\mbox{\rm.}\\ r_{4}:\tt n(f(X))\leftarrow s(X),b(X)\mbox{\rm.}\\ r_{5}:\tt s(X)\leftarrow n(X)\mbox{\rm.}\end{array}$$ The corresponding labeled argument graph $\mbox{${\cal G}$}_{L}(P_{\ref{Example-safe-AR}})$ and propagation graph $\Delta(P_{\ref{Example-safe-AR}})$ are reported in Figure 2. Observe that arguments $\tt n[1]$ and $\tt s[1]$ are involved in the red cycle in the labeled argument graph $\mbox{${\cal G}$}_{L}(P_{\ref{Example-safe-AR}})$ spelling a string of ${\cal L}(\Gamma_{\!P_{\ref{Example-safe-AR}}})$. At the same time, this cycle is not present in the propagation graph $\Delta(P_{\ref{Example-safe-AR}})$, since $AR(P_{\ref{Example-safe-AR}})=\tt\{b[1],n[1]\}$, and the program is $\Gamma$-acyclic.   $\Box$ Theorem 1 Given a program ${\cal P}$, 1. all arguments in $\mbox{${\Gamma\!A}$}({\cal P})$ are limited; 2. if ${\cal P}$ is $\Gamma$-acyclic, then ${\cal P}$ is finitely ground. \proof 1) As previously recalled, arguments in $AR({\cal P})$ are limited.
Let us now show that all arguments in $\mbox{${\Gamma\!A}$}({\cal P})\setminus AR({\cal P})$ are limited too. Suppose by contradiction that $q[k]\in\mbox{${\Gamma\!A}$}({\cal P})\setminus AR({\cal P})$ is not limited. Observe that the depth of terms that may occur in $q[k]$ depends on the paths in the propagation graph $\Delta({\cal P})$ that end in $q[k]$. In particular, this depth may be unbounded only if there are paths $\pi$ from an argument $p[i]$ (not necessarily distinct from $q[k]$) to $q[k]$ such that $\hat{\lambda}(\pi)$ is an arbitrarily long string composed of symbols in $F_{\!\cal P}$. But this is possible only if such a path contains a cycle spelling a string in ${\cal L}(\Gamma_{\!\cal P})$. Thus we obtain a contradiction with Definition 3. 2) From the previous proof, it follows that every argument of a $\Gamma$-acyclic program can take values only from a finite domain. Consequently, the set of all possible ground terms derived during the grounding process is finite and every $\Gamma$-acyclic program is finitely ground.   $\Box$ From the previous theorem we can also conclude that all $\Gamma$-acyclic  programs admit a finite minimum model, as this is a property of finitely ground programs. We conclude by observing that, since the language ${\cal L}(\Gamma_{\!\cal P})$ is context-free, the analysis of paths spelling strings in ${\cal L}(\Gamma_{\!\cal P})$ can be carried out using pushdown automata. In particular, ${\cal L}(\Gamma_{\!\cal P})$ can be recognized by means of a pushdown automaton $M=(\{q_{0},q_{F}\},T_{\!\cal P},\Lambda,\delta,q_{0},Z_{0},\{q_{F}\})$, where $q_{0}$ is the initial state, $q_{F}$ is the final state, $\Lambda=\{Z_{0}\}\cup\{F_{i}\ |\ f_{i}\in F_{\!\cal P}\}$ is the stack alphabet, $Z_{0}$ is the initial stack symbol, and $\delta$ is the transition function defined as follows: 1. $\delta(q_{0},f_{i},Z_{0})=(q_{F},F_{i}Z_{0})$,     $\forall f_{i}\in F_{\!\cal P}$, 2.
$\delta(q_{F},f_{i},F_{j})=(q_{F},F_{i}F_{j})$,     $\forall f_{i},f_{j}\in F_{\!\cal P}$, 3. $\delta(q_{F},\overline{f}_{j},F_{j})=(q_{F},\epsilon)$,         $\forall f_{j}\in F_{\!\cal P}$. The input string is recognized if, after having scanned the entire string, the automaton is in state $q_{F}$ and the stack contains at least one symbol $F_{i}$. A path $\pi$ is called: • increasing, if $\hat{\lambda}(\pi)\in{\cal L}(\Gamma_{\!\cal P})$, • flat, if $\hat{\lambda}(\pi)=\epsilon$, • failing, otherwise. It is worth noting that $\lambda(\pi)\in{\cal L}(\Gamma_{\!\cal P})$ iff $\hat{\lambda}(\pi)\in{\cal L}(\Gamma_{\!\cal P})$, as the function $\hat{\lambda}$ emulates the pushdown automaton used to recognize ${\cal L}(\Gamma_{\!\cal P})$. More specifically, for any path $\pi$ and corresponding string $\lambda(\pi)$ we have that: • if $\pi$ is increasing, then the pushdown automaton recognizes the string $\lambda(\pi)$ in state $q_{F}$ and the stack contains a sequence of symbols corresponding to the symbols in $\hat{\lambda}(\pi)$ plus the initial stack symbol $Z_{0}$; • if $\pi$ is flat, then the pushdown automaton does not recognize the string $\lambda(\pi)$; moreover, the entire input string is scanned, but the stack contains only the symbol $Z_{0}$; • if $\pi$ is failing, then the pushdown automaton does not recognize the string $\lambda(\pi)$, as it enters an error state. Complexity. Concerning the complexity of checking whether a program is $\Gamma$-acyclic, we first introduce definitions and results that will be used hereafter. We start by introducing the notion of size of a logic program. We assume that simple terms have constant size and, therefore, the size of a complex term $f(t_{1},\dots,t_{k})$, where $t_{1},\dots,t_{k}$ are simple terms, is bounded by $O(k)$. Analogously, the size of an atom $p(t_{1},\dots,t_{n})$ is given by the sum of the sizes of the $t_{i}$’s, whereas the size of a conjunction of atoms (resp.
rule, program) is given by the sum of the sizes of its atoms. That is, we identify for a program ${\cal P}$ the following parameters: $n_{r}$ is the number of rules of ${\cal P}$, $n_{b}$ is the maximum number of atoms in the body of rules of ${\cal P}$, $a_{p}$ is the maximum arity of predicate symbols occurring in ${\cal P}$, and $a_{f}$ is the maximum arity of function symbols occurring in ${\cal P}$. We assume that the size of ${\cal P}$, denoted by $size({\cal P})$, is bounded by $O(n_{r}\times n_{b}\times a_{p}\times a_{f})$. Finally, since checking whether a program is terminating requires to read the program, we assume that the program has been already scanned and stored using suitable data structures. Thus, all the complexity results presented in the rest of the paper do not take into account the cost of scanning and storing the input program. We first introduce a tighter bound for the complexity of computing $AR({\cal P})$. Proposition 1 For any program ${\cal P}$, the time complexity of computing $AR({\cal P})$ is bounded by $O(|args({\cal P})|^{3})$. \proof Assume that $n=|args({\cal P})|$ is the total number of arguments of ${\cal P}$. First, it is important to observe the connection between the behavior of operator $\Omega$ and the structure of the labeled argument graph $\mbox{${\cal G}$}_{L}({\cal P})$. In particular, if the applications of the operator $\Omega$ change the rank of an argument $q[i]$ from $0$ to $k$, then there is a path from an argument to $q[i]$ in $\mbox{${\cal G}$}_{L}({\cal P})$, where the number of edges labeled with some positive function symbol minus the number of edges labeled with some negative function symbol is at least $k$. Given a cycle in a labeled argument graph, let us call it affected if the number of edges labeled with some positive function symbol is greater than the number of edges labeled with some negative function symbol. If an argument is not restricted, it is involved in or depends on an affected cycle. 
On the other hand, if after an application of $\Omega$ the rank assigned to an argument exceeds $n$, this argument is not restricted [Lierler and Lifschitz (2009)]. Recall that we are assuming that $d_{max}=1$ and, therefore, $M=n\times d_{max}=n$. Now let us show that after $2n^{2}+n$ iterations of $\Omega$ all non-restricted arguments exceed rank $n$. Consider an affected cycle and suppose that it contains $k$ arguments, whereas the number of arguments depending on this cycle, but not belonging to it, is $m$. Obviously, $k+m\leq n$. All arguments involved in this cycle change their rank by at least one after $k$ iterations of $\Omega$. Thus their ranks will be greater than $n+m$ after $(n+m+1)\cdot k$ iterations. The arguments depending on this cycle, but not belonging to it, need at most another $m$ iterations to reach a rank greater than $n$. Thus all non-restricted arguments exceed the rank $n$ in $(n+m+1)\cdot k+m$ iterations of $\Omega$. Since $(n+m+1)\cdot k+m=nk+mk+(k+m)\leq 2n^{2}+n$, the restricted arguments are those that at step $2n^{2}+n$ do not exceed rank $n$. It follows that the complexity of computing $AR({\cal P})$ is bounded by $O(n^{3})$ because we have to do $O(n^{2})$ iterations and, for each iteration, we have to check the rank of $n$ arguments.  $\Box$ In order to study the complexity of computing the $\Gamma$-acyclic arguments of a program we introduce a directed (not labeled) graph obtained from the propagation graph. Definition 4 (Reduction of $\Delta({\cal P})$) Given a program ${\cal P}$, the reduction of $\Delta({\cal P})$ is a directed graph $\Delta_{R}({\cal P})$ whose nodes are the arguments of ${\cal P}$ and there is an edge $(p[i],q[j])$ in $\Delta_{R}({\cal P})$ iff there is a path $\pi$ from $p[i]$ to $q[j]$ in $\Delta({\cal P})$ such that $\hat{\lambda}(\pi)\in F_{\!\cal P}$.  $\Box$ The reduction $\Delta_{R}({\cal P})$ of the propagation graph $\Delta({\cal P})$ from Figure 3 is shown in Figure 4. 
It is simple to note that for each path in $\Delta({\cal P})$ from node $p[i]$ to node $q[j]$ spelling a string of ${\cal L}(\Gamma_{\!\cal P})$ there exists a path from $p[i]$ to $q[j]$ in $\Delta_{R}({\cal P})$ and vice versa. As shown in the lemma below, this property always holds. Lemma 1 Given a program ${\cal P}$ and arguments $p[i],q[j]\in args({\cal P})$, there exists a path in $\Delta({\cal P})$ from $p[i]$ to $q[j]$ spelling a string of ${\cal L}(\Gamma_{\!\cal P})$ iff there is a path from $p[i]$ to $q[j]$ in $\Delta_{R}({\cal P})$. \proof ($\Rightarrow$) Suppose there is a path $\pi$ from $p[i]$ to $q[j]$ in $\Delta({\cal P})$ such that $\lambda(\pi)\in{\cal L}(\Gamma_{\!\cal P})$. Then $\hat{\lambda}(\pi)$ is a non-empty string, say $f_{1}\dots f_{k}$, where $f_{i}\in F_{\cal P}$ for $i\in[1..k]$. Consequently, $\pi$ can be seen as a sequence of subpaths $\pi_{1},\dots,\pi_{k}$, such that $\hat{\lambda}(\pi_{i})=f_{i}$ for $i\in[1..k]$. Thus, from the definition of the reduction of $\Delta({\cal P})$, there is a path from $p[i]$ to $q[j]$ in $\Delta_{R}({\cal P})$ whose length is equal to $|\hat{\lambda}(\pi)|$. ($\Leftarrow$) Suppose there is a path $(n_{1},n_{2})\dots(n_{k},n_{k+1})$ from $n_{1}$ to $n_{k+1}$ in $\Delta_{R}({\cal P})$. From the definition of the reduction of $\Delta({\cal P})$, for each edge $(n_{i},n_{i+1})$ there is a path, say $\pi_{i}$, from $n_{i}$ to $n_{i+1}$ in $\Delta({\cal P})$ such that $\hat{\lambda}(\pi_{i})\in F_{\cal P}$. Consequently, there is a path from $n_{1}$ to $n_{k+1}$ in $\Delta({\cal P})$, obtained as a sequence of paths $\pi_{1},\dots,\pi_{k}$ whose string is simply $\lambda(\pi_{1})\dots\lambda(\pi_{k})$. Since $\hat{\lambda}(\pi_{i})\in F_{\cal P}$ implies that $\lambda(\pi_{i})\in{\cal L}(\Gamma_{\!\cal P})$, for every $1\leq i\leq k$, we have that $\lambda(\pi_{1})\dots\lambda(\pi_{k})$ belongs also to ${\cal L}(\Gamma_{\!\cal P})$.  
$\Box$ Proposition 2 Given a program ${\cal P}$, the time complexity of computing the reduction $\Delta_{R}({\cal P})$ is bounded by $O(|args({\cal P})|^{3}\times|F_{\!{\cal P}}|)$. \proof The construction of $\Delta_{R}({\cal P})$ can be performed as follows. First, we compute all the paths $\pi$ in $\Delta({\cal P})$ such that $|\hat{\lambda}(\pi)|\leq 1$. To do so, we use a slight variation of the Floyd-Warshall’s transitive closure of $\Delta({\cal P})$ which is defined by the following recursive formula. Assume that each node of $\Delta({\cal P})$ is numbered from $1$ to $n=|args({\cal P})|$, then we denote with $path(i,j,\alpha,k)$ the existence of a path $\pi$ from node $i$ to node $j$ in $\Delta({\cal P})$ such that $\hat{\lambda}(\pi)=\alpha$, $|\alpha|\leq 1$ and $\pi$ may go only through nodes in $\{1,\dots,k\}$ (except for $i$ and $j$). The set of atoms $path(i,j,\alpha,k)$, for all values $1\leq i,j\leq n$, can be derived iteratively as follows: • (base case: $k=0$) $path(i,j,\alpha,0)$ holds if there is an edge $(i,j,\alpha)$ in $\Delta({\cal P})$, • (inductive case: $0<k\leq n$) $path(i,j,\alpha,k)$ holds if – $path(i,j,\alpha,k-1)$ holds, or – $path(i,k,\alpha_{1},k-1)$ and $path(k,j,\alpha_{2},k-1)$ hold, $\alpha=\alpha_{1}\alpha_{2}$ and $|\alpha|\leq 1$. Note that in order to compute all the possible atoms $path(i,j,\alpha,k)$, we need to first initialize every base atom $path(i,j,\alpha,0)$ with cost bounded by $O(n^{2}\times|F_{\!{\cal P}}|)$, as this is the upper bound for the number of edges in $\Delta({\cal P})$. Then, for every $1\leq k\leq n$, we need to compute all paths $path(i,j,\alpha,k)$, thus requiring a cost bounded by $O(n^{3}\times|F_{\!{\cal P}}|)$ operations. The whole procedure will require $O(n^{3}\times|F_{\!{\cal P}}|)$ operations. 
Since we have computed all possible paths $\pi$ in $\Delta({\cal P})$ such that $|\hat{\lambda}(\pi)|\leq 1$, we can obtain all the edges $(i,j)$ of $\Delta_{R}({\cal P})$ (according to Definition 4) by simply selecting the atoms $path(i,j,\alpha,k)$ with $\alpha\in F_{\!{\cal P}}$, whose cost is bounded by $O(n^{2}\times|F_{\!{\cal P}}|)$. Then, the time complexity of constructing $\Delta_{R}({\cal P})$ is $O(n^{3}\times|F_{\!{\cal P}}|)$.  $\Box$ Theorem 2 The complexity of deciding whether a program ${\cal P}$ is $\Gamma$-acyclic is bounded by $O(|args({\cal P})|^{3}\times|F_{\!\cal P}|)$. \proof Assume that $n=|args({\cal P})|$ is the total number of arguments of ${\cal P}$. To check whether ${\cal P}$ is $\Gamma$-acyclic it is sufficient to first compute the set of restricted arguments $AR({\cal P})$, which requires time $O(n^{3})$ by Proposition 1. Then, we need to construct the propagation graph $\Delta({\cal P})$, for which the maximum number of edges is $n^{2}\times(|F_{\!{\cal P}}|+|\overline{F}_{\!{\cal P}}|+1)$, so it can be constructed in time $O(n^{2}\times|F_{\!{\cal P}}|)$ (recall that we are not taking into account the cost of scanning and storing the program). Moreover, starting from $\Delta({\cal P})$, we need to construct $\Delta_{R}({\cal P})$, which requires time $O(n^{3}\times|F_{\!{\cal P}}|)$ (cf. Proposition 2) and then, following Lemma 1, we need to check whether $\Delta_{R}({\cal P})$ is acyclic. Verifying whether $\Delta_{R}({\cal P})$ is acyclic can be done by means of a simple depth-first traversal of $\Delta_{R}({\cal P})$, checking whether a node is visited again along the current traversal path. The complexity of a depth-first traversal of a graph is well known to be $O(|E|)$, where $E$ is the set of edges of the graph. Since the maximum number of edges of $\Delta_{R}({\cal P})$ is by definition $n^{2}\times|F_{\!{\cal P}}|$, the traversal of $\Delta_{R}({\cal P})$ can be done in time $O(n^{2}\times|F_{\!{\cal P}}|)$. 
Thus, the whole time complexity is still bounded by $O(n^{3}\times|F_{\!{\cal P}}|)$.  $\Box$ Corollary 1 For any program ${\cal P}$, the time complexity of computing $\mbox{${\Gamma\!A}$}({\cal P})$ is bounded by $O(|args({\cal P})|^{3}\times|F_{\!{\cal P}}|)$. \proof Straightforward from the proof of Theorem 2. $\Box$ As shown in the previous theorem, the time complexity of checking whether a program ${\cal P}$ is $\Gamma$-acyclic is bounded by $O(|args({\cal P})|^{3}\times|F_{\!{\cal P}}|)$, which is strictly related to the complexity of checking whether a program is argument-restricted, which is $O(|args({\cal P})|^{3})$. In fact, the newly proposed criterion performs a more accurate analysis of how terms are propagated from the body to the head of rules by taking into account the function symbols occurring in such terms. Moreover, if a logic program ${\cal P}$ has only one function symbol, the time complexity of checking whether ${\cal P}$ is $\Gamma$-acyclic is the same as the one required to check whether it is argument-restricted. 5 Safe programs The $\Gamma$-acyclicity termination criterion presents some limitations, since it is not able to detect when a rule can be activated only a finite number of times during the bottom-up evaluation of the program. The next example shows a simple terminating program which is not recognized by the $\Gamma$-acyclicity termination criterion. Example 8 Consider the following logic program $P_{\ref{Example-act-graph}}$: $$\begin{array}[]{l}r_{1}:\tt p(X,X)\leftarrow b(X)\mbox{\rm.}\\ r_{2}:\tt p(f(X),g(X))\leftarrow p(X,X)\mbox{\rm.}\\ \end{array}$$ where $\tt b$ is a base predicate. As the program is standard, it has a (finite) unique minimal model, which can be derived using the classical bottom-up fixpoint computation algorithm. Moreover, independently from the set of base facts defining $\tt b$, the minimum model of $P_{\ref{Example-act-graph}}$ is finite and its computation terminates.   
$\Box$ Observe that the rules of program $P_{\ref{Example-act-graph}}$ can be activated at most $n$ times, where $n$ is the cardinality of the active domain of the base predicate $\tt b$. Indeed, the recursive rule $r_{2}$ cannot activate itself since the newly generated atom is of the form $\tt p(f(\cdot),g(\cdot))$ and does not unify with its body. As another example, consider the recursive rule $\tt q(f(X))\leftarrow q(X),t(X)$ and the strongly linear rule $\tt p(f(X),g(Y))\leftarrow p(X,Y),t(X)$, where $\tt t[1]$ is a limited argument. The activation of these rules is limited by the cardinality of the active domain of $\tt t[1]$. Thus, in this section, in order to define a more general termination criterion, we introduce the safety function which, by detecting rules that can be executed only a finite number of times, derives a larger set of limited arguments of the program. We start by analyzing how rules may activate each other. Definition 5 (Activation Graph) Let ${\cal P}$ be a program and let $r_{1}$ and $r_{2}$ be (not necessarily distinct) rules of ${\cal P}$. We say that $r_{1}$ activates $r_{2}$ iff $head(r_{1})$ and an atom in $body(r_{2})$ unify. The activation graph $\Sigma({\cal P})=({\cal P},E)$ consists of the set of nodes denoting the rules of ${\cal P}$ and the set of edges $(r_{i},r_{j})$, with $r_{i},r_{j}\in{\cal P}$, such that $r_{i}$ activates $r_{j}$. $\Box$ Example 9 Consider program $P_{\ref{Example-act-graph}}$ of Example 8. The activation graph of this program contains two nodes $r_{1}$ and $r_{2}$ and an edge from $r_{1}$ to $r_{2}$. Rule $r_{1}$ activates rule $r_{2}$ as the head atom $\tt p(X,X)$ of $r_{1}$ unifies with the body atom $\tt p(X,X)$ of $r_{2}$. Intuitively, this means that the execution of the first rule may cause the second rule to be activated. In fact, the execution of $r_{1}$ starting from the database instance $D=\tt\{b(a)\}$ produces the new atom $\tt p(a,a)$. 
The presence of this atom allows the second rule to be activated, since the body of $r_{2}$ can be made true by means of the atom $\tt p(a,a)$, producing the new atom $\tt p(f(a),g(a))$. It is worth noting that the second rule cannot activate itself since $head(r_{2})$ does not unify with the atom $\tt p(X,X)$ in $body(r_{2})$.  $\Box$ The activation graph shows how rules may activate each other, and, consequently, the possibility to propagate values from one rule to another. Clearly, the active domain of an argument $p[i]$ can be infinite only if $p$ is the head predicate of a rule that may be activated an infinite number of times. A rule may be activated an infinite number of times only if it depends on a cycle of the activation graph. Therefore, a rule not depending on a cycle can only propagate a finite number of values into its head arguments. Another important aspect is the structure of rules and the presence of limited arguments in their body and head atoms. As discussed at the beginning of this section, rules $\tt q(f(X))\leftarrow q(X),t(X)$ and $\tt p(f(X),g(Y))\leftarrow p(X,Y),t(X)$, where $\tt t[1]$ is a limited argument, can be activated only a finite number of times. In fact, as variable $\tt X$ in both rules can be substituted only by values taken from the active domain of $\tt t[1]$, which is finite, the active domains of $\tt q[1]$ and $\tt p[1]$ are finite as well, i.e. $\tt q[1]$ and $\tt p[1]$ are limited arguments. Since $\tt q[1]$ is limited, the first rule can be applied only a finite number of times. In the second rule we have predicate $\tt p$ of arity two in the head, and we know that $\tt p[1]$ is a limited argument. Since the second rule is strongly linear, the domains of both head arguments $\tt p[1]$ and $\tt p[2]$ grow together each time this rule is applied. Consequently, the active domain of $\tt p[2]$ must be finite as well as the active domain of $\tt p[1]$ and this rule can be applied only a finite number of times. 
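The activation check of Definition 5 can be sketched as follows. This is an illustrative Python fragment under assumptions of ours: the term encoding (uppercase strings as variables, tuples as complex terms) and all function names are our own, and the occurs check is omitted since terms here have depth at most 1.

```python
# Illustrative sketch (not the paper's implementation) of building the
# activation graph of Definition 5 for rules with depth-1 terms.
# Variables are uppercase strings, constants lowercase, and a complex
# term f(t1,...,tk) is the tuple ('f', t1, ..., tk).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(t1, t2, s):
    """Extend substitution s to unify t1 and t2, or return None.
    No occurs check: safe here only because terms have depth <= 1."""
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if is_var(t1):
        return {**s, t1: t2}
    if is_var(t2):
        return {**s, t2: t1}
    if isinstance(t1, tuple) and isinstance(t2, tuple) and \
       t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            s = unify(a, b, s)
            if s is None:
                return None
        return s
    return None  # distinct constants or function symbols

def rename(atom, suffix):
    """Rename variables apart so the two rules share no variables."""
    pred, args = atom
    def r(t):
        if is_var(t):
            return t + suffix
        if isinstance(t, tuple):
            return (t[0],) + tuple(r(a) for a in t[1:])
        return t
    return (pred, [r(a) for a in args])

def activates(r1, r2):
    """r1 activates r2 iff head(r1) unifies with some atom in body(r2)."""
    head = rename(r1['head'], '_1')
    for b in r2['body']:
        b = rename(b, '_2')
        if head[0] == b[0] and len(head[1]) == len(b[1]):
            s = {}
            for a, c in zip(head[1], b[1]):
                s = unify(a, c, s)
                if s is None:
                    break
            if s is not None:
                return True
    return False

# Program of Example 8:  r1: p(X,X) <- b(X).   r2: p(f(X),g(X)) <- p(X,X).
r1 = {'head': ('p', ['X', 'X']), 'body': [('b', ['X'])]}
r2 = {'head': ('p', [('f', 'X'), ('g', 'X')]), 'body': [('p', ['X', 'X'])]}
rules = {'r1': r1, 'r2': r2}
edges = [(a, b) for a in rules for b in rules if activates(rules[a], rules[b])]
print(edges)  # [('r1', 'r2')]: r1 activates r2; r2 cannot activate itself
```

On the example program the only edge produced is $(r_{1},r_{2})$, matching the activation graph of Example 9: $\tt p(f(X),g(X))$ does not unify with $\tt p(X,X)$, so $r_{2}$ has no self-loop.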
We now introduce the notion of limited term, which will be used to define a function, called the safety function, that takes as input a set of limited arguments and derives a new set of limited arguments of ${\cal P}$. Definition 6 (Limited terms) Given a rule $r=q(t_{1},\dots,t_{m})\leftarrow body(r)\in{\cal P}$ and a set $A$ of limited arguments, we say that $t_{i}$ is limited in $r$ (or $r$ limits $t_{i}$) w.r.t. $A$ if one of the following conditions holds: 1. every variable $X$ appearing in $t_{i}$ also appears in an argument in $body(r)$ belonging to $A$, or 2. $r$ is a strongly linear rule such that: (a) for every atom $p(u_{1},...,u_{n})\in head(r)\cup rbody(r)$, all terms $u_{1},...,u_{n}$ are either simple or complex; (b) $var(head(r))=var(rbody(r))$, (c) there is an argument $q[j]\in A$. $\Box$ Definition 7 (Safety Function) For any program ${\cal P}\!$, let $A$ be a set of limited arguments of ${\cal P}\!$ and let $\Sigma({\cal P})$ be the activation graph of ${\cal P}$. The safety function $\Psi(A)$ denotes the set of arguments $q[i]\in args({\cal P})$ such that for all rules $r=q(t_{1},\dots,t_{m})\leftarrow body(r)\in{\cal P}$, either $r$ does not depend on a cycle of $\Sigma({\cal P})$ or $t_{i}$ is limited in $r$ w.r.t. $A$.  $\Box$ Example 10 Consider the following program $P_{\ref{Example-safety-function}}$: $$\begin{array}[]{l}r_{1}\!:\ \tt p(f(X),g(Y))\leftarrow p(X,Y),b(X)\mbox{\rm.}\\ r_{2}\!:\ \tt q(f(Y))\leftarrow p(X,Y),q(Y)\mbox{\rm.}\end{array}$$ where $\tt b$ is a base predicate. Let $A=\mbox{${\Gamma\!A}$}({\cal P})=\{\tt b[1],p[1]\}$. The activation and the propagation graphs of this program are reported in Figure 5. The application of the safety function to the set of limited arguments $A$ gives $\Psi(A)=\{\tt b[1],p[1],p[2]\}$. Indeed: • $\texttt{b[1]}\in\Psi(A)$ since $\tt b$ is a base predicate which does not appear in the head of any rule; consequently all the rules with $\tt b$ in the head (i.e. 
the empty set) trivially satisfy the conditions of Definition 7. • $\texttt{p[1]}\in\Psi(A)$ because the unique rule with $\tt p$ in the head (i.e. $r_{1}$) satisfies the first condition of Definition 6; that is, $r_{1}$ limits w.r.t. $A$ the term $\tt f(X)$, which occurs in the head of $r_{1}$ in the position corresponding to argument $\tt p[1]$. • Since $r_{1}$ is strongly linear and the second condition of Definition 6 is satisfied, $\texttt{p[2]}\in\Psi(A)$ as well.  $\Box$ The following proposition shows that the safety function can be used to derive further limited arguments. Proposition 3 Let ${\cal P}$ be a program and let $A$ be a set of limited arguments of ${\cal P}$. Then, all arguments in $\Psi(A)$ are also limited. \proof Consider an argument $q[i]\in\Psi(A)$; then for every rule $r=q(t_{1},\dots,t_{n})\leftarrow body(r)$ either $r$ does not depend on a cycle of $\Sigma({\cal P})$ or $t_{i}$ is limited in $r$ w.r.t. $A$. Clearly, if $r$ does not depend on a cycle of $\Sigma({\cal P})$, it can be activated a finite number of times as it is not ‘effectively recursive’ and does not depend on rules which are effectively recursive. Moreover, if $t_{i}$ is limited in $r$ w.r.t. $A$, we have that either: 1) The first condition of Definition 6 is satisfied (i.e. every variable $X$ appearing in $t_{i}$ also appears in an argument in $body(r)$ belonging to $A$). This means that variables in $t_{i}$ can be replaced only by a finite number of values. 2) The second condition of Definition 6 is satisfied. Let $p(t_{1},...,t_{n})=head(r)$; the condition that all terms $t_{1},...,t_{n}$ must be simple or complex guarantees that, if terms in $head(r)$ grow, then they all grow together (conditions 2.a and 2.b). Moreover, if the growth of a term $t_{j}$ is blocked (condition 2.c), the growth of all terms (including $t_{i}$) is blocked too. Therefore, if one of the two conditions is satisfied for all rules defining $q$, the active domain of $q[i]$ is finite.   
$\Box$ Unfortunately, as shown in the following example, the relationship $A\subseteq\Psi(A)$ does not always hold for a generic set of arguments $A$, even if the arguments in $A$ are limited. Example 11 Consider the following program $P_{\ref{Example-notsafe-AR}}$: $$\begin{array}[]{l}r_{1}:\tt p(f(X),Y)\leftarrow q(X),r(Y)\mbox{\rm.}\\ r_{2}:\tt q(X)\leftarrow p(X,Y)\mbox{\rm.}\\ r_{3}:\tt t(Y)\leftarrow r(Y)\mbox{\rm.}\\ r_{4}:\tt s(Y)\leftarrow t(Y)\mbox{\rm.}\\ r_{5}:\tt r(Y)\leftarrow s(Y)\mbox{\rm.}\end{array}$$ Its activation graph $\Sigma(P_{\ref{Example-notsafe-AR}})$ is shown in Figure 6, whereas the set of restricted arguments is $AR(P_{\ref{Example-notsafe-AR}})=\mbox{${\Gamma\!A}$}(P_{\ref{Example-notsafe-AR}})=\tt\{r[1],t[1],s[1],p[2]\}$. Considering the set $A=\tt\{p[2]\}$, we have that the safety function $\Psi({\tt\{p[2]\}})=\emptyset$. Therefore, the relation $A\subseteq\Psi(A)$ does not hold for $A=\tt\{p[2]\}$. Moreover, regarding the set $A^{\prime}=\mbox{${\Gamma\!A}$}(P_{\ref{Example-notsafe-AR}})=\tt\{r[1],t[1],s[1],p[2]\}$, we have $\Psi(A^{\prime})=\tt\{r[1],t[1],s[1],p[2]\}=A^{\prime}$, i.e. the relation $A^{\prime}\subseteq\Psi(A^{\prime})$ holds.  $\Box$ The following proposition states that if we consider the set $A$ of $\Gamma$-acyclic arguments of a given program ${\cal P}$, the relation $A\subseteq\Psi(A)$ holds. Proposition 4 For any logic program ${\cal P}$: 1. $\mbox{${\Gamma\!A}$}({\cal P})\subseteq\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$; 2. $\Psi^{i}(\mbox{${\Gamma\!A}$}({\cal P}))\subseteq\Psi^{i+1}(\mbox{${\Gamma\!A}$}({\cal P}))$ for $i>0$. \proof 1) Suppose that $q[k]\in\mbox{${\Gamma\!A}$}({\cal P})$. Then $q[k]\in AR({\cal P})$ or $q[k]$ does not depend on a cycle in $\Delta({\cal P})$ spelling a string of ${\cal L}(\Gamma_{\!\cal P})$. In both cases $q[k]$ can depend only on arguments in $\mbox{${\Gamma\!A}$}({\cal P})$. 
If $q[k]$ does not depend on any argument, then it does not appear in the head of any rule and, consequently, $q[k]\in\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$. Otherwise, the first condition of Definition 6 is satisfied and $q[k]\in\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$. 2) We prove that $\Psi^{i}(\mbox{${\Gamma\!A}$}({\cal P}))\subseteq\Psi^{i+1}(\mbox{${\Gamma\!A}$}({\cal P}))$ for $i>0$ by induction. We start by showing that $\Psi^{i}(\mbox{${\Gamma\!A}$}({\cal P}))\subseteq\Psi^{i+1}(\mbox{${\Gamma\!A}$}({\cal P}))$ for $i=1$, i.e. that the relation $\Psi(\mbox{${\Gamma\!A}$}({\cal P}))\subseteq\Psi(\Psi(\mbox{${\Gamma\!A}$}({\cal P})))$ holds. In order to show this relation we must show that for every argument $q[k]$ of ${\cal P}$, if $q[k]\in\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$, then $q[k]\in\Psi(\Psi(\mbox{${\Gamma\!A}$}({\cal P})))$. Consider $q[k]\in\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$. Then, $q[k]$ satisfies Definition 7 w.r.t. $A=\mbox{${\Gamma\!A}$}({\cal P})$. From point 1) of this proof it follows that $\mbox{${\Gamma\!A}$}({\cal P})\subseteq\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$; consequently $q[k]$ satisfies Definition 7 w.r.t. $A=\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$ too and so $q[k]\in\Psi(\Psi(\mbox{${\Gamma\!A}$}({\cal P})))$. Suppose now that $\Psi^{m}(\mbox{${\Gamma\!A}$}({\cal P}))\subseteq\Psi^{m+1}(\mbox{${\Gamma\!A}$}({\cal P}))$ for $m>0$. In order to show that $\Psi^{m+1}(\mbox{${\Gamma\!A}$}({\cal P}))\subseteq\Psi^{m+2}(\mbox{${\Gamma\!A}$}({\cal P}))$ we must show that for every argument $q[k]$ of ${\cal P}$, if $q[k]\in\Psi^{m+1}(\mbox{${\Gamma\!A}$}({\cal P}))$, then $q[k]\in\Psi^{m+2}(\mbox{${\Gamma\!A}$}({\cal P}))$. Consider $q[k]\in\Psi^{m+1}(\mbox{${\Gamma\!A}$}({\cal P}))$. Then $q[k]$ satisfies Definition 7 w.r.t. $A=\Psi^{m}(\mbox{${\Gamma\!A}$}({\cal P}))$. Since $\Psi^{m}(\mbox{${\Gamma\!A}$}({\cal P}))\subseteq\Psi^{m+1}(\mbox{${\Gamma\!A}$}({\cal P}))$, $q[k]$ satisfies Definition 7 w.r.t. $A=\Psi^{m+1}(\mbox{${\Gamma\!A}$}({\cal P}))$ too. Consequently, $q[k]\in\Psi^{m+2}(\mbox{${\Gamma\!A}$}({\cal P}))$.   $\Box$ Observe that we can prove in a similar way that $AR({\cal P})\subseteq\Psi(AR({\cal P}))$ and that $\Psi^{i}(AR({\cal P}))\subseteq\Psi^{i+1}(AR({\cal P}))$ for $i>0$. Definition 8 (Safe Arguments and Safe Programs) For any program ${\cal P}$, $\mathit{safe}({\cal P})=\Psi^{\infty}(\mbox{${\Gamma\!A}$}({\cal P}))$ denotes the set of safe arguments of ${\cal P}$. A program ${\cal P}$ is said to be safe if all its arguments are safe. The class of safe programs will be denoted by ${\cal SP}$. $\Box$ Clearly, for any set of arguments $A\subseteq\mbox{${\Gamma\!A}$}({\cal P})$, $\Psi^{i}(A)\subseteq\Psi^{i}(\mbox{${\Gamma\!A}$}({\cal P}))$. Moreover, as shown in Proposition 4, when the starting set is $\mbox{${\Gamma\!A}$}({\cal P})$, the sequence $\mbox{${\Gamma\!A}$}({\cal P}),\ \Psi(\mbox{${\Gamma\!A}$}({\cal P})),\ \Psi^{2}(\mbox{${\Gamma\!A}$}({\cal P})),\ \dots$ is monotone and there is a finite $n=O(|args({\cal P})|)$ such that $\Psi^{n}(\mbox{${\Gamma\!A}$}({\cal P}))=\Psi^{\infty}(\mbox{${\Gamma\!A}$}({\cal P}))$. We can also define the inflationary version of $\Psi$ as $\hat{\Psi}(A)=A\cup\Psi(A)$, obtaining that $\hat{\Psi}^{i}(\mbox{${\Gamma\!A}$}({\cal P}))=\Psi^{i}(\mbox{${\Gamma\!A}$}({\cal P}))$, for all natural numbers $i$. The introduction of the inflationary version guarantees that the sequence $A,\ \hat{\Psi}(A),\ \hat{\Psi}^{2}(A),\ \dots$ is monotone for every set $A$ of limited arguments. This allows us to derive a (possibly) larger set of limited arguments starting from any set of limited arguments. Example 12 Consider again program $P_{\ref{Example-act-graph}}$ of Example 8. Although $AR(P_{\ref{Example-act-graph}})=\emptyset$, the program $P_{\ref{Example-act-graph}}$ is safe as $\Sigma(P_{\ref{Example-act-graph}})$ is acyclic. Consider now the program $P_{\ref{Example-safety-function}}$ of Example 10. 
As already shown in Example 10, the first application of the safety function to the set of $\Gamma$-acyclic arguments of $P_{\ref{Example-safety-function}}$ gives $\Psi(\mbox{${\Gamma\!A}$}(P_{\ref{Example-safety-function}}))=\{\tt b[1],p[1],p[2]\}$. The application of the safety function to the obtained set gives $\Psi(\Psi(\mbox{${\Gamma\!A}$}(P_{\ref{Example-safety-function}})))=\{\tt b[1],p[1],p[2],q[1]\}$. In fact, in the unique rule defining $\tt q$ (i.e. $r_{2}$), the term $\tt f(Y)$, corresponding to the argument $\tt q[1]$, is limited in $r_{2}$ w.r.t. $\{\tt b[1],p[1],p[2]\}$ (i.e. the variable $\tt Y$ appears in $body(r_{2})$ in a term corresponding to argument $\tt p[2]$, and argument $\tt p[2]$, belonging to the input set, is limited). At this point, all arguments of $P_{\ref{Example-safety-function}}$ belong to the resulting set. Thus, $\mathit{safe}(P_{\ref{Example-safety-function}})=args(P_{\ref{Example-safety-function}})$, and we have that program $P_{\ref{Example-safety-function}}$ is safe.   $\Box$ We now show results on the expressivity of the class ${\cal SP}$ of safe programs. Theorem 3 The class ${\cal SP}$ of safe programs strictly includes the class ${\Gamma\!\cal A}$ of $\Gamma$-acyclic programs and is strictly contained in the class ${\cal FG}$ of finitely ground programs. \proof ($\mbox{${\Gamma\!\cal A}$}\subsetneq\mbox{${\cal SP}$}$). From Proposition 4 it follows that $\mbox{${\Gamma\!\cal A}$}\subseteq\mbox{${\cal SP}$}$. Moreover, $\mbox{${\Gamma\!\cal A}$}\subsetneq\mbox{${\cal SP}$}$ as program $P_{\ref{Example-safety-function}}$ is safe but not $\Gamma$-acyclic. ($\mbox{${\cal SP}$}\subsetneq\mbox{${\cal FG}$}$). From Proposition 3 it follows that every argument of a safe program can take values only from a finite domain. Consequently, the set of all possible ground terms derived during the grounding process is finite and the program is finitely ground. 
Moreover, we have that the program $P_{\ref{Example1-Intro}}$ of Example 16 is finitely ground, but not safe.   $\Box$ As a consequence of Theorem 3, every safe program admits a finite minimum model. Complexity. We start by introducing a bound on the complexity of constructing the activation graph. Proposition 5 For any program ${\cal P}$, the activation graph $\Sigma({\cal P})$ can be constructed in time $O(n_{r}^{2}\times n_{b}\times(a_{p}\times a_{f})^{2})$, where $n_{r}$ is the number of rules of ${\cal P}$, $n_{b}$ is the maximum number of body atoms in a rule, $a_{p}$ is the maximum arity of predicate symbols and $a_{f}$ is the maximum arity of function symbols. \proof To check whether a rule $r_{i}$ activates a rule $r_{j}$ we have to determine if an atom $B$ in $body(r_{j})$ unifies with the head atom $A$ of $r_{i}$. This can be done in time $O(n_{b}\times u)$, where $u$ is the cost of deciding whether two atoms unify, which is quadratic in the size of the two atoms [Venturini Zilli (1975)], that is, $u=O((a_{p}\times a_{f})^{2})$ as the size of atoms is bounded by $a_{p}\times a_{f}$ (recall that the maximum depth of terms is 1). In order to construct the activation graph we have to consider all pairs of rules and for each pair we have to check if the first rule activates the second one. Therefore, the global complexity is $O(n_{r}^{2}\times n_{b}\times u)=O(n_{r}^{2}\times n_{b}\times(a_{p}\times a_{f})^{2})$. $\Box$ We recall that given two atoms $A$ and $B$, the size of an m.g.u. $\theta$ for $\{A,B\}$ can be, in the worst case, exponential in the size of $A$ and $B$, but the complexity of deciding whether a unifier for $A$ and $B$ exists is quadratic in the size of $A$ and $B$ [Venturini Zilli (1975)]. Proposition 6 The complexity of deciding whether a program ${\cal P}$ is safe is $O((size({\cal P}))^{2}+|args({\cal P})|^{3}\times|F_{\!\cal P}|)$. 
\proof The construction of the activation graph $\Sigma({\cal P})$ can be done in time $O(n_{r}^{2}\times n_{b}\times(a_{p}\times a_{f})^{2})$, where $n_{r}$ is the number of rules of ${\cal P}$, $n_{b}$ is the maximum number of body atoms in a rule, $a_{p}$ is the maximum arity of predicate symbols and $a_{f}$ is the maximum arity of function symbols (cf. Proposition 5). The complexity of computing $\mbox{${\Gamma\!A}$}({\cal P})$ is bounded by $O(|args({\cal P})|^{3}\times|F_{\!\cal P}|)$ (cf. Theorem 2). From Definition 7 and Proposition 4 it follows that the sequence $\mbox{${\Gamma\!A}$}({\cal P})$, $\Psi(\mbox{${\Gamma\!A}$}({\cal P}))$, $\Psi^{2}(\mbox{${\Gamma\!A}$}({\cal P}))$, ... is monotone and converges in a finite number of steps bounded by the cardinality of the set $args({\cal P})$. The complexity of determining the rules not depending on cycles in the activation graph $\Sigma({\cal P})$ is bounded by $O(n_{r}^{2})$, as it can be done by means of a depth-first traversal of $\Sigma({\cal P})$, which is linear in the number of its edges. Since checking whether the conditions of Definition 6 hold for all arguments in ${\cal P}$ is in $O(size({\cal P}))$, checking such conditions for at most $|args({\cal P})|$ steps is in $O(|args({\cal P})|\times size({\cal P}))$. Thus, the complexity of checking all the conditions of Definition 7 for all steps is $O(n_{r}^{2}+|args({\cal P})|\times size({\cal P}))$. Since $n_{r}^{2}\times n_{b}\times(a_{p}\times a_{f})^{2}=O((size({\cal P}))^{2})$, $|args({\cal P})|=O(size({\cal P}))$ and $n_{r}^{2}=O((size({\cal P}))^{2})$, the complexity of deciding whether ${\cal P}$ is safe is $O((size({\cal P}))^{2}+|args({\cal P})|^{3}\times|F_{\!\cal P}|)$.   $\Box$ 6 Bound Queries and Examples In this section we consider the extension of our framework to queries. This is an important aspect since, in many cases, the answer to a query is finite even though the models may have infinite cardinality. 
This happens very often when the query goal contains ground terms. 6.1 Bound Queries Rewriting techniques, such as magic-set, allow bottom-up evaluators to efficiently compute (partially) ground queries, that is, queries whose query goal contains ground terms. These techniques rewrite queries (consisting of a query goal and a program) so that the top-down evaluation is emulated [Beeri and Ramakrishnan (1991), Greco (2003), Greco et al. (2005), Alviano et al. (2010)]. Labelling techniques similar to magic-set have also been studied in the context of term rewriting [Zantema (1995)]. Before presenting the rewriting technique, let us introduce some notation. A query is a pair $Q=\langle q(u_{1},\dots,u_{n}),{\cal P}\rangle$, where $q(u_{1},\dots,u_{n})$ is an atom called query goal and ${\cal P}$ is a program. We recall that an adornment of a predicate symbol $p$ with arity $n$ is a string $\alpha\in\{b,f\}^{*}$ such that $|\alpha|=n$ (adornments of predicates, introduced to optimize the bottom-up computation of logic queries, are similar to modes of usage defined in logic programming to describe how the arguments of a predicate $p$ must be restricted when an atom with predicate symbol $p$ is called). The symbols $b$ and $f$ denote, respectively, bound and free arguments. Given a query $Q=\langle q(u_{1},\dots,u_{n}),{\cal P}\rangle$, $MagicS(Q)=\langle q^{\alpha}(u_{1},\dots,u_{n}),MagicS(q(u_{1},\dots,u_{n}),{\cal P})\rangle$ indicates the rewriting of $Q$, where $MagicS(q(u_{1},\dots,u_{n}),{\cal P})$ denotes the rewriting of the rules in ${\cal P}$ with respect to the query goal $q(u_{1},\dots,u_{n})$ and $\alpha$ is the adornment associated with the query goal. 
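For illustration, computing the adornment string of a query goal can be sketched as follows. This is a hypothetical Python fragment under our own encoding assumptions (uppercase strings as variables, tuples as complex terms; the helper names are ours): an argument is adorned $b$ exactly when the corresponding term is ground.

```python
# Hypothetical sketch: deriving the adornment (a string over {b, f}) of a
# query goal.  Variables are uppercase strings; a complex term f(t1,...,tk)
# is the tuple ('f', t1, ..., tk); an argument is bound ('b') iff ground.

def is_ground(t):
    if isinstance(t, tuple):               # complex term: check subterms
        return all(is_ground(a) for a in t[1:])
    return not (isinstance(t, str) and t[:1].isupper())

def adornment(args):
    return ''.join('b' if is_ground(t) else 'f' for t in args)

# A goal q(f(f(a)), L) has a ground first argument and a free second one.
print(adornment([('f', ('f', 'a')), 'L']))  # prints 'bf'
```

This matches the convention used below: a fully ground goal such as $\tt p(f(f(a)))$ gets the adornment $b$, written $\tt p^{b}$ in the rewritten programs.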
We assume that our queries $\langle G,{\cal P}\rangle$ are positive, as the rewriting technique is here applied to $\langle G,st({\cal P})\rangle$ to generate the positive program which is used to restrict the source program (see Section 8). Definition 9 A query $Q=\langle G,{\cal P}\rangle$ is safe if ${\cal P}$ or $MagicS(G,{\cal P})$ is safe.  $\Box$ It is worth noting that it is possible to have a query $Q$=$\langle G,{\cal P}\rangle$ such that ${\cal P}$ is safe, but the rewritten program $MagicS(G,{\cal P})$ is not safe and vice versa. Example 13 Consider the query $Q={\tt\langle p(f(f(a)))},P_{\ref{example-query}}\rangle$, where $P_{\ref{example-query}}$ is defined below: $$\begin{array}[]{l}\tt p(a)\mbox{\rm.}\\ \tt p(f(X))\!\leftarrow\!p(X)\mbox{\rm.}\end{array}$$ $P_{\ref{example-query}}$ is not safe, but if we rewrite the program using the magic-set method, we obtain the safe program: $$\begin{array}[]{lll}\tt magic\_p^{b}(f(f(a)))\mbox{\rm.}\\ \tt magic\_p^{b}(X)\leftarrow magic\_p^{b}(f(X))\mbox{\rm.}\\ \tt p^{b}(a)\leftarrow magic\_p^{b}(a)\mbox{\rm.}\\ \tt p^{b}(f(X))\leftarrow magic\_p^{b}(f(X)),\ p^{b}(X)\mbox{\rm.}\\ \end{array}$$ Consider now the query $Q={\tt\langle p(a)},{\cal P}^{\prime}_{\ref{example-query}}\rangle$, where ${\cal P}^{\prime}_{\ref{example-query}}$ is defined as follows: $$\begin{array}[]{l}\tt p(f(f(a)))\mbox{\rm.}\\ \tt p(X)\!\leftarrow\!p(f(X))\mbox{\rm.}\end{array}$$ The program is safe, but after the magic-set rewriting we obtain the following program: $$\begin{array}[]{l}\tt magic\_p^{b}(a)\mbox{\rm.}\\ \tt magic\_p^{b}(f(X))\leftarrow magic\_p^{b}(X)\mbox{\rm.}\\ \tt p^{b}(f(f(a)))\leftarrow magic\_p^{b}(f(f(a)))\mbox{\rm.}\\ \tt p^{b}(X)\leftarrow magic\_p^{b}(X),p^{b}(f(X))\mbox{\rm.}\end{array}$$ which is not recognized as safe because it is not terminating.  
$\Box$ Thus, we propose to first check if the input program is safe and, if it does not satisfy the safety criterion, to check the property on the rewritten program, which is query-equivalent to the original one. We recall that for each predicate symbol $p$ with arity $n$, the number of adorned predicates $p^{\alpha_{1}...\alpha_{n}}$ could be exponential and bounded by $O(2^{n})$. However, in practical cases only a few adornments are generated for each predicate symbol. Indeed, rewriting techniques are well consolidated and widely used to compute bound queries. 6.2 Examples Let us now consider the application of the technique described above to some practical examples. Since each predicate in the rewritten query has a unique adornment, we shall omit adornments. Example 14 Consider the query $\langle{\tt reverse([a,b,c,d],L)},P_{\ref{reverse-example}}\rangle$, where $P_{\ref{reverse-example}}$ is defined by the following rules: $$\begin{array}[]{l}r_{0}:\tt\ reverse([\,],[\,])\mbox{\rm.}\\ r_{1}:\tt\ reverse([X|Y],[X|Z])\leftarrow reverse(Y,Z)\mbox{\rm.}\\ \end{array}$$ The equivalent program $P^{\prime}_{\ref{reverse-example}}$, rewritten to be computed by means of a bottom-up evaluator, is: $$\begin{array}[]{l}\rho_{0}:\tt\ m\_\,reverse([a,b,c,d])\mbox{\rm.}\\ \rho_{1}:\tt\ m\_\,reverse(Y)\leftarrow m\_\,reverse([X|Y])\mbox{\rm.}\\ \rho_{2}:\tt\ reverse([\,],[\,])\leftarrow m\_\,reverse([\,])\mbox{\rm.}\\ \rho_{3}:\tt\ reverse([X|Y],[X|Z])\leftarrow m\_\,reverse([X|Y]),reverse(Y,Z)\mbox{\rm.}\end{array}$$ Observe that $P^{\prime}_{\ref{reverse-example}}$ is not argument-restricted. In order to check the $\Gamma$-acyclicity and safety criteria, we have to rewrite rule $\rho_{3}$, which has complex terms in both the head and the body. Thus we add an additional predicate $\tt b1$ defined by rule $\rho_{4}$ and replace $\rho_{3}$ by $\rho^{\prime}_{3}$.
$$\begin{array}[]{l}\rho^{\prime}_{3}:\tt\ reverse([X|Y],[X|Z])\leftarrow b1(X,Y,Z)\mbox{\rm.}\\ \rho_{4}:\tt\ b1(X,Y,Z)\leftarrow m\_reverse([X|Y]),reverse(Y,Z)\mbox{\rm.}\end{array}$$ The obtained program, denoted $P^{\prime\prime}_{\ref{reverse-example}}$, is safe but not $\Gamma$-acyclic. $\Box$ Example 15 Consider the query $\langle\tt length([a,b,c,d],L),$ $P_{\ref{length-example}}\rangle$, where $P_{\ref{length-example}}$ is defined by the following rules: $$\begin{array}[]{l}r_{0}:\tt\ length([\,],0)\mbox{\rm.}\\ r_{1}:\tt\ length([X|T],I+1)\leftarrow length(T,I)\mbox{\rm.}\\ \end{array}$$ The equivalent program $P^{\prime}_{\ref{length-example}}$ is rewritten to be computed by means of a bottom-up evaluator as follows. (Observe that program $P^{\prime}_{\ref{length-example}}$ is equivalent to program $P_{\ref{count-ex}}$ presented in the Introduction, assuming that the base predicate $\tt input$ is defined by a fact $\tt input([a,b,c,d])$.) $$\begin{array}[]{l}\rho_{0}:\tt\ m\_length([a,b,c,d])\mbox{\rm.}\\ \rho_{1}:\tt\ m\_length(T)\leftarrow m\_length([X|T])\mbox{\rm.}\\ \rho_{2}:\tt\ length([\,],0)\leftarrow m\_length([\,])\mbox{\rm.}\\ \rho_{3}:\tt\ length([X|T],I+1)\leftarrow m\_length([X|T]),\ length(T,I)\mbox{\rm.}\\ \end{array}$$ Also in this case, it is necessary to split rule $\rho_{3}$ into two rules to avoid having function symbols in both the head and the body, as shown below: $$\begin{array}[]{l}\rho^{\prime}_{3}:\tt\ length([X|T],I+1)\leftarrow b1(X,T,I)\mbox{\rm.}\\ \rho_{4}:\tt\ b1(X,T,I)\leftarrow m\_length([X|T]),\ length(T,I)\mbox{\rm.}\end{array}$$ The obtained program, denoted $P^{\prime\prime}_{\ref{length-example}}$, is safe but not $\Gamma$-acyclic. $\Box$ We conclude this section by pointing out that the queries in the two examples above are not recognized as terminating by most of the previously proposed techniques, including $AR$.
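To see why the magic-set rewriting makes bottom-up evaluation feasible, the program $P^{\prime}_{\ref{length-example}}$ of Example 15 can be evaluated naively in a few lines. This is only an illustrative sketch (not part of the paper's technique), in which Python tuples stand for Prolog lists: the magic predicate confines the instantiation of $\tt length$ to the suffixes of the query argument, so the fixpoint is finite.

```python
def eval_length(query):
    """Naive bottom-up evaluation of the rewritten length program."""
    query = tuple(query)
    # rho_0, rho_1: m_length holds for every suffix of the query list
    magic = {query[i:] for i in range(len(query) + 1)}
    # rho_2, rho_3: iterate the length rules, restricted by m_length,
    # until no new fact can be derived (fixpoint)
    length = {(): 0}                          # length([], 0)
    changed = True
    while changed:
        changed = False
        for t in magic:
            if t and t[1:] in length and t not in length:
                length[t] = length[t[1:]] + 1   # length([X|T], I+1)
                changed = True
    return length[query]

print(eval_length(['a', 'b', 'c', 'd']))  # -> 4
```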
We also observe that many programs follow the structure of the programs presented in the examples above. For instance, programs whose aim is the verification of a given property on the elements of a given list have the following structure: $$\begin{array}[]{l}\tt verify([\,],[\,])\mbox{\rm.}\\ \tt verify([X|L_{1}],[Y|L_{2}])\leftarrow property(X,Y),verify(L_{1},L_{2})\mbox{\rm.}\end{array}$$ Consequently, queries having a ground argument in the query goal are terminating. 7 Further Improvements The safety criterion can be improved further, as it is not able to detect that the activation graph may contain cyclic paths that are not effective or can only be activated a finite number of times. The next example shows a program which is finitely ground, but not recognized as terminating by the safety criterion. Example 16 Consider the following logic program $P_{\ref{Example1-Intro}}$ obtained from $P_{\ref{Example-act-graph}}$ by adding an auxiliary predicate $q$: $$\begin{array}[]{l}r_{1}:\tt p(X,X)\leftarrow b(X)\mbox{\rm.}\\ r_{2}:\tt q(f(X),g(X))\leftarrow p(X,X)\mbox{\rm.}\\ r_{3}:\tt p(X,Y)\leftarrow q(X,Y)\mbox{\rm.}\end{array}$$ $P_{\ref{Example1-Intro}}$ is equivalent to $P_{\ref{Example-act-graph}}$ w.r.t. predicate $p$.   $\Box$ Although the activation graph $\Sigma(P_{\ref{Example1-Intro}})$ contains a cycle, the rules occurring in the cycle cannot be activated an infinite number of times. Therefore, in this section we introduce the notion of active paths and extend the definitions of activation graphs and safe programs. Definition 10 (Active Path) Let ${\cal P}$ be a program and $k\geq 1$ be a natural number.
The path $(r_{1},r_{2}),\dots,(r_{k},r_{k+1})$ is an active path in the activation graph $\Sigma({\cal P})$ iff there is a set of unifiers $\theta_{1},\dots,\theta_{k}$ such that • $head(r_{1})$ unifies with an atom from $body(r_{2})$ with unifier $\theta_{1}$; • $head(r_{i})\theta_{i-1}$ unifies with an atom from $body(r_{i+1})$ with unifier $\theta_{i}$, for $i\in[2..k]$. We write $r_{1}\rightsquigarrow\!\!\!\!\!\!^{k}\ r_{k+1}$ if there is an active path of length $k$ from $r_{1}$ to $r_{k+1}$ in $\Sigma({\cal P})$.  $\Box$ Intuitively, $(r_{1},r_{2}),\dots,(r_{k},r_{k+1})$ is an active path if $r_{1}$ transitively activates rule $r_{k+1}$: the head of $r_{1}$ unifies with some body atom of $r_{2}$ with mgu $\theta_{1}$, then the head of the rule $r_{2}\theta_{1}$ unifies with some body atom of $r_{3}$ with mgu $\theta_{2}$, then the head of the rule $r_{3}\theta_{2}$ unifies with some body atom of $r_{4}$ with mgu $\theta_{3}$, and so on, until the head of the rule $r_{k}\theta_{k-1}$ unifies with some body atom of $r_{k+1}$ with mgu $\theta_{k}$. Definition 11 (k-Restricted Activation Graph) Let ${\cal P}$ be a program and $k\geq 1$ be a natural number. The $k$-restricted activation graph $\Sigma_{k}({\cal P})=({\cal P},E)$ consists of a set of nodes denoting the rules of ${\cal P}$ and a set of edges $E$ defined as follows: there is an edge $(r_{i},r_{j})$ from $r_{i}$ to $r_{j}$ iff $r_{i}\rightsquigarrow\!\!\!\!\!\!^{k}\ r_{j}$, i.e. iff there is an active path of length $k$ from $r_{i}$ to $r_{j}$.  $\Box$ Example 17 The $k$-restricted activation graphs for the program of Example 16, with $k\in[1..3]$, are reported in Figure 7. $\Box$ Obviously, the activation graph presented in Definition 5 is $1$-restricted. We next extend the definition of the safety function by referring to $k$-restricted activation graphs, instead of the (1-restricted) activation graph.
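Checking whether an edge of the activation graph is active boils down to first-order unification: $head(r_{1})$ must unify with some body atom of $r_{2}$. The sketch below is an illustration under our own encoding (an assumption of this sketch): compound terms are tuples $(\mathit{functor}, \mathit{arg}_1, \dots)$, variables are strings starting with an uppercase letter, and, as in Prolog, no occurs check is performed.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Chase variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return an mgu (as a dict) of terms a and b, or None on failure."""
    s = dict(s or {})
    stack = [(a, b)]
    while stack:
        x, y = (walk(t, s) for t in stack.pop())
        if x == y:
            continue
        if is_var(x):
            s[x] = y
        elif is_var(y):
            s[y] = x
        elif isinstance(x, tuple) and isinstance(y, tuple) \
                and x[0] == y[0] and len(x) == len(y):
            stack.extend(zip(x[1:], y[1:]))
        else:
            return None          # functor clash: not unifiable
    return s

def activates(head, body_atoms):
    """r1 activates r2 iff head(r1) unifies with some body atom of r2."""
    return any(unify(head, b) is not None for b in body_atoms)

# Example 16: head(r1) = p(X,X) unifies with the body atom p(X1,X1)
# of (a renamed copy of) r2, so the edge (r1, r2) is present.
print(activates(('p', 'X', 'X'), [('p', 'X1', 'X1')]))  # -> True
```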
Definition 12 ($k$-Safety Function) For any program ${\cal P}\!$ and natural number $k\geq 1$, let $A$ be a set of limited arguments of ${\cal P}\!$. The $k$-safety function $\Psi_{k}(A)$ denotes the set of arguments $q[i]\in args({\cal P})$ such that for all rules $r=q(t_{1},\dots,t_{m})\leftarrow body(r)\in{\cal P}$, either $r$ does not depend on a cycle $\pi$ of $\Sigma_{j}({\cal P})$, for some $1\leq j\leq k$, or $t_{i}$ is limited in $r$ w.r.t. $A$. $\Box$ Observe that the $k$-safety function $\Psi_{k}$ is defined as a natural extension of the safety function $\Psi$ by considering all the $j$-restricted activation graphs, for $1\leq j\leq k$. Note that the $1$-restricted activation graph coincides with the standard activation graph and, consequently, $\Psi_{1}$ coincides with $\Psi$. Definition 13 ($k$-Safe Arguments) For any program ${\cal P}$, $\mathit{safe}_{k}({\cal P})=\Psi_{k}^{\infty}(\mbox{${\Gamma\!A}$}({\cal P}))$ denotes the set of $k$-safe arguments of ${\cal P}$. A program ${\cal P}$ is said to be $k$-safe if all arguments are $k$-safe. $\Box$ Example 18 Consider again the logic program $P_{\ref{Example1-Intro}}$ from Example 16. $\Sigma_{2}(P_{\ref{Example1-Intro}})$ contains the unique cycle $(r_{3},r_{3})$; consequently, $\tt q[1]$ and $\tt q[2]$, appearing only in the head of rule $r_{2}$, are $2$-safe. By iteratively applying the operator $\Psi_{2}$ to the set of limited arguments $\tt\{b[1],q[1],q[2]\}$, we derive that $\tt p[1]$ and $\tt p[2]$ are also $2$-safe. Since $\mathit{safe}_{2}(P_{\ref{Example1-Intro}})=args(P_{\ref{Example1-Intro}})$, we have that $P_{\ref{Example1-Intro}}$ is $2$-safe. Observe also that $\Sigma_{3}(P_{\ref{Example1-Intro}})$ does not contain any edge and, therefore, all arguments are $3$-safe.   $\Box$ For any natural number $k>0$, $\mbox{${\cal SP}$}_{k}$ denotes the class of $k$-safe logic programs, that is, the set of programs ${\cal P}$ such that $safe_{k}({\cal P})=args({\cal P})$.
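The computation of $\mathit{safe}_{2}$ for Example 18 can be sketched as a fixpoint iteration. The encoding below is entirely an assumption of this sketch: it records only what Definition 12 needs for this program, namely which head arguments are defined by a rule depending on a cycle of $\Sigma_{2}(P)$ and which body arguments they copy from (a head argument of a cyclic rule is taken to be limited w.r.t. $A$ iff its source arguments are in $A$).

```python
ARGS = {'b[1]', 'p[1]', 'p[2]', 'q[1]', 'q[2]'}
LIMITED = {'b[1]'}                      # base-predicate argument
# Sigma_2(P) contains the single cycle (r3, r3): only r3 depends on it.
# r3: p(X,Y) <- q(X,Y), so p[i] is limited in r3 iff q[i] is in A.
CYCLIC_DEFS = {'p[1]': ['q[1]'], 'p[2]': ['q[2]']}

def psi2(A):
    """One application of the 2-safety function Psi_2 to A."""
    out = set(LIMITED)
    for arg in ARGS:
        if arg not in CYCLIC_DEFS:            # no defining rule on a cycle
            out.add(arg)
        elif all(src in A for src in CYCLIC_DEFS[arg]):
            out.add(arg)                      # limited w.r.t. A
    return out | A

# Iterate up to the fixpoint Psi_2^inf({b[1]}).
A = set(LIMITED)
while psi2(A) != A:
    A = psi2(A)
print(sorted(A))  # -> ['b[1]', 'p[1]', 'p[2]', 'q[1]', 'q[2]']: P is 2-safe
```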
The following proposition states that the classes of $k$-safe programs define a hierarchy where $\mbox{${\cal SP}$}_{k}\subsetneq\mbox{${\cal SP}$}_{k+1}$. Proposition 7 The class $\mbox{${\cal SP}$}_{k+1}$ of $(k+1)$-safe programs strictly extends the class $\mbox{${\cal SP}$}_{k}$ of $k$-safe programs, for any $k\geq 1$. \proof ($\mbox{${\cal SP}$}_{k}\subseteq\mbox{${\cal SP}$}_{k+1}$) It follows straightforwardly from the definition of the $k$-safety function. ($\mbox{${\cal SP}$}_{k}\neq\mbox{${\cal SP}$}_{k+1}$) To show that the containment is strict, consider the program $P_{\ref{Example1-Intro}}$ from Example 16 for $k=1$ and the following program ${\cal P}_{k}$ for $k>1$: $$\begin{array}[]{rl}r_{0}\!:&\tt q_{1}(f(X),g(X))\leftarrow p(X,X)\mbox{\rm.}\\ r_{1}\!:&\tt q_{2}(X,Y)\leftarrow q_{1}(X,Y)\mbox{\rm.}\\ \dots\\ r_{k-1}\!:&\tt q_{k}(X,Y)\leftarrow q_{k-1}(X,Y)\mbox{\rm.}\\ \ \ r_{k}\!:&\tt p(X,Y)\leftarrow q_{k}(X,Y)\mbox{\rm.}\\ \end{array}$$ It is easy to see that ${\cal P}_{k}$ is in $\mbox{${\cal SP}$}_{k+1}$, but not in $\mbox{${\cal SP}$}_{k}$.  $\Box$ Recall that the minimum model of a standard program ${\cal P}$ can be characterized in terms of the classical immediate consequence operator $\mbox{$\cal T$}\!_{\cal P}$, defined as follows. Given a set $I$ of ground atoms, $$\mbox{$\cal T$}\!_{\cal P}(I)=\{A\theta\mid\exists r\!:A\leftarrow A_{1},\dots,A_{n}\in{\cal P}\mbox{ and }\exists\theta\mbox{ s.t. }A_{i}\theta\in I\mbox{ for every }1\leq i\leq n\}$$ where $\theta$ is a substitution replacing variables with ground terms. Thus, $\mbox{$\cal T$}\!_{\cal P}$ takes as input a set of ground atoms and returns as output a set of ground atoms; clearly, $\mbox{$\cal T$}\!_{\cal P}$ is monotonic.
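For a ground program the operator $\mbox{$\cal T$}\!_{\cal P}$ can be sketched in a few lines, iterating from the empty interpretation up to the fixpoint. This is a simplified illustration: rules are assumed to be already instantiated, so no substitution step is needed, and atoms are plain strings.

```python
# Rules are pairs (head_atom, [body_atoms]); atoms are ground strings.

def t_p(rules, interpretation):
    """One application of T_P: derive heads whose bodies hold in I."""
    return {h for (h, body) in rules
            if all(b in interpretation for b in body)}

def minimum_model(rules):
    """Iterate T_P from the empty set up to its least fixpoint."""
    current = set()
    while True:
        nxt = t_p(rules, current) | current
        if nxt == current:
            return current
        current = nxt

# Ground instantiation of the program of Example 13, cut off at depth 2:
rules = [('p(a)', []),
         ('p(f(a))', ['p(a)']),
         ('p(f(f(a)))', ['p(f(a))'])]
print(sorted(minimum_model(rules)))  # -> ['p(a)', 'p(f(a))', 'p(f(f(a)))']
```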
The $i$-th iteration of $\mbox{$\cal T$}\!_{\cal P}$ ($i\geq 1$) is defined as follows: $\mbox{$\cal T$}^{1}_{\!\!\!{\cal P}}(I)=\mbox{$\cal T$}_{\cal P}(I)$ and $\mbox{$\cal T$}_{\cal P}^{i}(I)=\mbox{$\cal T$}_{\cal P}(\mbox{$\cal T$}\!_{\cal P}^{i-1}(I))$ for $i>1$. It is well known that the minimum model of ${\cal P}$ is equal to the fixed point $\mbox{$\cal T$}_{\cal P}^{\infty}(\emptyset)$. A rule $r$ is fired at run-time with a substitution $\theta$ at step $i$ if $head(r)\theta\in T_{\cal P}^{i}(\emptyset)-T_{\cal P}^{i-1}(\emptyset)$. Moreover, we say that $r$ is fired (at run-time) by a rule $s$ if $r$ is fired with a substitution $\theta$ at step $i$, $s$ is fired with a substitution $\sigma$ at step $i-1$, and $head(s)\sigma\in body(r)\theta$. Let ${\cal P}$ be a program whose minimum model is $M={\cal MM}({\cal P})=T_{\cal P}^{\infty}(\emptyset)$; then $M[\![r]\!]$ denotes the set of facts which have been inferred during the application of the immediate consequence operator using rule $r$, that is, the set of facts $head(r)\theta$ such that, for some natural number $i$, $head(r)\theta\in T_{\cal P}^{i}(\emptyset)-T_{\cal P}^{i-1}(\emptyset)$; $M[\![r]\!]$ is infinite iff $r$ is fired an infinite number of times. Clearly, if a rule $s$ fires at run-time a rule $r$, then the activation graph contains an edge $(s,r)$. An active sequence of rules is a sequence of rules $r_{1},\dots,r_{n}$ such that $r_{i}$ fires at run-time rule $r_{i+1}$ for $i\in[1..n-1]$. Theorem 4 Let ${\cal P}$ be a logic program and let $r$ be a rule of ${\cal P}$. If $M[\![r]\!]$ is infinite, then, for every natural number $k$, $r$ depends on a cycle of $\Sigma_{k}({\cal P})$. \proof Let $n_{r}$ be the number of rules of ${\cal P}$ and let $N=n_{r}*k$. If $M[\![r]\!]$ is infinite, we have that there is an active sequence of rules $r^{\prime}_{0},r^{\prime}_{1},\dots,r^{\prime}_{i},\dots,r^{\prime}_{N}$ such that $r^{\prime}_{N}$ coincides with $r$.
This means that $$r^{\prime}_{0}\rightsquigarrow\!\!\!\!\!\!^{k}\ r^{\prime}_{k},\ r^{\prime}_{k}\rightsquigarrow\!\!\!\!\!\!^{k}\ r^{\prime}_{2k},\ \dots,\ r^{\prime}_{j*k}\rightsquigarrow\!\!\!\!\!\!^{k}\ r^{\prime}_{(j+1)*k},\ \dots,\ r^{\prime}_{(n_{r}-1)*k}\rightsquigarrow\!\!\!\!\!\!^{k}\ r^{\prime}_{N},$$ i.e. that the $k$-restricted activation graph $\Sigma_{k}({\cal P})$ contains the path $\pi=(r^{\prime}_{0},r^{\prime}_{k}),(r^{\prime}_{k},r^{\prime}_{2k}),\dots,(r^{\prime}_{j*k},r^{\prime}_{(j+1)*k}),\dots,(r^{\prime}_{(n_{r}-1)*k},r)$. Observe that the number of rules involved in $\pi$ is $n_{r}+1$, which is greater than the number of rules of ${\cal P}$. Consequently, some rule occurs more than once in $\pi$, i.e. $\pi$ contains a cycle. Therefore, $r$ depends on a cycle of $\Sigma_{k}({\cal P})$. $\Box$ As shown in Example 18, in some cases the analysis of the $k$-restricted activation graph is enough to determine the termination of a program. Indeed, letting $cyclicR(\Sigma_{k}({\cal P}))$ be the set of rules $r$ in ${\cal P}$ s.t. $r$ depends on a cycle in $\Sigma_{k}({\cal P})$, the following results hold. Corollary 2 A program ${\cal P}$ is terminating if $\forall r\in{\cal P}$, $\exists k$ s.t. $r\not\in cyclicR(\Sigma_{k}({\cal P}))$. \proof Straightforward from Theorem 4. $\Box$ In particular, if there is a single $k$ such that $r\not\in cyclicR(\Sigma_{k}({\cal P}))$ for all rules $r\in{\cal P}$, then ${\cal P}$ is terminating. We conclude this section by showing that the improvements discussed here increase the complexity of the technique, which is no longer polynomial. Proposition 8 For any program ${\cal P}$ and natural number $k>1$, the activation graph $\Sigma_{k}({\cal P})$ can be constructed in time exponential in the size of ${\cal P}$ and $k$. \proof Let $(r_{1},r_{2})\cdots(r_{k},r_{k+1})$ be an active path of length $k$ in $\Sigma({\cal P})$.
Consider a pair $(r_{i},r_{i+1})$ and two unifying atoms $A_{i}=head(r_{i})$ and $B_{i+1}\in body(r_{i+1})$ (with $1\leq i\leq k$). The size of an mgu $\theta$ for $A_{i}$ and $B_{i+1}$, represented in the standard way (cf. Section 2), can be exponential in the size of the two atoms. Clearly, the sizes of $A_{i}\theta$ and $B_{i+1}\theta$ can also be exponential. Consequently, the size of $A_{i+1}\theta$, which is used for the next step, can grow exponentially as well. Moreover, since in the computation of an active path of length $k$ we apply $k$ mgu's, the size of terms can grow exponentially with $k$. $\Box$ Observe that for the computation of the 1-restricted activation graph it is sufficient to determine whether two atoms unify (without computing the mgu), whereas for the computation of the $k$-restricted activation graphs, with $k>1$, it is necessary to construct all the mgu's and to apply them to atoms. 8 Computing stable models for disjunctive programs In this section we discuss how termination criteria, defined for standard programs, can be applied to general disjunctive logic programs. First, observe that we have assumed that whenever the same variable $X$ appears in two terms occurring, respectively, in the head and body of a rule, at most one of the two terms is a complex term, and that the nesting level of complex terms is at most one. This assumption entails no real restriction, as every program can be rewritten into an equivalent program satisfying it.
For instance, a rule $r^{\prime}$ of the form $$\tt p(f(g(X)),h(Y,Z))\leftarrow p(f(X),Y),\ q(h(g(X),l(Z)))$$ is rewritten into the set of 'flattened' rules below: $$\begin{array}[]{lll}\tt p(f(A),h(Y,Z))&\leftarrow&\tt b_{1}(A,Y,Z)\\ \tt b_{1}(g(X),Y,Z)&\leftarrow&\tt b_{2}(X,Y,Z)\\ \tt b_{2}(X,Y,Z)&\leftarrow&\tt b_{3}(X,Y,g(X),l(Z))\\ \tt b_{3}(X,Y,B,C)&\leftarrow&\tt p(f(X),Y),\ q(h(B,C))\end{array}$$ where $\tt b_{1},b_{2}$ and $\tt b_{3}$ are new predicate symbols, whereas $\tt A,B$ and $\tt C$ are new variables introduced to flatten terms with depth greater than 1. More specifically, let $d(p(t_{1},\dots,t_{n}))=max\{d(t_{1}),\dots,d(t_{n})\}$ be the depth of atom $p(t_{1},\dots,t_{n})$ and $d(A_{1},\dots,A_{n})=max\{d(A_{1}),\dots,d(A_{n})\}$ be the depth of a conjunction of atoms $A_{1},\dots,A_{n}$; for each standard rule $r$ we generate a set of 'flattened' rules, denoted by $flat(r)$, whose cardinality is bounded by $O(d(head(r))+d(body(r)))$. Therefore, given a standard program ${\cal P}$, the number of rules of the rewritten program is polynomial in the size of ${\cal P}$ and bounded by $$O\left(\ \sum_{r\in{\cal P}}\ d(head(r))+d(body(r))\right).$$ Concerning the number of arguments in the rewritten program, for a given rule $r$ we denote with $nl(r,h,i)$ (resp. $nl(r,b,i)$) the number of occurrences of function symbols at nesting level $i$ in the head (resp. body) of $r$, and with $nf(r)=max\{nl(r,t,i)\ |\ t\in\{h,b\}\wedge i>1\}$. For instance, considering the above rule $r^{\prime}$, we have that $nl(r^{\prime},h,1)=2$ (function symbols $\tt f$ and $\tt h$ occur at nesting level $1$ in the head), $nl(r^{\prime},h,2)=1$ (function symbol $\tt g$ occurs at nesting level $2$ in the head), $nl(r^{\prime},b,1)=2$ (function symbols $\tt f$ and $\tt h$ occur at nesting level $1$ in the body), $nl(r^{\prime},b,2)=2$ (function symbols $\tt g$ and $\tt l$ occur at nesting level $2$ in the body). Consequently, $nf(r^{\prime})=2$.
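The depth measure $d$ used above is immediate to compute. The following is a small sketch under our own encoding (an assumption of this illustration): compound terms are tuples $(\mathit{functor}, \mathit{arg}_1, \dots)$, while variables and constants have depth $0$.

```python
def depth(t):
    """Depth of a term: 0 for variables/constants, 1 + max arg depth
    for compound terms encoded as tuples."""
    if isinstance(t, tuple):
        return 1 + max((depth(a) for a in t[1:]), default=0)
    return 0

def atom_depth(args):
    """d(p(t1,...,tn)) = max over the argument depths."""
    return max((depth(a) for a in args), default=0)

# Head of rule r' above: p(f(g(X)), h(Y,Z)) has depth 2 because of g(X).
head_args = [('f', ('g', 'X')), ('h', 'Y', 'Z')]
print(atom_depth(head_args))  # -> 2
```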
The rewriting of the source program results in a 'flattened' program with $|flat(r)|-1$ new predicate symbols. The arity of every new predicate in $flat(r)$ is bounded by $|var(r)|+nf(r)$. Therefore, the global number of arguments in the flattened program is bounded by $$O\left(\ args({\cal P})+\sum_{r\in{\cal P}}\left(\ |var(r)|+nf(r)\ \right)\right).$$ The termination of a disjunctive program ${\cal P}$ with negative literals can be determined by rewriting it into a standard logic program $st({\cal P})$ such that every stable model of ${\cal P}$ is contained in the (unique) minimum model of $st({\cal P})$, and then by checking $st({\cal P})$ for termination. Definition 14 (Standard version) Given a program ${\cal P}$, $st({\cal P})$ denotes the standard program, called standard version, obtained by replacing every disjunctive rule $r=a_{1}\vee\cdots\vee a_{m}\leftarrow body(r)$ with the $m$ standard rules $a_{i}\leftarrow body^{+}(r)$, for $1\leq i\leq m$. Moreover, we denote with $ST({\cal P})$ the program derived from $st({\cal P})$ by replacing every derived predicate symbol $q$ with a new derived predicate symbol $Q$.   $\Box$ The number of rules in the standard program $st({\cal P})$ is equal to $\sum_{r\in{\cal P}}|head(r)|$, where $|head(r)|$ denotes the number of atoms in the head of $r$. Example 19 Consider program $P_{\ref{standard-rule-example}}$ consisting of the two rules $$\begin{array}[]{l}\tt p(X)\vee q(X)\leftarrow r(X),\neg a(X).\\ \tt r(X)\leftarrow b(X),\neg q(X).\end{array}$$ where $\tt p$, $\tt q$ and $\tt r$ are derived (mutually recursive) predicates, whereas $\tt a$ and $\tt b$ are base predicates.
The derived standard program $st(P_{\ref{standard-rule-example}})$ is as follows: $$\begin{array}[]{l}\tt p(X)\leftarrow r(X).\\ \tt q(X)\leftarrow r(X).\\ \tt r(X)\leftarrow b(X).\end{array}$$ $\Box$ Lemma 2 For every program ${\cal P}$, every stable model $M\in{\cal SM}({\cal P})$ is contained in the minimum model ${\cal MM}(st({\cal P}))$. \proof From the definition of stable models we have that every $M\in{\cal SM}({\cal P})$ is a minimal model of the ground positive program ${\cal P}^{M}$. Consider now the standard program ${\cal P}^{\prime}$ derived from ${\cal P}^{M}$ by replacing every ground disjunctive rule $r=a_{1}\vee\dots\vee a_{n}\leftarrow body(r)$ with the $n$ ground normal rules $a_{i}\leftarrow body(r)$. Clearly, $M\subseteq{\cal MM}({\cal P}^{\prime})$. Moreover, since ${\cal P}^{\prime}\subseteq st({\cal P})$, we have that ${\cal MM}({\cal P}^{\prime})\subseteq{\cal MM}(st({\cal P}))$. Therefore, $M\subseteq{\cal MM}(st({\cal P}))$.  $\Box$ The above lemma implies that for any logic program ${\cal P}$, if $st({\cal P})$ is finitely ground we can restrict the Herbrand base and only consider head (ground) atoms $q(t)$ such that $q(t)\in{\cal MM}(st({\cal P}))$. This means that, after having computed the minimum model of $st({\cal P})$, we can derive a finite ground instantiation of ${\cal P}$, equivalent to the original program, by considering only ground atoms contained in ${\cal MM}(st({\cal P}))$. We next show how the original program ${\cal P}$ can be rewritten so that, after having computed ${\cal MM}(st({\cal P}))$, every grounder tool easily generates an equivalent finitely ground program. The idea consists in generating, for any disjunctive program ${\cal P}$ such that $st({\cal P})$ satisfies some termination criterion (e.g. safety), a new equivalent program $ext({\cal P})$. The computation of the stable models of $ext({\cal P})$ can be carried out by considering the finite ground instantiation of $ext({\cal P})$ [Leone et al.
(2002), Simons et al. (2002), Gebser et al. (2007a)]. For any disjunctive rule $r=q_{1}(u_{1})\vee\cdots\vee q_{k}(u_{k})\leftarrow body(r)$, the conjunction of atoms $Q_{1}(u_{1}),\mbox{\rm.}\mbox{\rm.}\mbox{\rm.},Q_{k}(u_{k})$ will be denoted by $headconj(r)$. Definition 15 (Extended program) Let ${\cal P}$ be a disjunctive program and let $r$ be a rule of ${\cal P}$; then $ext(r)$ denotes the (disjunctive) extended rule $head(r)\leftarrow headconj(r),body(r)$ obtained by extending the body of $r$, whereas $ext({\cal P})=\{ext(r)\ |\ r\in{\cal P}\}\cup ST({\cal P})$ denotes the (disjunctive) extended program obtained by extending the rules of ${\cal P}$ and adding the (standard) rules defining the new predicates. $\Box$ Example 20 Consider the program $P_{\ref{standard-rule-example}}$ of Example 19. The extended program $ext(P_{\ref{standard-rule-example}})$ is as follows: $$\begin{array}[]{l}\tt p(X)\vee q(X)\leftarrow P(X),Q(X),r(X),\neg a(X).\\ \tt r(X)\leftarrow R(X),b(X),\neg q(X).\\ \tt P(X)\leftarrow R(X).\\ \tt Q(X)\leftarrow R(X).\\ \tt R(X)\leftarrow b(X).\end{array}$$ $\Box$ The following theorem states that ${\cal P}$ and $ext({\cal P})$ are equivalent w.r.t. the set of predicate symbols in ${\cal P}$. Theorem 5 For every program ${\cal P}$, ${\cal SM}({\cal P})[S_{\cal P}]={\cal SM}(ext({\cal P}))[S_{\cal P}]$, where $S_{\cal P}$ is the set of predicate symbols occurring in ${\cal P}$. \proof First, we recall that $ST({\cal P})\subseteq ext({\cal P})$ and assume that $N$ is the minimum model of $ST({\cal P})$, i.e. $N={\cal MM}(ST({\cal P}))$. • We first show that for each $S\in{\cal SM}(ext({\cal P}))$, $M=S-N$ is a stable model for ${\cal P}$, that is $M\in{\cal SM}({\cal P})$.
Let us consider the ground program ${\cal P}^{\prime\prime}$ obtained from $ext({\cal P})^{S}$ by first deleting every ground rule $r=head(r)\leftarrow headconj(r),body(r)$ such that $N\not\models headconj(r)$, and then by removing from the remaining rules the conjunction $headconj(r)$. Observe that the sets of minimal models of $ext({\cal P})^{S}$ and ${\cal P}^{\prime\prime}$ coincide, i.e. ${\cal MM}(ext({\cal P})^{S})={\cal MM}({\cal P}^{\prime\prime})$. Indeed, for every $r$ in $ext({\cal P})^{S}$, if $N\not\models headconj(r)$, then the body of $r$ is false and thus $r$ can be removed, as it does not contribute to inferring head atoms. On the other hand, if $N\models headconj(r)$, the conjunction $headconj(r)$ is trivially true and can be safely deleted from the body of $r$. Therefore, $M\cup N\in{\cal MM}({\cal P}^{\prime\prime})$. Moreover, since ${\cal P}^{\prime\prime}=({\cal P}\cup ST({\cal P}))^{S}={\cal P}^{M}\cup ST({\cal P})^{N}$, we have that $M\in{\cal MM}({\cal P}^{M})$, that is, $M\in{\cal SM}({\cal P})$. • We now show that for each $M\in{\cal SM}({\cal P})$, $(M\cup N)\in{\cal SM}(ext({\cal P}))$. Let us assume that $S=M\cup N$. Since $M\in{\cal MM}({\cal P}^{M})$, we have that $S\in{\cal SM}({\cal P}\cup ST({\cal P}))$, that is, $S\in{\cal MM}(({\cal P}\cup ST({\cal P}))^{S})$. Consider the ground program ${\cal P}^{\prime}$ derived from $({\cal P}\cup ST({\cal P}))^{S}$ by replacing every disjunctive rule $r=head(r)\leftarrow body(r)$ such that $M\models body(r)$ with $ext(r)=head(r)\leftarrow headconj(r),body(r)$. Also in this case we have that ${\cal MM}(({\cal P}\cup ST({\cal P}))^{S})={\cal MM}({\cal P}^{\prime})$, as $S\models body(r)$ iff $S\models body(ext(r))$. This means that $S$ is a stable model of $ext({\cal P})$. $\Box$ 9 Conclusion In this paper we have proposed a new approach for checking, on the basis of structural properties, the termination of the bottom-up evaluation of logic programs with function symbols.
We have first proposed a technique, called $\Gamma$-acyclicity, extending the class of argument-restricted programs by analyzing the propagation of complex terms among arguments using an extended version of the argument graph. Next, we have proposed a further extension, called safety, which also analyzes how rules can activate each other (using the activation graph) and how the presence of some arguments in a rule limits its activation. We have also studied the application of the techniques to partially ground queries and have proposed further improvements which generalize the safety criterion through the introduction of a hierarchy of classes of terminating programs, called $k$-safety, where each $k$-safe class strictly includes the $(k\mbox{-}1)$-safe class. Although our results have been defined for standard programs, we have shown that they can also be applied to disjunctive programs with negative literals, by simply rewriting the source programs. The semantics of the rewritten program is “equivalent” to the semantics of the source one and can be computed by current answer set systems. Even though our framework refers to the model theoretic semantics, we believe that the results presented here go beyond the ASP community and could be of interest also for the (tabled) logic programming community (e.g. tabled Prolog community). Acknowledgements. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. References Alviano et al. (2010) Alviano, M., Faber, W., and Leone, N. 2010. Disjunctive asp with functions: Decidable queries and effective computation. Theory and Practice of Logic Programming 10, 4-6, 497–512. Arts and Giesl (2000) Arts, T. and Giesl, J. 2000. Termination of term rewriting using dependency pairs. Theoretical Computer Science 236, 1-2, 133–178. Baselice et al. (2009) Baselice, S., Bonatti, P. A., and Criscuolo, G. 2009. On finitely recursive programs. 
Theory and Practice of Logic Programming 9, 2, 213–238. Beeri and Ramakrishnan (1991) Beeri, C. and Ramakrishnan, R. 1991. On the power of magic. Journal of Logic Programming 10, 1/2/3&4, 255–299. Bonatti (2004) Bonatti, P. A. 2004. Reasoning with infinite stable models. Artificial Intelligence 156, 1, 75–111. Brockschmidt et al. (2012) Brockschmidt, M., Musiol, R., Otto, C., and Giesl, J. 2012. Automated termination proofs for java programs with cyclic data. In CAV. 105–122. Bruynooghe et al. (2007) Bruynooghe, M., Codish, M., Gallagher, J. P., Genaim, S., and Vanhoof, W. 2007. Termination analysis of logic programs through combination of type-based norms. ACM Transactions on Programming Languages and Systems 29, 2. Calimeri et al. (2008) Calimeri, F., Cozza, S., Ianni, G., and Leone, N. 2008. Computable functions in asp: Theory and implementation. In International Conference on Logic Programming (ICLP). 407–424. Codish et al. (2005) Codish, M., Lagoon, V., and Stuckey, P. J. 2005. Testing for termination with monotonicity constraints. In ICLP. 326–340. Endrullis et al. (2008) Endrullis, J., Waldmann, J., and Zantema, H. 2008. Matrix interpretations for proving termination of term rewriting. Journal of Automated Reasoning 40, 2-3, 195–220. Fagin et al. (2005) Fagin, R., Kolaitis, P. G., Miller, R. J., and Popa, L. 2005. Data exchange: semantics and query answering. Theoretical Computer Science 336, 1, 89–124. Ferreira and Zantema (1996) Ferreira, M. C. F. and Zantema, H. 1996. Total termination of term rewriting. Applicable Algebra in Engineering, Communication and Computing 7, 2, 133–162. Gebser et al. (2007a) Gebser, M., Kaufmann, B., Neumann, A., and Schaub, T. 2007a. clasp: A conflict-driven answer set solver. In International conference on Logic Programming and Nonmonotonic Reasoning (LPNMR). 260–265. Gebser et al. (2007b) Gebser, M., Schaub, T., and Thiele, S. 2007b. Gringo: A new grounder for answer set programming.
In International conference on Logic Programming and Nonmonotonic Reasoning (LPNMR). 266–271. Gelfond and Lifschitz (1988) Gelfond, M. and Lifschitz, V. 1988. The stable model semantics for logic programming. In International Conference and Symposium on Logic Programming (ICLP/SLP). 1070–1080. Gelfond and Lifschitz (1991) Gelfond, M. and Lifschitz, V. 1991. Classical negation in logic programs and disjunctive databases. New Generation Computing 9, 3/4, 365–386. Greco et al. (2005) Greco, G., Greco, S., Trubitsyna, I., and Zumpano, E. 2005. Optimization of bound disjunctive queries with constraints. Theory and Practice of Logic Programming 5, 6, 713–745. Greco (2003) Greco, S. 2003. Binding propagation techniques for the optimization of bound disjunctive queries. IEEE Transactions on Knowledge and Data Engineering 15, 2, 368–385. Greco and Spezzano (2010) Greco, S. and Spezzano, F. 2010. Chase termination: A constraints rewriting approach. Proceedings of the VLDB Endowment (PVLDB) 3, 1, 93–104. Greco et al. (2011) Greco, S., Spezzano, F., and Trubitsyna, I. 2011. Stratification criteria and rewriting techniques for checking chase termination. Proceedings of the VLDB Endowment (PVLDB) 4, 11, 1158–1168. Greco et al. (2012) Greco, S., Spezzano, F., and Trubitsyna, I. 2012. On the termination of logic programs with function symbols. In International Conference on Logic Programming (ICLP) - Technical Communications. 323–333. Krishnamurthy et al. (1996) Krishnamurthy, R., Ramakrishnan, R., and Shmueli, O. 1996. A framework for testing safety and effective computability. Journal of Computer and System Sciences 52, 1, 100–124. Leone et al. (2002) Leone, N., Pfeifer, G., Faber, W., Calimeri, F., Dell’Armi, T., Eiter, T., Gottlob, G., Ianni, G., Ielpa, G., Koch, K., Perri, S., and Polleres, A. 2002. The dlv system. In European Conference on Logics in Artificial Intelligence (JELIA). 537–540. Liang and Kifer (2013) Liang, S. and Kifer, M. 2013. 
A practical analysis of non-termination in large logic programs. Theory and Practice of Logic Programming 13, 4-5, 705–719. Lierler and Lifschitz (2009) Lierler, Y. and Lifschitz, V. 2009. One more decidable class of finitely ground programs. In International Conference on Logic Programming (ICLP). 489–493. Marchiori (1996) Marchiori, M. 1996. Proving existential termination of normal logic programs. In Algebraic Methodology and Software Technology. 375–390. Marnette (2009) Marnette, B. 2009. Generalized schema-mappings: from termination to tractability. In Symposium on Principles of Database Systems (PODS). 13–22. Meier et al. (2009) Meier, M., Schmidt, M., and Lausen, G. 2009. On chase termination beyond stratification. CoRR abs/0906.4228. Nguyen et al. (2007) Nguyen, M. T., Giesl, J., Schneider-Kamp, P., and Schreye, D. D. 2007. Termination analysis of logic programs based on dependency graphs. In LOPSTR. 8–22. Nishida and Vidal (2010) Nishida, N. and Vidal, G. 2010. Termination of narrowing via termination of rewriting. Applicable Algebra in Engineering, Communication and Computing 21, 3, 177–225. Ohlebusch (2001) Ohlebusch, E. 2001. Termination of logic programs: Transformational methods revisited. Applicable Algebra in Engineering, Communication and Computing 12, 1/2, 73–116. Schneider-Kamp et al. (2009b) Schneider-Kamp, P., Giesl, J., and Nguyen, M. T. 2009b. The dependency triple framework for termination of logic programs. In LOPSTR. 37–51. Schneider-Kamp et al. (2009a) Schneider-Kamp, P., Giesl, J., Serebrenik, A., and Thiemann, R. 2009a. Automated termination proofs for logic programs by term rewriting. ACM Transactions on Computational Logic 11, 1. Schneider-Kamp et al. (2010) Schneider-Kamp, P., Giesl, J., Ströder, T., Serebrenik, A., and Thiemann, R. 2010. Automated termination analysis for logic programs with cut. Theory and Practice of Logic Programming 10, 4-6, 365–381. Schreye and Decorte (1994) Schreye, D. D. and Decorte, S. 1994. Termination of logic programs: The never-ending story.
Journal of Logic Programming 19/20, 199–260. Serebrenik and De Schreye (2005) Serebrenik, A. and De Schreye, D. 2005. On termination of meta-programs. Theory and Practice of Logic Programming 5, 3, 355–390. Simons et al. (2002) Simons, P., Niemel“a, I., and Soininen, T. 2002. Extending and implementing the stable model semantics. Artificial Intelligence 138, 1-2, 181–234. Sternagel and Middeldorp (2008) Sternagel, C. and Middeldorp, A. 2008. Root-labeling. In Rewriting Techniques and Applications. 336–350. Str$\ddot{o}$der et al. (2010) Str$\ddot{o}$der, T., Schneider-Kamp, P., and Giesl, J. 2010. Dependency triples for improving termination analysis of logic programs with cut. In LOPSTR. 184–199. Syrj$\ddot{a}$nen (2001) Syrj$\ddot{a}$nen, T. 2001. Omega-restricted logic programs. In International conference on Logic Programming and Nonmonotonic Reasoning (LPNMR). 267–279. Venturini Zilli (1975) Venturini Zilli, M. 1975. Complexity of the unification algorithm for first-order expressions. CALCOLO 12, 4, 361–371. Voets and Schreye (2010) Voets, D. and Schreye, D. D. 2010. Non-termination analysis of logic programs using types. In LOPSTR. 133–148. Zantema (1995) Zantema, H. 1995. Termination of term rewriting by semantic labelling. Fundamenta Informaticae 24, 1/2, 89–105.
Single-pixel p-graded-n junction spectrometers

Jingyi Wang, Beibei Pan, Zi Wang, Jiakai Zhang, Zhiqi Zhou, Lu Yao, Yanan Wu, Wuwei Ren, Jianyu Wang, Jingyi Yu*, and Baile Chen*

[1] School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, P. R. China
[2] Shanghai Institute of Technical Physics, Chinese Academy of Sciences, Shanghai 200083, P. R. China
[3] Shanghai Engineering Research Center of Energy Efficient and Custom AI IC, Shanghai 201210, P. R. China
[4] University of Chinese Academy of Sciences, Beijing 100049, P. R. China

These authors contributed equally to this work. *Corresponding authors: Jingyi Yu [1] and Baile Chen [1,3].

Abstract Ultra-compact spectrometers are becoming increasingly popular for their promising applications in biomedical analysis, environmental monitoring, and food safety. In this work, we report a novel single-pixel-photodetector spectrometer with a spectral range from 480 nm to 820 nm, based on an AlGaAs/GaAs p-graded-n junction with a voltage-tunable optical response. To reconstruct the optical spectrum, we propose a tailored method called Neural Spectral Fields (NSF) that leverages the unique wavelength- and bias-dependent responsivity matrix. Our spectrometer achieves a high spectral wavelength accuracy of up to 0.30 nm and a spectral resolution of up to 10 nm. Additionally, we demonstrate the high spectral imaging performance of the device.
The compatibility of our demonstration with the standard III-V process greatly accelerates the commercialization of miniaturized spectrometers.

keywords: Optical Spectrometers, III-V Photodetectors, Neural Spectral Fields, Electrically reconfigurable

1 Introduction

Optical spectrometers are broadly deployed in scientific and industrial applications, ranging from spectral detection[1] to chemical sensing and hyperspectral imaging[2]. Traditional solutions generally require mechanically movable components such as optical gratings or Michelson interferometers[3, 4] and therefore tend to be too bulky for onsite deployment. Tremendous effort has been devoted to miniaturization, e.g., via dispersive optics[5, 6], narrow-band filters[7, 8], or Fourier transform interferometers[9, 10]. More recent solutions resort to photodetector arrays, where each individual detector is equipped with a tailored optical responsivity [11, 12, 13, 14, 15, 16, 17] and the overall array recovers a broad spectrum. Array-based solutions also benefit from efficient computational techniques that either interpolate between sampled spectra (e.g., with narrow-band filters [8]) or separate overlapping ones (e.g., with multiple detectors of different optical responses [6]). However, for each detector to sense a different spectrum, the incident light must be split into multiple paths. More splitting increases the spectral resolution but reduces the signal, and hence the measurement sensitivity. Striking a balance between resolution and sensitivity therefore remains challenging[18]. Instead of using an array, single-pixel-photodetector spectrometers have recently emerged as a plausible alternative. By employing electrically reconfigured response functions [19, 20, 21, 22], these devices recover the full spectrum using a single optical element without splitting the light, hence offering much higher sensitivity at comparable spectral resolution.
Seminal work on single-nanowire[22] and two-dimensional-material junction[19, 20, 21] techniques not only shows promising results but also demonstrates ultra-compact footprints for miniaturized spectrometer applications. Yet it remains challenging to grow large-scale, high-quality two-dimensional materials or nanowires for mass production. III-V semiconductor materials, in contrast, have matured over the past decades and have been massively deployed in high-performance optoelectronic devices, including lasers[23, 24, 25, 26] and photodetectors[27, 28, 29]. In this research, we seek to design a novel III-V material-based miniature spectrometer. However, III-V material-based spectrometers cannot directly follow the lateral composition gradation[22] and Stark effect[19] schemes used in nanowire or two-dimensional-material devices, as these would require lattice matching in planar epitaxial growth. We instead present a p-graded-n junction photodetector with a voltage-tunable response based on $Al_{x}Ga_{1-x}As/GaAs$ materials as a spectrometer. Such spectrometers can be fabricated by a standard III-V process with an ultra-small footprint down to the $\mu$m level and a broad spectral range from 480 nm to 820 nm. The device has a high responsivity of 0.51 A/W at 775 nm and a low dark current density of $3.47\times 10^{-9}A/cm^{2}$ at room temperature, corresponding to a calculated detectivity of $1.86\times 10^{13}cm\textperiodcentered Hz^{1/2}/W$. For spectrometer applications, the device exhibits a longer cut-off wavelength as the bias voltage increases, contributed by the newly depleted region of the junction. Thus, the junction produces a unique 'voltage accumulative' responsivity matrix: higher voltages induce broader spectral response curves. Yet the overlap of these curves renders the spectral reconstruction problem ill-posed.
Traditional methods based on L1 or L2 regularizers require elaborate parameter fine-tuning to achieve a high-resolution reconstruction. We instead tailor a fully automated scheme that directly extracts deep features from the measured current-voltage curves and then conducts continuous spectral reconstruction via Neural Fields (NFs). Specifically, our feature extractor is trained on a comprehensive synthetic dataset to reliably capture unique characteristics of the current-voltage curves. These features, concatenated with a wavelength positional encoding, are then fed into a Multilayer Perceptron (MLP) that decodes the inputs into a continuous spectral function. The self-supervised process is analogous to neural field reconstructions for radiance fields [30], heterogeneous protein structures [31], and biological tomography[32]. We further enforce the recovered spectral function to obey physics-based, spectrum-response integral constraints and achieve a spectral reconstruction accuracy of up to 0.30 nm and a spectral resolution of up to 10 nm.

2 Results

Device design

The p-graded-n structure of the device is illustrated in Fig. 1a, with all the epi-layers being lattice matched to the GaAs substrate. The epi-layer starts with a 500 nm n+ GaAs contact layer, with a doping concentration of $2\times 10^{18}\text{cm}^{-3}$. This layer is followed by a 2 $\mu$m graded n-doped $Al_{x}Ga_{1-x}As$ layer, with the Al composition gradually increasing from 0 to 0.5. A 50 nm InGaP hole-barrier layer is inserted within the n- $Al_{0.5}Ga_{0.5}As$ layers, followed by a p-doped $Al_{0.5}Ga_{0.5}As$ layer. A GaAs p+ layer is grown as the top contact layer. Ti/Pt/Au is deposited as the contact metal layer. The details of the device fabrication can be found in the Methods section. The p-graded-n structure enables a voltage-tunable response in this photodetector.
The structure comprises a gradient-bandgap $Al_{x}Ga_{1-x}As$ n-layer of varying Al composition, with the higher-bandgap $Al_{x}Ga_{1-x}As$ closer to the p-n junction. At low reverse bias, holes generated by longer-wavelength incident light absorbed in the low-Al $Al_{x}Ga_{1-x}As$ layer are blocked by the valence-band barrier formed by the high-Al $Al_{x}Ga_{1-x}As$ layer and the InGaP layer. Thus, these holes cannot diffuse into the depleted region and do not contribute to the photocurrent, as shown in Fig. 1c. When the reverse bias increases, the low-Al-composition $Al_{x}Ga_{1-x}As$ n region is further depleted, allowing minority carriers photogenerated by the longer-wavelength incident light to contribute to the photocurrent, as indicated in Fig. 1d. In other words, this device architecture enables a voltage-tunable spectral responsivity, which encodes the spectral signature of the incident light in the photocurrent-voltage characteristics. Furthermore, the p-graded-n design has a significant advantage in keeping the larger-bandgap material near the junction, where the tunneling current is minimized under high reverse bias. This advantage is essential for photodetectors operating in longer wavelength ranges, such as the short-wavelength infrared (SWIR) or mid-wavelength infrared (MWIR) regions, which are susceptible to high tunneling currents.

Electrical and optical characteristics

Fig. 2a presents the device's dark-current and photocurrent density curves, revealing an extremely low dark current density of $3.47\times 10^{-9}A/cm^{2}$ under a -10 V bias voltage at room temperature. As shown in Fig. 2b and Fig. 2c, our device exhibits responsivity across the operational wavelength range from 480 nm to 820 nm, with a peak responsivity of 0.51 A/W at 775 nm, corresponding to an external quantum efficiency of 81%.
The corresponding thermal-noise-limited detectivity[33] of the detector is $1.86\times 10^{13}cm\textperiodcentered Hz^{1/2}/W$. Owing to the mature III-V technology, the photodetector performance is considerably better than that of previously reported single-pixel-photodetector spectrometers [22, 21]. For spectrometer applications, our device has an electrically reconfigurable response. The device exhibits a longer cutoff wavelength as the reverse bias increases, where the longer-wavelength response is contributed by the newly depleted region of lower-bandgap AlGaAs material. We encoded the variation of responsivity versus wavelength and reverse bias into a spectral response matrix for spectrum reconstruction.

Spectrum reconstruction

Fig. 3 demonstrates the pipeline our spectrometer uses to measure an unknown spectrum. By varying the bias voltage, the spectrometer integrates the incident spectrum with the corresponding spectral responsivity, which results in a photocurrent curve. The photocurrent measurement at a bias voltage $V$ can be expressed as an integral of the unknown spectrum $\textbf{{P}}(\lambda)$ against the pre-calibrated spectral responsivity $\textbf{{R}}(\lambda,V)$: $$\textbf{{I}}(V)\ =\int_{\lambda_{min}}^{\lambda_{max}}\textbf{{R}}(\lambda,V)\textbf{{P}}(\lambda)d\lambda$$ (1) where $\textbf{{I}}(V)$ is the photocurrent curve measured by our spectrometer, and the integration limits $\lambda_{min}$ and $\lambda_{max}$ denote the minimum and maximum functional wavelengths of the spectrometer, respectively. We observe that the voltage-accumulative spectral responsivity produces unique features in the measured current, and different input spectra produce different features. Such features can play an important role in spectrum reconstruction, but they are hard to analyze with traditional spectral reconstruction algorithms. Thus, we adopt a neural feature extractor to exploit these features in the measured current.
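To make the ill-posedness concrete, Eqn. 1 can be discretized into a matrix equation and inverted with a traditional L2 (Tikhonov) regularizer, i.e., the kind of baseline the NSF approach is meant to replace. The step-like 'voltage accumulative' responsivity below is a toy model of our own (the cutoff wavelength simply grows with reverse bias), not the device's calibrated matrix:

```python
import numpy as np

# Discretize Eqn. 1: I_j = sum_i R(lambda_i, V_j) * P(lambda_i) * dlam
wavelengths = np.linspace(480.0, 820.0, 341)         # nm, 1 nm grid
bias = np.linspace(0.0, 10.0, 51)                    # |reverse bias| in volts
cutoff = 640.0 + 18.0 * bias                         # toy voltage-dependent cutoff (assumed)
R = 0.5 * (wavelengths[None, :] <= cutoff[:, None])  # step-like responsivity rows (A/W)
dlam = wavelengths[1] - wavelengths[0]

P_true = np.exp(-0.5 * ((wavelengths - 700.0) / 8.0) ** 2)  # single-peak test spectrum
I = R @ P_true * dlam                                       # simulated photocurrent curve

# Tikhonov-regularized least squares: argmin ||A P - I||^2 + mu * ||P||^2
A = R * dlam
mu = 1e-3
P_rec = np.linalg.solve(A.T @ A + mu * np.eye(A.shape[1]), A.T @ I)
```

Because neighboring response curves overlap heavily, $A$ is severely ill-conditioned and the recovered spectrum is sensitive to the choice of the regularization weight; this fine-tuning burden is what motivates the learned NSF reconstruction.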
We create a synthetic dataset that provides a large amount of paired photocurrent and optical-spectrum data, and let the feature extractor learn deep priors of the photocurrent on this synthetic dataset. In Fig. 3, we also visualize the distribution of three types of spectrum currents along the feature space's principal components, computed by Principal Component Analysis (PCA). At the core of our approach, we use Neural Spectral Fields (NSF) to decode the spectrum from the extracted features and the wavelength. Given a spectrum feature and the positional encoding of a wavelength, NSF provides a continuous reconstruction of the spectrum without any fine-tuning effort. To further improve the accuracy of the spectrum reconstruction, we apply a simple yet effective physics-based refinement algorithm after NSF that adjusts the amplitude, width, and shift of the NSF output spectrum to obey the integral constraint of Eqn. 1.

Reconstruction results

The incident light consists of single-peak and double-peak beams generated by a supercontinuum laser, as well as unknown spectra with uncertain full width at half maximum (FWHM) produced by LED lamps. The accuracy of the reconstructed spectra is evaluated against the ground truth measured with a commercial spectrometer (YOKOGAWA AQ6374). Our proposed approach achieves high-quality reconstruction results for a series of single-peak incident lights over a broad working range of 480-820 nm, as illustrated in Figure 4a. Next, we evaluate the performance of our single-pixel p-graded-n junction spectrometer on LED light sources. The device is measured under green, yellow, and red LED illumination to reconstruct the spectra, as presented in Figure 4b. Our method achieves decent reconstruction performance for incident light with uncertain bandwidth, demonstrating its potential for a wide range of applications. The minimum double-peak resolution of the single-pixel p-graded-n junction spectrometer is about 10 nm, as demonstrated in Figure 4c.
To evaluate the resolving power of our method, we conduct a dense-sampling experiment on single-peak incident light with a spacing of 1 nm between adjacent spectra. The reconstruction result, shown in Figure 4d, exhibits excellent performance in distinguishing peaks of spectra with narrow spacing. The reconstructed wavelength peak accuracy is approximately 0.30 nm. Overall, these results demonstrate the remarkable potential of our single-pixel p-graded-n junction spectrometer for high-precision spectral reconstruction in a variety of settings.

Spectral imaging results

We conduct experiments on an imaging target of a painting to explore the potential applications of our spectrometer in spectral imaging. As shown in Fig. 5, a 180 $\mu$m diameter wire-bonded detector is placed on the image plane behind the lens, and the device is swept along the X and Y axes using a motorized positioning system to cover the entire image area. The imaging system is shown in Fig. 5b. The image plane is divided into multiple units, and in each unit the detector performs a current-voltage measurement. The X-Y sampling grid is set to 100 × 100. The target image is a yellow Chinese character "Fu" on a red background, illuminated by a white LED light source. We reconstruct the image using the measured current-voltage data of each pixel, which is then mapped into RGB according to the CIE 1931[34] color space, as shown in Fig. 5d. Fig. 5f shows the spectral imaging results in different wavelength channels. The composited image is presented in Fig. 5e, compared with the image taken by a CCD camera on the left. Our single-pixel p-graded-n junction spectrometer achieves high-quality spectral imaging within its functional waveband. A single-pixel spectrometer of this new scheme could easily be integrated on a chip with a standard III-V process flow, suggesting the feasibility of our solution for spectral imaging.
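The spectrum-to-RGB mapping step can be sketched as below. The Gaussian curves are crude illustrative stand-ins of our own for the CIE 1931 color-matching functions (a real pipeline would use the tabulated CIE data), while the XYZ-to-linear-sRGB conversion uses the standard matrix:

```python
import numpy as np

def gauss(lam, mu, sigma):
    return np.exp(-0.5 * ((lam - mu) / sigma) ** 2)

def cmf_xyz(lam):
    """Crude Gaussian approximations of the CIE 1931 color-matching functions (illustrative only)."""
    x = 1.06 * gauss(lam, 599.0, 37.0) + 0.36 * gauss(lam, 442.0, 20.0)
    y = 1.00 * gauss(lam, 556.0, 45.0)
    z = 1.78 * gauss(lam, 449.0, 23.0)
    return np.stack([x, y, z])

def spectrum_to_rgb(lam, power):
    """Integrate a reconstructed spectrum against the CMFs, then map XYZ to (clipped) linear sRGB."""
    X, Y, Z = cmf_xyz(lam) @ power
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])  # standard XYZ -> linear sRGB matrix
    rgb = M @ np.array([X, Y, Z])
    return np.clip(rgb / max(rgb.max(), 1e-12), 0.0, 1.0)

lam = np.linspace(480.0, 820.0, 341)
red_spectrum = np.exp(-0.5 * ((lam - 650.0) / 5.0) ** 2)  # narrowband ~650 nm source
rgb = spectrum_to_rgb(lam, red_spectrum)
```

In this sketch a narrowband 650 nm spectrum maps to a red-dominant pixel, and a 530 nm one to a green-dominant pixel, which is the per-pixel operation behind the composited image in Fig. 5e.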
3 Discussion and conclusion

Miniaturization of optical spectrometers is crucial for various applications, including portable spectral detection, hyperspectral imaging, and wearable spectroscopy. In this study, we introduce a novel voltage-tunable-response III-V spectrometer based on the p-graded-n junction, which exhibits significantly improved device performance as a photodetector, particularly in terms of dark current and optical responsivity. We also propose a new method, Neural Spectral Fields (NSF), to reconstruct the spectrum from the accumulated current, achieving high accuracy and robustness in spectrum reconstruction. Our work represents a major advancement in the miniaturization of optical spectrometers for diverse applications. Future research will aim at realizing in-situ spectral imaging using arrays of single-pixel p-graded-n junction spectrometers with a readout integrated circuit (ROIC). Our 'spectrometer camera' provides multi-channel spectral information with an ultra-small footprint, without the use of a dispersive light path or a detector array. This technology could push the boundaries of true-color imaging and find applications in aviation and machine vision.

4 Methods

\bmhead Device fabrication The epitaxial layer structure of the detector is grown by Metal-Organic Chemical Vapor Deposition (MOCVD). Mesa structures of different diameters are fabricated by wet etching with an $H_{3}PO_{4}:H_{2}O_{2}:H_{2}O$ solution (1:1:10). The detector diameter varies from 50 $\mu m$ to 2 mm. Ti/Pt/Au contact metal is deposited with a Denton Explorer-14 e-beam system on both the P and N contact layers. A silicon dioxide ($SiO_{2}$) thin film is then deposited as the passivation layer.

\bmhead Electrical characteristics measurement The dark current and photocurrent are measured with a Keysight B1500 semiconductor parameter analyzer at room temperature. The device responsivity is measured with a halogen lamp passing through an iHR 320 monochromator.
The monochromatic light is modulated by a 180 Hz chopper and focused on the detector. The light spot size is around 50 $\mu m$. We use a Keithley 2400 source meter to bias the detector and an SR 830 lock-in amplifier to measure the chopped signal. A Thorlabs FDS-1010 Si standard detector is used to calibrate the responsivity of the device. In the measurement stage of the reconstruction experiments, commercial yellow, red, and green LED light sources are applied to illuminate the detector. An NKT Photonics supercontinuum laser is used to generate single- and double-peak narrow-band light beams. We employ the commercial YOKOGAWA AQ6374 spectrometer to measure the incident light spectrum for comparison.

\bmhead NSF implementation Feature extractor. We use a Multilayer Perceptron (MLP) to extract features from the measured currents. The MLP contains 12 hidden layers with 1024 neurons each, activated by the leaky rectified linear unit (leaky ReLU). Each hidden layer is followed by a batch-normalization layer and a skip connection to increase the capacity of the MLP. The feature dimension is set to 501, which is sufficient and efficient for nanometer-resolution spectral reconstruction. The extracted features can be visualized to provide additional insight: Fig. 3 shows that single-peak and double-peak features are well separated from each other, and that broadband natural-light features follow a distribution distinct from single-peak features. These properties of the feature space give NSF the capability to reconstruct different types of spectra. Neural spectral fields. We use the NSF to reconstruct the spectrum from the extracted features and a given wavelength. The wavelength is first positionally encoded to improve the network's ability to represent high-frequency spectra.
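In code, this sinusoidal encoding (formalized in Eqn. 2 with $n=5$) might look as follows; the normalization of the wavelength into [0, 1] over the 480-820 nm working range is our assumption, as is the zero placeholder standing in for a 501-dimensional extracted feature vector:

```python
import numpy as np

def positional_encoding(lam_norm, n=5):
    """[sin(2^0*pi*x), cos(2^0*pi*x), ..., sin(2^n*pi*x), cos(2^n*pi*x)] as in Eqn. 2."""
    freqs = 2.0 ** np.arange(n + 1)           # 2^0 ... 2^n frequency bands
    angles = np.pi * freqs * lam_norm
    return np.concatenate([np.sin(angles), np.cos(angles)])

# Decoder input: 501-dim current features concatenated with the encoded wavelength.
features = np.zeros(501)                      # placeholder for extracted features (assumed)
lam_norm = (650.0 - 480.0) / (820.0 - 480.0)  # assumed [0, 1] normalization of 650 nm
decoder_input = np.concatenate([features, positional_encoding(lam_norm)])
```

The concatenated vector is what the small 3-layer decoder MLP maps to a spectral intensity; querying it at a dense sweep of wavelengths yields the continuous spectrum.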
The positional encoding process is expressed by: $$\gamma_{pos}(\lambda)=[\sin{(2^{0}\pi\lambda)},\cos{(2^{0}\pi\lambda)},...,\sin{(2^{n}\pi\lambda)},\cos{(2^{n}\pi\lambda)}]$$ (2) where $n$ sets the number of frequency bands of the positional encoding. We choose $n=5$ in all experiments in this manuscript. The positionally encoded wavelength and the extracted features are then fed into a small MLP with only 3 hidden layers and 256 neurons per hidden layer. The output of the MLP is the spectral intensity at the input wavelength; thus the NSF outputs a continuous spectrum given the queried wavelength. Physics-based refinement. The output spectrum from NSF is close to the actual spectrum, but it does not exactly reproduce the measured current. We use a physics-based refinement algorithm that iteratively adjusts the amplitude and shift to minimize the loss between the corresponding current and the measured currents. During each iteration, we randomly shift the NSF spectrum by 1 nm and scale the amplitude by a factor of 1.1 or 0.9. We repeat this process for 200 iterations for each input current. At the end of the refinement process, we select the spectrum with the lowest corresponding current error as the final result.

Declarations

\bmhead Data availability The data that support the plots within this paper are available from the corresponding author upon reasonable request. \bmhead Code availability The code and algorithms in this paper are available from the corresponding author upon reasonable request. \bmhead Funding This work was supported in part by the National Key Research and Development Program of China under Grant 2019YFB2203400 and in part by the National Natural Science Foundation of China under Grant 61975121. This work was also sponsored by the Double First-Class Initiative Fund of ShanghaiTech University. \bmhead Acknowledgments We are grateful for the device fabrication support from the ShanghaiTech University Quantum Device Lab.
References \bibcommenthead Swanson and Gowen [2022] Swanson, A., Gowen, A.: Detection of previously frozen poultry through plastic lidding film using portable visible spectral imaging (443–726 nm). Poultry Science 101(2), 101578 (2022) https://doi.org/10.1016/j.psj.2021.101578 Elmasry and Sun [2010] Elmasry, G., Sun, D.-W.: CHAPTER 1. Principles of Hyperspectral Imaging Technology, pp. 3–43 (2010). https://doi.org/10.1016/B978-0-12-374753-2.10001-2 Uhmann et al. [1991] Uhmann, W., Becker, A., Taran, C., Siebert, F.: Time-resolved ft-ir absorption spectroscopy using a step-scan interferometer. Appl. Spectrosc. 45(3), 390–397 (1991) https://doi.org/10.1366/0003702914337128 Martin [1980] Martin, A.E.: Infrared interferometric spectrometers. NASA STI/Recon Technical Report A 81, 23378 (1980) Zhu et al. [2017] Zhu, A.Y., Chen, W.-T., Khorasaninejad, M., Oh, J., Zaidi, A., Mishra, I., Devlin, R.C., Capasso, F.: Ultra-compact visible chiral spectrometer with meta-lenses. APL Photonics 2(3), 036103 (2017) https://doi.org/10.1063/1.4974259 Bao and Bawendi [2015] Bao, J., Bawendi, M.G.: A colloidal quantum dot spectrometer. Nature 523(7558), 67–70 (2015) https://doi.org/10.1038/nature14576 Nitkowski et al. [2008] Nitkowski, A., Chen, L., Lipson, M.: Cavity-enhanced on-chip absorption spectroscopy using microring resonators. Opt. Express 16(16), 11930–11936 (2008) https://doi.org/10.1364/OE.16.011930 Wang et al. [2007] Wang, S.-W., Xia, C., Chen, X., Lu, W., Li, M., Wang, H., Zheng, W., Zhang, T.: Concept of a high-resolution miniature spectrometer using an integrated filter array. Opt. Lett. 32(6), 632–634 (2007) https://doi.org/10.1364/OL.32.000632 Zheng et al.
[2019] Zheng, S.N., Zou, J., Cai, H., Song, J.F., Chin, L.K., Liu, P.Y., Lin, Z.P., Kwong, D.L., Liu, A.Q.: Microring resonator-assisted fourier transform spectrometer with enhanced resolution and large bandwidth in single chip solution. Nature Communications 10(1), 2349 (2019) https://doi.org/10.1038/s41467-019-10282-1 Velasco et al. [2013] Velasco, A.V., Cheben, P., Bock, P.J., Delâge, A., Schmid, J.H., Lapointe, J., Janz, S., Calvo, M.L., Xu, D.-X., Florjańczyk, M., Vachon, M.: High-resolution fourier-transform spectrometer chip with microphotonic silicon spiral waveguides. Opt. Lett. 38(5), 706–708 (2013) https://doi.org/10.1364/OL.38.000706 Meng et al. [2020] Meng, J., Cadusch, J.J., Crozier, K.B.: Detector-only spectrometer based on structurally colored silicon nanowires and a reconstruction algorithm. Nano Letters 20(1), 320–328 (2020) https://doi.org/10.1021/acs.nanolett.9b03862 Zhu et al. [2020] Zhu, X., Bian, L., Fu, H., Wang, L., Zou, B., Dai, Q., Zhang, J., Zhong, H.: Broadband perovskite quantum dot spectrometer beyond human visual resolution. Light: Science & Applications 9(1), 73 (2020) https://doi.org/10.1038/s41377-020-0301-4 Redding et al. [2016] Redding, B., Liew, S.F., Bromberg, Y., Sarma, R., Cao, H.: Evanescently coupled multimode spiral spectrometer. Optica 3(9), 956–962 (2016) https://doi.org/10.1364/OPTICA.3.000956 Yang et al. [2015] Yang, T., Xu, C., Ho, H.-p., Zhu, Y.-y., Hong, X.-h., Wang, Q.-j., Chen, Y.-c., Li, X.-a., Zhou, X.-h., Yi, M.-d., Huang, W.: Miniature spectrometer based on diffraction in a dispersive hole array. Opt. Lett. 40(13), 3217–3220 (2015) https://doi.org/10.1364/OL.40.003217 Redding et al. [2013] Redding, B., Liew, S.F., Sarma, R., Cao, H.: Compact spectrometer based on a disordered photonic chip. Nature Photonics 7(9), 746–751 (2013) https://doi.org/10.1038/nphoton.2013.190 Sharma et al. [2021] Sharma, N., Kumar, G., Garg, V., Mote, R.G., Gupta, S.: Reconstructive spectrometer using a photonic crystal cavity. Opt. 
Express 29(17), 26645–26657 (2021) https://doi.org/10.1364/OE.432831 Wang et al. [2019] Wang, Z., Yi, S., Chen, A., Zhou, M., Luk, T.S., James, A., Nogan, J., Ross, W., Joe, G., Shahsafi, A., Wang, K.X., Kats, M.A., Yu, Z.: Single-shot on-chip spectral sensors based on photonic crystal slabs. Nature Communications 10(1), 1020 (2019) https://doi.org/10.1038/s41467-019-08994-5 Yang et al. [2021] Yang, Z., Albrow-Owen, T., Cai, W., Hasan, T.: Miniaturization of optical spectrometers. Science 371(6528), 0722 (2021) https://doi.org/10.1126/science.abe0722 https://www.science.org/doi/pdf/10.1126/science.abe0722 Yuan et al. [2021] Yuan, S., Naveh, D., Watanabe, K., Taniguchi, T., Xia, F.: A wavelength-scale black phosphorus spectrometer. Nature Photonics 15(8), 601–607 (2021) https://doi.org/10.1038/s41566-021-00787-x Deng et al. [2022] Deng, W., Zheng, Z., Li, J., Zhou, R., Chen, X., Zhang, D., Lu, Y., Wang, C., You, C., Li, S., Sun, L., Wu, Y., Li, X., An, B., Liu, Z., Wang, Q.j., Duan, X., Zhang, Y.: Electrically tunable two-dimensional heterojunctions for miniaturized near-infrared spectrometers. Nature Communications 13(1), 4627 (2022) https://doi.org/10.1038/s41467-022-32306-z Yoon et al. [2022] Yoon, H.H., Fernandez, H.A., Nigmatulin, F., Cai, W., Yang, Z., Cui, H., Ahmed, F., Cui, X., Uddin, M.G., Minot, E.D., Lipsanen, H., Kim, K., Hakonen, P., Hasan, T., Sun, Z.: Miniaturized spectrometers with a tunable van der waals junction. Science 378(6617), 296–299 (2022) https://doi.org/10.1126/science.add8544 https://www.science.org/doi/pdf/10.1126/science.add8544 Yang et al. [2019] Yang, Z., Albrow-Owen, T., Cui, H., Alexander-Webber, J., Gu, F., Wang, X., Wu, T.-C., Zhuge, M., Williams, C., Wang, P., Zayats, A.V., Cai, W., Dai, L., Hofmann, S., Overend, M., Tong, L., Yang, Q., Sun, Z., Hasan, T.: Single-nanowire spectrometers. Science 365(6457), 1017–1020 (2019) https://doi.org/10.1126/science.aax8814 https://www.science.org/doi/pdf/10.1126/science.aax8814 Chen et al. 
[2016] Chen, S., Li, W., Wu, J., Jiang, Q., Tang, M., Shutts, S., Elliott, S.N., Sobiesierski, A., Seeds, A.J., Ross, I., Smowton, P.M., Liu, H.: Electrically pumped continuous-wave iii–v quantum dot lasers on silicon. Nature Photonics 10(5), 307–311 (2016) https://doi.org/10.1038/nphoton.2016.21 Huffaker et al. [1998] Huffaker, D.L., Park, G., Zou, Z., Shchekin, O.B., Deppe, D.G.: 1.3 $\mu$m room-temperature GaAs-based quantum-dot laser. Applied Physics Letters 73(18), 2564–2566 (1998) https://doi.org/10.1063/1.122534 https://pubs.aip.org/aip/apl/article-pdf/73/18/2564/6994358/2564_1_online.pdf Faist et al. [1994] Faist, J., Capasso, F., Sivco, D.L., Sirtori, C., Hutchinson, A.L., Cho, A.Y.: Quantum cascade laser. Science 264(5158), 553–556 (1994) https://doi.org/10.1126/science.264.5158.553 https://www.science.org/doi/pdf/10.1126/science.264.5158.553 Meyer [1996] Meyer, J.R.: Type-ii and type-i interband cascade lasers. Electronics Letters 32, 45–461 (1996) Campbell [2016] Campbell, J.C.: Recent advances in avalanche photodiodes. Journal of Lightwave Technology 34(2), 278–285 (2016) https://doi.org/10.1109/JLT.2015.2453092 Ishibashi and Ito [2020] Ishibashi, T., Ito, H.: Uni-traveling-carrier photodiodes. Journal of Applied Physics 127(3) (2020) https://doi.org/10.1063/1.5128444 https://pubs.aip.org/aip/jap/article-pdf/doi/10.1063/1.5128444/15239200/031101_1_online.pdf. 031101 Rogalski et al. [2017] Rogalski, A., Martyniuk, P., Kopytko, M.: InAs/GaSb type-II superlattice infrared detectors: Future prospect. Applied Physics Reviews 4(3) (2017) https://doi.org/10.1063/1.4999077 https://pubs.aip.org/aip/apr/article-pdf/doi/10.1063/1.4999077/14574962/031304_1_online.pdf. 031304 Mildenhall et al. [2021] Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R.: Nerf: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM 65(1), 99–106 (2021) Zhong et al. 
[2021] Zhong, E.D., Bepler, T., Berger, B., Davis, J.H.: Cryodrgn: reconstruction of heterogeneous cryo-em structures using neural networks. Nature methods 18(2), 176–185 (2021) Liu et al. [2022] Liu, R., Sun, Y., Zhu, J., Tian, L., Kamilov, U.S.: Recovery of continuous 3d refractive index maps from discrete intensity-only measurements using neural fields. Nature Machine Intelligence 4(9), 781–791 (2022) Chen et al. [2011] Chen, B., Jiang, W.Y., Yuan, J., Holmes, A.L., Onat, B.M.: Demonstration of a room-temperature inp-based photodetector operating beyond 3 $\mu$m. IEEE Photonics Technology Letters 23(4), 218–220 (2011) https://doi.org/10.1109/LPT.2010.2096205 Smith and Guild [1931] Smith, T., Guild, J.: The c.i.e. colorimetric standards and their use. Transactions of the Optical Society 33(3), 73 (1931) https://doi.org/10.1088/1475-4878/33/3/301
Leveraging traditional ecological knowledge in ecosystem restoration projects utilizing machine learning

Bogdana Rakova (Accenture Responsible AI, 415 Mission St Floor 35, San Francisco, USA; Partnership on AI, 115 Sansome St 1200, San Francisco, USA) and Alexander Winter (California Institute of Integral Studies, 1453 Mission St, San Francisco, USA)

Abstract. Ecosystem restoration has been recognized as critical to accelerating progress on all of the United Nations' Sustainable Development Goals. Decision makers, policymakers, data scientists, earth scientists, and other scholars working on these projects could benefit from the explicit consideration and inclusion of diverse perspectives. Community engagement throughout the stages of ecosystem restoration projects could contribute to improved community well-being, the conservation of biodiversity and ecosystem functions, and the resilience of socio-ecological systems. Conceptual frameworks are needed for the meaningful integration of the traditional ecological knowledge of indigenous peoples and local communities with data science and machine learning work practices. Adaptive frameworks would consider and address the needs and challenges of local communities and geographic locations by improving community and inter-agent communication around restoration and conservation projects and by making relevant real-time data accessible. In this paper, we provide a brief analysis of existing Machine Learning (ML) applications for forest ecosystem restoration projects. We go on to question whether their inherent limitations may prevent them from adequately addressing the socio-cultural aspects of the well-being of all involved stakeholders. Bias and unintended consequences pose significant risks of downstream negative implications of ML-based solutions.
We suggest that adaptive and scalable practices could incentivize interdisciplinary collaboration during all stages of ecosystemic ML restoration projects and align incentives between human and algorithmic actors. Furthermore, framing ML projects as open and reiterative processes can facilitate access on various levels and create incentives that lead to catalytic cooperation in the scaling of restoration efforts. 1. Introduction The United Nations (UN) has recently announced 2021-2030 to be the UN Decade on Ecosystem Restoration (Nations, 2020a) and aims to act as an accelerator towards the achievement of a shared vision: ”A world where we have restored the relationship between humans and nature: Where we increase the area of healthy ecosystems and put a stop to their loss and degradation – for the health and wellbeing of all life on earth and that of future generations” (Nations, 2020b). Ecosystem restoration holds increasing potential for maintaining and increasing the rates of carbon sequestration (Project Drawdown, 2019). Furthermore, it conserves biodiversity by maintaining a range of ecosystem services and safeguarding rich cultures and traditional ways of life. Indigenous and local knowledge systems for ecosystem restoration have been recognized by researchers as key components of sustainable development (Berkes, 1993). Reyes-García et al. (2019) propose that actively involving indigenous peoples and local communities (IPLCs) in restoration efforts (1) can help in selecting sites and species, (2) can increase local participation in the planning, execution, and monitoring of restoration activities, and (3) can provide historical information on ecosystem state and management (Reyes-García et al., 2019). 
The importance of IPLC knowledge systems for environmental sustainability has been acknowledged by the UN System Task Team on the Post-2015 UN Development Agenda, stating that ”traditional and indigenous knowledge, adaptation, and coping strategies can be major assets for local response strategies” (Nations, 2012). Similarly, the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report Summary for Policymakers emphasizes that ”indigenous, local and traditional knowledge systems and practices, including indigenous peoples’ holistic view of community and environment, are a major resource for adapting to climate change, but these have not been used consistently in existing adaptation efforts” (IPCC, 2014). Traditional ecological knowledge (TEK) encapsulates indigenous cultural practices, world views, and ways of life which offer myriad epistemological and ontological approaches, including mythologies passed down as songs and stories, embedded in geographic representations, and more (Berkes, 2017). It is a field of study in anthropology defined as the cumulative body of knowledge, practices, and beliefs, passed down from one generation to the next. However, the romanticization of indigeneity poses a common trap in which the complexities of indigenous epistemologies are reduced and distorted. Historically, this has negatively impacted indigenous peoples and local communities (Povinelli, 2016). The main contribution of our work is to demonstrate the need for data science and ML-based environmental projects to consider the diverse perspectives within place-based and traditional ecological knowledge systems. We provide an overview of how ML is currently used in the planning, execution, and monitoring stages of forest ecosystem restoration projects and aim to highlight how IPLCs could participate in and immensely contribute to achieving positive environmental impacts. 
Ultimately, by building on existing climate governance models, we propose that new methodological and governance frameworks could reduce the negative impacts of the use of ML in restoration efforts. 2. Literature Review There has been a growing interest in the intersection of data science, machine learning, and sustainability. Recent work by Rolnick et al. provides an overview of how ML is being applied to address climate change in the domains of electricity systems, transportation, buildings and cities, industry, farms and forests, carbon sequestration, climate prediction, societal impacts, solar geoengineering, education, and finance (Rolnick et al., 2019). Related to ecosystem restoration projects, they highlight the use of computer vision techniques, transfer learning, reinforcement learning, and control theory (Rolnick et al., 2019). For example, computer vision on satellite images is used for modeling the amount of carbon stored in forests (Rodriguez-Veiga et al., 2017). The model is then used for predicting the carbon storage potential of deforested areas. ML is also employed for verification of conservation projects through satellite imagery (Santamaria et al., 2020). More broadly, the field of Computational Sustainability has enabled computer scientists to apply their skills towards a broad range of sustainability challenges. Gomes et al. call for transformative synthesis by incorporating a combination of techniques from (1) data and ML, (2) optimization and dynamic model simulation, and (3) multi-agent crowdsourcing and citizen science (Gomes et al., 2019). Gardner et al. go on to investigate the links between specific areas of sustainability and broader social challenges by studying the role of global commodity supply chains in deforestation through ML modeling (Gardner et al., 2019). The question of malicious use and unintended consequences of ML in society has emerged from the early work of Norbert Wiener (1954) and others. 
More recently, such interest has led to the development of the areas of Social Informatics, Participatory Design, Computer-Supported Cooperative Work, Critical Algorithm Studies, and others. The main challenges they discuss include the negative impacts of biased and non-representative datasets (Mehrabi et al., 2019), the lack of diversity and inclusion in the perspectives taken into account (Hoffmann, 2019), and the need for algorithmic impact assessments that evaluate ML models based on the broader socio-technical context within which they are situated (Selbst et al., 2019). The decisions informing the design, development, and deployment of ML applications are often vulnerable to various forms of bias. The use of non-representative or culturally skewed datasets, oversimplifications in optimization algorithms, model interpretation assumptions, and self-reinforcing feedback loops have led to unintended and harmful consequences, for example the reinforcement of gender, sex, and race inequities in facial recognition models (Mehrabi et al., 2019; Angwin et al., 2019). In the context of ecosystem restoration, biases may result in damage and harm to ecosystems and stakeholder communities. Principles for ML governance in regenerative ecosystem practices could make use of a framework proposed by Suresh and Guttag which identifies the following biases: (1) historical bias, (2) representation bias, (3) measurement bias, (4) aggregation bias, (5) evaluation bias, and (6) deployment bias (Suresh and Guttag, 2019). Margins of error can arise at each step of the ML process, highlighting the importance of a systemic approach to risk mitigation. Environmental and social problems are ultimately interconnected and can therefore only be overcome by fundamentally changing the way society is organized, and not simply through technical interventions (Rico, 1998). 
Rico (1998) argues that ”human and environmental dimensions of sustainability are inseparable, and that this link is a result both of the aggregate effect of social relationships and actions as they influence the natural ecology, and of the impact of environmental changes on society” (Rico, 1998). The recent focus on investigating the downstream societal effects of the use of machine learning has led to the development of multiple assessment and governance models (European Commission High-Level Expert Group on AI, 2019; Latonero, 2018; Gasser and Almeida, 2017). We build on these works and aim to highlight the need for such methodological frameworks in the way ML is used in restoration projects, also building on prior work in climate governance models. The negotiations under the United Nations Framework Convention on Climate Change (UNFCCC) have instigated the 1997 Kyoto Protocol, the 2009 Copenhagen Accord, and the 2015 Paris Agreement. These can be considered as distinct climate governance models which have been studied by policymakers and other scholars. Held and Roger (2018) provide a comparative analysis of these models (Held and Roger, 2018), while other scholars bring political economy perspectives to the increasing but uneven uptake of transnational climate governance (Roger et al., 2017). Hale (2018) identifies the unique characteristics of the Paris Agreement which allow for the re-framing of the international politics of climate change as a ”catalytic” collective action problem (Hale, 2018). In collective action problems, individual stakeholders could benefit from cooperation. In particular, we hope our work could inspire new kinds of cooperation between applied machine learning researchers, policymakers, and IPLCs, leveraging traditional ecological knowledge systems. Uprety et al. provide a review of the use of TEK in ecological restoration and planning in 17 different kinds of restoration projects around the world (Uprety et al., 2012). 
However, scholars have also been concerned about the incorporation of traditional knowledge systems into ”top-down” approaches to ecological restoration, pointing out ethical and social challenges (Chalmers and Fabricius, 2007). We aim to bring these perspectives to data scientists and machine learning researchers working in sustainability, highlighting the need for participatory methodological frameworks that are centered on equity and collective action. 3. Leveraging Traditional Ecological Knowledge Ecologist Fikret Berkes (1999) brings together indigenous knowledge systems and Western scientific theories of conservation and biodiversity through his extensive fieldwork around the world. He explains TEK through four interrelated levels of ecosystem management, defined by a model known as the knowledge-practice-belief complex (Berkes, 2017). The levels in this model are: (1) the local knowledge of animals, plants, soils, and landscapes; (2) resource management, composed of local environmental knowledge, practices, and tools; (3) community and social organizing offering coordination, cooperation, and governance; and (4) worldviews involving general ethics and belief systems (Berkes, 2017). We propose that there’s an opportunity to further bridge knowledge gaps between TEK and scientific knowledge through participatory methods building on decades of fieldwork. Utilizing methodologies such as participatory action research and appreciative inquiry in the planning stage of ecosystem restoration projects could improve the well-being of climate-vulnerable communities by building trust and facilitating collaboration among stakeholders. It could allow machine learning engineers, community members, and stakeholders to collectively make decisions about data collection, usage, storage, and removal protocols to ensure data sovereignty and build local capacity. 
We suggest that any ecosystem restoration effort needs to consider and plan for the economic and social-equity aspects of the proposed project. For example, ML projects might leverage IPLCs’ deep knowledge and understanding of their land, while the IPLCs could co-benefit from the social and economic value created by the restored ecosystem services in the area. Luedeling et al. (2019) highlight the importance of drawing upon ”expert knowledge to ensure that relevant constraints are considered” (Luedeling et al., 2019). Working closely with experts and IPLCs will help guard against misguided strategy in the implementation stages by aligning incentives between stakeholders, including ecosystems, with well-being in mind. In what follows, we provide a brief analysis of how ML is used in the planning, execution, and monitoring stages of forest ecosystem restoration projects. We don’t aim to provide a full overview of ecological restoration practices and instead point interested readers to the work of Egan and Howell for a detailed review (Egan, 2005). 3.1. Planning of restoration ML on satellite and drone image data is being used in various ways throughout the planning stage of forest restoration projects. Through complex ML modeling, researchers aim to more accurately determine forest carbon sequestration potential worldwide (Mushegian, N., Zhu, A., Santini, G., Uliana, P., & Balingit, K., 2018; Rodriguez-Veiga et al., 2017). We propose that satellite and drone image data alone cannot be a sufficient resource in the restoration planning stage because ML-based pattern recognition algorithms may fail to capture the significance of socio-ecological aspects of the land. 
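As a deliberately simplified illustration of the kind of satellite-based carbon modeling referenced above, consider a toy regression from a vegetation index to plot-level carbon stock. All data, variable names, and the single-predictor model are our own hypothetical assumptions, not a description of any cited pipeline:

```python
# Illustrative sketch only: a toy regression relating a vegetation index
# derived from satellite bands (NDVI) to above-ground carbon stock.
# All numbers here are hypothetical; real pipelines use far richer
# features and models.

def ndvi(nir, red):
    """Normalized difference vegetation index from near-infrared/red bands."""
    return (nir - red) / (nir + red)

def fit_line(xs, ys):
    """Closed-form simple linear regression: returns (slope, intercept)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical field plots: (NIR, red) reflectances and measured carbon (t/ha).
plots = [(0.60, 0.10, 110.0), (0.50, 0.15, 75.0),
         (0.40, 0.20, 45.0), (0.30, 0.25, 15.0)]
xs = [ndvi(nir, red) for nir, red, _ in plots]
ys = [c for _, _, c in plots]
slope, intercept = fit_line(xs, ys)

# Predict carbon for a new pixel from its reflectances alone.
predicted = slope * ndvi(0.55, 0.12) + intercept
```

The point of the sketch is its limitation: the model sees only spectral reflectance, so the socio-ecological significance of a site is invisible to it, which is precisely the gap that participatory methods and TEK aim to fill.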
In order to communicate the importance of these aspects, ethnographers have developed the concept of the cultural keystone place to denote ”a given site or location with high cultural salience for one or more groups of people and which plays, or has played in the past, an exceptional role in a people’s cultural identity, as reflected in their day to day living, food production and other resource-based activities, land and resource management, language, stories, history, and social and ceremonial practices” (Cuerrier et al., 2015). We suggest that ML practitioners will benefit from new organizing principles that allow for, and incentivize, teaming up with people from interdisciplinary fields during all stages of an ecosystem restoration project. In order to partner with IPLCs, a technical ML team might employ qualitative research methods while collaborating with local municipalities, foresters, and other subject matter experts. The organizing principle needs to enable open and transparent participation of all involved stakeholders, including seeing the IPLCs as equally participating actors throughout the process. Restoration efforts looking to enable higher levels of biodiversity need to assess the flow of ecosystem services in the target restoration areas (Cuerrier et al., 2015). Ecosystem services encompass supporting, provisioning, regulating, and cultural services (Cuerrier et al., 2015), and there exists a reciprocal relationship between people and ecosystems. Ethnographic surveys of communities impacted by ecological degradation could serve as helpful resources to ML practitioners. The consideration of cultural ecosystem services (Brancalion et al., 2014) could further enable practitioners to plan how a proposed ML project could be integrated into a socio-economic and socio-ecological context. 
For example, the planning project stage could explore topics such as (1) impact on the well-being of local communities, (2) long-term environmental and biodiversity impact, (3) economic opportunities likely to be created through the implementation of the project, (4) provision of new educational and knowledge-generation value, and (5) resource stewardship. Ecosystems maintain stability through interactions between various feedback loops (Neutel and Thorne, 2017). ML systems interfacing with ecosystems could create ecosystemic dependencies, with ML feedback loops becoming an integral part of the ecosystem. We propose the assessment of necessary resources for ongoing ML efforts and the consideration of ML applications toward and beyond a threshold for ecosystemic self-sufficiency. 3.2. Execution of restoration Environmental scholars have argued for the need to recognize the roles of communities in actively cultivating, improving, and positively contributing to ecosystem services, challenging ”the false concept of ecosystem services as a one-way flow of benefits from ecosystems to humans” (Comberti et al., 2015). Building on cultural ecosystem services as well as the concept of ”services to ecosystems” (Comberti et al., 2015), we suggest that ML practitioners who execute restoration projects need to see these projects as continuous reiterative efforts in which involved stakeholders such as IPLCs might be the ones providing the needed services to ecosystems, in return sharing in the social, economic, and other value created by the restoration projects. 
We propose that in order to ensure equity in the IPLCs’ involvement in restoration efforts, ML developers need to create rich interaction interfaces which lower the barriers to access, allow people from varying backgrounds and levels of technical skill to participate, provide explanations for algorithmic outcomes (Doshi-Velez and Kim, 2017), and allow people to intervene at every step (Orseau and Armstrong, 2016). 3.3. Monitoring of restoration Among the many applications of ML operating on satellite and drone image data in the monitoring stages of forest ecosystem restoration projects are detecting and mapping selective logging (Hethcoat et al., 2019), high-resolution land cover mapping (Robinson et al., 2020), and predicting forest wildfire spread (Radke et al., 2019). Monitoring is also an important stage in the lifecycle of any ML model deployed in a real-world system. Continuous monitoring, integration, re-training, and tuning of model parameters is part of a more general quality assurance process which ultimately aims to ensure that an ML model is able to perform efficiently as people interact with it. An increased number of feedback loops between model-users and model-creators will positively contribute to models which are adaptive to the dynamic context within which they operate. As explored by Ortega et al., monitoring and control of system activity, framed as assurance, is an integral part of building safe artificial intelligence (Ortega et al., 2018). We suggest that community engagement could help large-scale continuous ML model assurance efforts, enabling the alignment of incentives between human and algorithmic actors. During the execution of the restoration stage, partners in the project collaboratively act and implement solutions to address an agreed-upon goal. Similarly, during the monitoring stage, the outcomes of their actions are collectively reviewed and evaluated. 
This could include engaging communities as a sensor network, sharing examples of the changes they see or early indications that something new or different might be happening. Should the efforts of the implementation be found unsuccessful or only partially successful, the adjusted results are fed into the next iteration of the planning cycle. We suggest that throughout this adaptive process it is important to establish ongoing commitments for each project stakeholder. For the ML research team, this could include establishing practical ethical guidelines, adopting an impact assessment framework, involving impacted communities in active participation rather than passive acceptance to ensure cultural relevance to the community, as well as building community capacity and resiliency with new skills. This could also include an exit strategy for ML interventions once desired outcomes have been achieved. 4. Enabling catalytic cooperation through a shift in ML governance models An interdisciplinary worldview helps us recognize the need for multifaceted feedback loops in order to inform the discussion of how ML could meaningfully contribute to sustainability. As Gregory Bateson would say, some questions are not meant to be answered but to show us new perspectives on the relations between all involved actors (Bateson, 2000). Similarly, the question of how to use ML to alleviate the fragility of our planet could bring awareness to these relations and inform new organizing principles. One such organizing principle is modeling the responsible design, development, and deployment of ML as a tragedy of the commons problem (Hardin, 2009; Ostrom, 1990; Reilly and Kort, 2001; Greco and Floridi, 2004), as well as investigating how the problem of reducing the negative impacts of ML departs from the logic of the tragedy of the commons. 
Leveraging work on other tragedy of the commons systems such as climate action (Held and Roger, 2018), we could make more informed decisions about the organizing and governance principles that could enable positive results, reducing the negative impacts of ML-based ecosystem restoration efforts. The problem of reducing environmental degradation has the characteristics of joint products, preference heterogeneity, and increasing returns discussed by Hale (Hale, 2018). Joint products means that actions could benefit the actors while also contributing public good to communities. Preference heterogeneity relates to the fact that there’s no symmetry of preferences across actors and actions. In the tragedy of the commons model, the free-rider problem poses that an actor is generally disincentivized from action by the efforts of others. In reality, many actions reinforce themselves through a variety of feedback loops that generate increasing returns to action over time: action in the past can reduce the cost and increase the benefit of action in the future. We propose that many ML applications fit the features of joint products, preference heterogeneity, and increasing returns, which creates the possibility of re-framing the methodological and governance frameworks towards a ”catalytic cooperation” model (Hale, 2018). An example of such a governance model is the 2015 Paris Agreement on climate change. By doing a comparative analysis of the problem structures of climate action and the use of ML, we find that they exhibit similarities in their distributional effects, the spread of individual vs. collective harms, and first-order vs. second-order impacts. Hence, we propose that it is helpful to restructure and address ML governance questions through a catalytic cooperation model which recognizes that: • Good intentions are not good enough. 
Acknowledging the fallacy of technological solutionism, there’s a need to stimulate incremental action within academia, the private sector, civil society, etc. Frameworks can facilitate measurability, which helps actionability and adds to the conceptual toolbelt for the assessment of complex problems arising from the interplay of many agents (Schiff et al., 2020). • Scale matters. There’s a need for new ways to participate. For every area of sustainability, it is possible to shift from a worldview where the negative impacts of ML are diffused in society towards an increasing number of positive examples of ML contributing to the socio-economic and socio-ecological well-being of people and the planet. Increasing the number of actors working on these issues lowers the costs and risks for more actors to become involved in this space, until a ”catalytic effect” kicks in: a tipping point where the new behaviors and norms become self-reinforcing (Hale, 2018). • There’s a need for intergenerational consent, consensus, commitments, and cooperation through transparent normative goal-setting and benchmarking. Metrics frameworks, standards, best practices, and guidelines could contribute to an iterative evaluation process which enables collective action aligned with society’s broader values and beliefs (Musikanski et al., 2018). 5. Conclusion Ecological restoration and regeneration efforts sit at the heart of moving towards accelerating positive environmental impacts. However, there’s a growing need for governance frameworks which could enable collective action by empowering community engagement, equity, and long-term impacts. Traditional ecological knowledge systems of indigenous peoples around the world have been globally recognized as a major asset in local restoration efforts. 
By bringing interdisciplinary perspectives to the work of data science and machine learning scholars, we aim to highlight the methodological opportunities and principles for integrating traditional ecological knowledge systems in the design, development, and deployment of ML-based forest restoration projects. We have provided an overview of how machine learning is used in the planning, execution, and monitoring stages of ecological restoration and hope to engage in applied work in our ongoing research. Future work needs to also consider what methodological frameworks could bridge the gaps between applied ML-based projects and environmental policy. We imagine that ensuring stakeholder equity could unleash conceptual tools, building on the principles we have hereby proposed. Furthermore, catalytic cooperation shows immense potential as a systemic approach towards such an integration between data scientists, indigenous and local communities, policymakers, and others, while optimizing for ecosystem regeneration, maximizing biodiversity, and community well-being. References Angwin et al. (2019) Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner. 2019. Machine bias: There’s software used across the country to predict future criminals and it’s biased against blacks. 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing (2019). Bateson (2000) Gregory Bateson. 2000. Steps to an ecology of mind: Collected essays in anthropology, psychiatry, evolution, and epistemology. University of Chicago Press. Berkes (1993) Fikret Berkes. 1993. Traditional ecological knowledge in perspective. Traditional ecological knowledge: Concepts and cases 1 (1993). Berkes (2017) Fikret Berkes. 2017. Sacred Ecology: Traditional Ecological Knowledge and Resource Management. Routledge. Brancalion et al. (2014) Pedro HS Brancalion, Ines Villarroel Cardozo, Allan Camatta, James Aronson, and Ricardo R Rodrigues. 2014. 
Cultural ecosystem services and popular perceptions of the benefits of an ecological restoration project in the Brazilian Atlantic Forest. Restoration Ecology 22, 1 (2014), 65–71. Chalmers and Fabricius (2007) Nigel Chalmers and Christo Fabricius. 2007. Expert and generalist local knowledge about land-cover change on South Africa’s Wild Coast: can local ecological knowledge add value to science? Ecology and Society 12, 1 (2007). Comberti et al. (2015) Claudia Comberti, Thomas F Thornton, V Wyllie de Echeverria, and Trista Patterson. 2015. Ecosystem services or services to ecosystems? Valuing cultivation and reciprocal relationships between humans and ecosystems. Global Environmental Change 34 (2015), 247–262. Cuerrier et al. (2015) Alain Cuerrier, Nancy J Turner, Thiago C Gomes, Ann Garibaldi, and Ashleigh Downing. 2015. Cultural keystone places: conservation and restoration in cultural landscapes. Journal of Ethnobiology 35, 3 (2015), 427–448. Doshi-Velez and Kim (2017) Finale Doshi-Velez and Been Kim. 2017. Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608 (2017). Egan (2005) Dave Egan. 2005. The historical ecology handbook: a restorationist’s guide to reference ecosystems. Island Press. European Commission High-Level Expert Group on AI (2019) European Commission High-Level Expert Group on AI. 2019. Trustworthy AI Assessment List. Technical Report. https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=60440 Gardner et al. (2019) Toby A Gardner, Magnus Benzie, Jan Börner, Elena Dawkins, Stephen Fick, Rachael Garrett, Javier Godar, A Grimard, Sarah Lake, Rasmus K Larsen, et al. 2019. Transparency and sustainability in global commodity supply chains. World Development 121 (2019), 163–177. Gasser and Almeida (2017) Urs Gasser and Virgilio AF Almeida. 2017. A layered model for AI governance. IEEE Internet Computing 21, 6 (2017), 58–62. Gomes et al. 
(2019) Carla Gomes, Thomas Dietterich, Christopher Barrett, Jon Conrad, Bistra Dilkina, Stefano Ermon, Fei Fang, Andrew Farnsworth, Alan Fern, Xiaoli Fern, et al. 2019. Computational sustainability: Computing for a better world and a sustainable future. Commun. ACM 62, 9 (2019), 56–65. Greco and Floridi (2004) Gian Maria Greco and Luciano Floridi. 2004. The tragedy of the digital commons. Ethics and Information Technology 6, 2 (2004), 73–81. Hale (2018) Thomas Hale. 2018. Catalytic cooperation. Blavatnik School of Government Working Paper Series 26 (2018). Hardin (2009) Garrett Hardin. 2009. The tragedy of the commons. Journal of Natural Resources Policy Research 1, 3 (2009), 243–253. Held and Roger (2018) David Held and Charles Barclay Roger. 2018. Three Models of Global Climate Governance: From Kyoto to Paris and Beyond. Global Policy 9 (2018), 527–537. Hethcoat et al. (2019) Matthew G Hethcoat, David P Edwards, Joao MB Carreiras, Robert G Bryant, Filipe M Franca, and Shaun Quegan. 2019. A machine learning approach to map tropical selective logging. Remote sensing of environment 221 (2019), 569–582. Hoffmann (2019) Anna Lauren Hoffmann. 2019. Where fairness fails: data, algorithms, and the limits of antidiscrimination discourse. Information, Communication & Society 22, 7 (2019), 900–915. IPCC (2014) IPCC. 2014. Fifth Assessment Report Summary for Policymakers. https://www.ipcc.ch/site/assets/uploads/2018/02/AR5_SYR_FINAL_SPM.pdf Latonero (2018) Mark Latonero. 2018. Governing artificial intelligence: Upholding human rights & dignity. Data & Society (2018), 1–37. Luedeling et al. (2019) Eike Luedeling, Jan Börner, Wulf Amelung, Katja Schiffers, Keith Shepherd, and Todd Rosenstock. 2019. Forest restoration: Overlooked constraints. Science 366, 6463 (2019), 315–315. Mehrabi et al. (2019) Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, and Aram Galstyan. 2019. A survey on bias and fairness in machine learning. 
Evolution of the spectral index after inflation Ali A. Asgari Amir H. Abbassi* Department of Physics, School of Sciences, Tarbiat Modares University, P.O.Box 14155-4838, Tehran, Iran Abstract In this article we investigate the time evolution of the adiabatic (curvature) and isocurvature (entropy) spectral indices after the end of inflation for all cosmological scales and two different initial conditions. For this purpose, we first derive an explicit equation for the time evolution of the comoving curvature perturbation (which may be regarded as a generalized Mukhanov-Sasaki equation). It will become manifest that the evolution of the adiabatic spectral index depends strongly on the initial conditions and is constant, as expected, only for super-Hubble scales and adiabatic initial conditions. Moreover, it will be shown that for isocurvature perturbations the adiabatic spectral index approaches a constant value after recombination. Finally, we re-investigate the Sachs-Wolfe effect and show that the fudge factor $\frac{1}{3}$ in the adiabatic ordinary Sachs-Wolfe formula must be replaced by 0.4. 1 Introduction Inflation was first proposed to resolve three classical cosmological problems: the horizon, flatness and monopole problems [1]. It also explains the origin of the CMB anisotropy and structure formation [2, 3]; indeed, during inflation the quantum vacuum fluctuations of the scalar field(s) on scales less than the Hubble horizon are magnified into classical perturbations in the scalar field(s) on scales larger than the Hubble horizon [4, 5]. It can be shown that these perturbations have a nearly scale-invariant spectrum and can explain the origin of the inhomogeneities in the present universe, such as large-scale structures and CMB anisotropies [2, 6]. These classical perturbations can be described by perturbative cosmic potentials which are related to the FLRW metric perturbations and possibly to cosmic fluid perturbations.
One of these perturbative cosmic potentials is the comoving curvature perturbation $\mathcal{R}$, which is significant in cosmology for the following reasons: • It is conserved for adiabatic perturbations when the scales of the perturbations are much longer than the Hubble horizon [6] • It is gauge-invariant and closely related to physical observables [7] • The Sasaki-Stewart $\delta N$-formula expresses the perturbation of the e-folding number in terms of $\mathcal{R}$ [8] • The scalar primordial power spectrum usually refers to the power spectrum of $\mathcal{R}$, which characterizes the adiabatic mode [5, 9, 10] Furthermore, it can be shown that the linear perturbations of the scale factor and of the signature of the spatial curvature of the universe in the comoving gauge depend only on $\mathcal{R}$ [11, 12, 13] $$\displaystyle\delta a\left(t,\textbf{x}\right)=a\left(t\right)\mathcal{R}\left(t,\textbf{x}\right),$$ $$\displaystyle\delta K\left(t,\textbf{x}\right)=-\frac{2}{3}\nabla^{2}\mathcal{R}\left(t,\textbf{x}\right).$$ Finally, $\mathcal{R}$ can be used to connect observed cosmological perturbations in the adiabatic mode with quantum fluctuations at very early times [6]. The dynamics of $\mathcal{R}$ during inflation is described by the well-known Mukhanov-Sasaki equation [14, 15]. This equation yields the power spectrum and spectral index of $\mathcal{R}$ in the inflation era. In this paper we generalize the Mukhanov-Sasaki equation to cover the entire history of the universe and then, by invoking a simple model, show how $\mathcal{R}$ evolves after inflation. We also discuss the spectral index evolution after inflation. The outline of this paper is as follows. In Section 2, we present an explicit equation for the time evolution of $\mathcal{R}$ which can be used for all eras in the history of the universe and then investigate its solutions for some very simple cases.
In Section 3, we investigate a universe which contains a mixture of radiation and matter and then show that the $\mathcal{R}$-evolution equation can be solved after coupling to the Kodama-Sasaki equation [16, 17]. We consider adiabatic and isocurvature initial conditions and present the numerical solutions. We also supply an analytic method which helps us find approximate solutions. Moreover, we present the numerical results for the evolution of the curvature spectral index and of the entropy spectral index in the post-inflationary universe. In Section 4, we re-investigate the Sachs-Wolfe effect and point out that the $\frac{1}{3}$ factor in the Sachs-Wolfe formula must be enhanced, as will be shown. We present our conclusions in Section 5. 2 A general equation for evolution of $\mathcal{R}_{q}$ We may consider the metric of the universe as [6, 18, 19] $$ds^{2}=a^{2}\left\{-\left(1+E\right)d\tau^{2}+2\partial_{i}Fd\tau dx^{i}+\left[\left(1+A\right)\delta_{ij}+\partial_{i}\partial_{j}B\right]dx^{i}dx^{j}\right\},$$ (1) which is the FLRW metric with $K=0$ in comoving quasi-Cartesian coordinates accompanied by the most general scalar linear perturbations. Similarly, the energy-momentum tensor of the cosmic fluid can be written as $$\displaystyle T_{00}=a^{2}\left[\bar{\rho}\left(1+E\right)+\delta\rho\right],$$ (2) $$\displaystyle T_{i0}=a^{2}\left[\bar{p}\partial_{i}F-\left(\frac{\bar{\rho}+\bar{p}}{a}\right)\partial_{i}\left(\delta u\right)\right],$$ (3) $$\displaystyle T_{ij}=a^{2}\left[\bar{p}\left(1+A\right)\delta_{ij}+\delta p\delta_{ij}+\bar{p}\partial_{i}\partial_{j}B+\partial_{i}\partial_{j}\Pi^{S}\right],$$ (4) where $\delta u$ and $\Pi^{S}$ are respectively the scalar velocity potential and the scalar anisotropic inertia of the cosmic fluid. $\Pi^{S}$ represents departures of the cosmic fluid from perfectness. Furthermore, $\rho=\bar{\rho}+\delta\rho$ and $p=\bar{p}+\delta p$ are the energy density and pressure of the cosmic fluid respectively.
Notice that a bar over a quantity stands for its unperturbed value. It can be shown that, except for $\Pi^{S}$, none of the perturbative scalars in equations (1) to (4) is gauge-invariant [6], so we may invoke combinational gauge-invariant scalars like the Bardeen potentials [7] $$\displaystyle\Psi=-\frac{A}{2}-\mathcal{H}\sigma,$$ (5) $$\displaystyle\Phi=\frac{E}{2}+\mathcal{H}\sigma+\sigma^{\prime}.$$ (6) Here the prime stands for the derivative with respect to $\tau$ and $\mathcal{H}=Ha$ is the comoving Hubble parameter. Furthermore, $\sigma=F-\frac{1}{2}B^{\prime}$ is the shear potential of the cosmic fluid. Some other gauge-invariant scalars are $$\displaystyle\mathcal{R}=\frac{A}{2}+H\delta u,$$ (7) $$\displaystyle\zeta=\frac{A}{2}-\mathcal{H}\frac{\delta\rho}{\bar{\rho}^{\prime}},$$ (8) $$\displaystyle\Delta=a\frac{\delta\rho}{\bar{\rho}^{\prime}}+\delta u,$$ (9) $$\displaystyle\Gamma=\delta p-{c_{s}}^{2}\delta\rho,$$ (10) where ${c_{s}}^{2}$ is the adiabatic sound speed in the cosmic fluid. $\mathcal{R}$ is known as the comoving (intrinsic) curvature perturbation. According to the perturbative form of the field equations and the energy-momentum conservation law we can write $$\displaystyle\mathcal{R}_{q}$$ $$\displaystyle=$$ $$\displaystyle\frac{-2\mathcal{H}^{2}+\mathcal{H}^{\prime}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\Psi_{q}-\frac{\mathcal{H}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\Psi^{\prime}_{q}+\frac{8\pi G\mathcal{H}^{2}a^{2}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\Pi^{S}_{q},$$ (11) $$\displaystyle\mathcal{R}^{\prime}_{q}$$ $$\displaystyle=$$ $$\displaystyle\frac{\mathcal{H}{c_{s}}^{2}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}q^{2}\Psi_{q}-\frac{4\pi G\mathcal{H}a^{2}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\Lambda_{q},$$ (12) where $\mathcal{R}_{q}$ denotes the Fourier transform of $\mathcal{R}$ with comoving wave number $q$ and $\Lambda_{q}=\Gamma_{q}-q^{2}\Pi^{S}_{q}$.
Notice that we have taken $K=0$, which is in agreement with the observational data [20]. By combining equations (11) and (12), after some tedious but straightforward calculation we can derive an explicit equation in terms of $\mathcal{R}_{q}$ $$\mathcal{R}^{\prime}_{q}=-\frac{4\pi G\mathcal{H}a^{2}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\Lambda_{q},\quad{c_{s}}^{2}=0$$ (13) and $$\displaystyle\mathcal{R}^{\prime\prime}_{q}+2\frac{\mathcal{Z}^{\prime}}{\mathcal{Z}}\mathcal{R}^{\prime}_{q}+{c_{s}}^{2}q^{2}\mathcal{R}_{q}=\\ \displaystyle\qquad\frac{4\pi Ga^{2}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\Bigg{[}\Bigg{(}-4\mathcal{H}^{2}+\frac{\left(\mathcal{H}{c_{s}}^{2}\right)^{\prime}}{{c_{s}}^{2}}\Bigg{)}\Lambda_{q}-\mathcal{H}\Lambda^{\prime}_{q}-2q^{2}\mathcal{H}^{2}{c_{s}}^{2}\Pi^{S}_{q}\Bigg{]},\quad{c_{s}}^{2}\neq 0$$ (14) where $\mathcal{Z}=\frac{a}{\mathcal{H}}\sqrt{\frac{|\mathcal{H}^{2}-\mathcal{H}^{\prime}|}{{c_{s}}^{2}}}$. Equations (13) and (14) together cover the most general case of the cosmic fluid, including the scalar field, so the Mukhanov-Sasaki equation is a special case of this equation. For a pure dust universe equation (13) yields $\mathcal{R}_{q}=const$. This means $\mathcal{R}$ is conserved if the cosmic fluid is dust, regardless of the comoving wave number scale. On the other hand, for the pure radiation case equation (14) reduces to $\mathcal{R}^{\prime\prime}_{q}+\frac{q^{2}}{3}\mathcal{R}_{q}=0$ and consequently $\mathcal{R}_{q}\propto\cos\left(\frac{q}{\sqrt{3}}\tau\right)$. Besides, if we suppose the radiation era starts immediately after inflation, we may apply the following initial condition [6] $$\tau\longrightarrow 0\qquad:\qquad\mathcal{R}_{q}\longrightarrow Nq^{-2+\frac{n_{s_{0}}}{2}}\quad\left(N\simeq 10^{-5}\quad\rm{and}\quad n_{s_{0}}\simeq 0.96\right)$$ ($\tau=0$ is supposed to be the end of the inflation era and the start of the radiation epoch.)
Thus $$\mathcal{R}_{q}\left(\tau\right)=Nq^{-2+\frac{n_{s_{0}}}{2}}\cos\left(\frac{q}{\sqrt{3}}\tau\right).$$ (15) Now let's turn to the inflaton case. Every scalar field can be treated as a perfect fluid. For the homogeneous inflaton field $\bar{\varphi}\left(t\right)$ with potential $V\left(\bar{\varphi}\right)$ we have $$\displaystyle\bar{\rho}=\frac{1}{2a^{2}}\bar{\varphi}^{\prime}{}^{2}+V\left(\bar{\varphi}\right),$$ (16) $$\displaystyle\bar{p}=\frac{1}{2a^{2}}\bar{\varphi}^{\prime}{}^{2}-V\left(\bar{\varphi}\right).$$ (17) It can be shown that under the linear perturbations of the metric, i.e. equation (1), $$\displaystyle\delta\rho=-\frac{1}{2a^{2}}E\bar{\varphi}^{\prime}{}^{2}+\frac{1}{a^{2}}\bar{\varphi}^{\prime}\delta\varphi^{\prime}+\frac{\partial V}{\partial\bar{\varphi}}\delta\varphi,$$ (18) $$\displaystyle\delta p=-\frac{1}{2a^{2}}E\bar{\varphi}^{\prime}{}^{2}+\frac{1}{a^{2}}\bar{\varphi}^{\prime}\delta\varphi^{\prime}-\frac{\partial V}{\partial\bar{\varphi}}\delta\varphi.$$ (19) Now let's confine ourselves to the comoving gauge, which implies $\delta\varphi=0$; thus $\delta\rho=\delta p$, consequently ${c_{s}}^{2}=1$ and $\Gamma_{q}=\Lambda_{q}=0$. Note that the anisotropic inertia $\Pi^{S}_{q}$ vanishes for scalar fields. So equation (14) reduces to $$\mathcal{R}^{\prime\prime}_{q}+2\frac{\mathcal{Z}^{\prime}}{\mathcal{Z}}\mathcal{R}^{\prime}_{q}+q^{2}\mathcal{R}_{q}=0.$$ (20) On the other hand, according to the Friedmann equation, $\mathcal{H}^{2}-\mathcal{H}^{\prime}=4\pi G\bar{\varphi}^{\prime}{}^{2}$; thus $\mathcal{Z}=\frac{a\bar{\varphi}^{\prime}}{\mathcal{H}}$. Now by introducing the Mukhanov-Sasaki variable $v_{q}=\mathcal{Z}\mathcal{R}_{q}$, equation (20) reduces to $$v^{\prime\prime}_{q}+\left(q^{2}-\frac{\mathcal{Z}^{\prime\prime}}{\mathcal{Z}}\right)v_{q}=0,$$ (21) which is the famous Mukhanov-Sasaki equation.
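The radiation-era solution (15) can be checked numerically. The following sketch (our own verification, not part of the paper) confirms by central finite differences that $\cos\left(\frac{q}{\sqrt{3}}\tau\right)$ satisfies $\mathcal{R}^{\prime\prime}_{q}+\frac{q^{2}}{3}\mathcal{R}_{q}=0$:

```python
import math

# Verification sketch: the radiation-era mode R_q(tau) ∝ cos(q*tau/sqrt(3))
# of equation (15) should satisfy R'' + (q^2/3) R = 0.

def R(q, tau):
    """Radiation-era comoving curvature mode; the q-dependent amplitude
    N q^{-2+n_s0/2} is an overall constant and is set to 1 here."""
    return math.cos(q * tau / math.sqrt(3.0))

def residual(q, tau, h=1e-4):
    """|R'' + (q^2/3) R| with R'' estimated by a central second difference."""
    d2R = (R(q, tau + h) - 2.0 * R(q, tau) + R(q, tau - h)) / h**2
    return abs(d2R + (q**2 / 3.0) * R(q, tau))

q = 2.0
worst = max(residual(q, 0.1 * k) for k in range(1, 100))
print(worst)  # limited only by finite-difference and round-off error
```

The residual is of the order of the floating-point noise of the second difference, as expected for an exact solution.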
3 Evolution of $\mathcal{R}_{q}$ in a simplified universe In this section we consider the case where the cosmic fluid consists of two perfect fluids, matter and radiation, which do not interact with each other, i.e., there is no energy or momentum transfer between them. This model was introduced first by Peebles and Yu [21] and was used by Seljak [22] in order to approximate the CMB anisotropy. Compared to the real universe this is a simplification since, contrary to CDM, baryonic matter does interact with photons. Indeed, the radiation components of the real universe, i.e. neutrinos and photons, behave like a perfect fluid only until their decoupling [17]. We now have two fluid components, so $\rho=\rho_{R}+\rho_{M}$ where $\bar{\rho}_{M}\propto\frac{1}{a^{3}}$ and $\bar{\rho}_{R}\propto\frac{1}{a^{4}}$. Let's define the normalized scale factor as $$y=\frac{a}{a_{eq}}=\frac{\bar{\rho}_{M}}{\bar{\rho}_{R}},$$ (22) where $a_{eq}$ is the scale factor at the time of matter-radiation equality. It is clear that $$\displaystyle\bar{\rho}_{M}=\frac{y}{y+1}\bar{\rho},$$ $$\displaystyle\bar{\rho}_{R}=\frac{1}{y+1}\bar{\rho}.$$ Consequently, $$\omega=\frac{1}{3\left(y+1\right)},\hskip 35.0pt{c_{s}}^{2}=\frac{4}{3\left(3y+4\right)},$$ (23) Also $$\mathcal{H}=\frac{y^{\prime}}{y}=\frac{\mathcal{H}_{eq}}{\sqrt{2}}\frac{\sqrt{y+1}}{y},$$ (24) where $\mathcal{H}_{eq}$ is the comoving Hubble parameter at the time of matter-radiation equality. It is not hard to show that $$\Gamma=-\bar{\rho}_{M}{c_{s}}^{2}\mathcal{S},$$ (25) where $\mathcal{S}=\delta_{M}-\delta_{R}=\frac{\delta\rho_{M}}{\bar{\rho}_{M}}-\frac{3}{4}\frac{\delta\rho_{R}}{\bar{\rho}_{R}}$ is the entropy perturbation between matter and radiation.
Thus $$\Lambda_{q}=\Gamma_{q}=-\bar{\rho}_{M_{eq}}\frac{4\mathcal{S}_{q}}{3y^{3}\left(3y+4\right)}=-\frac{\mathcal{H}_{eq}^{2}}{4\pi Ga_{eq}^{2}}\frac{\mathcal{S}_{q}}{y^{3}\left(3y+4\right)},$$ (26) By substituting equations (23), (24) and (26) in equation (14) we find $$\left(y+1\right)\mathcal{R}_{q}^{\ast\ast}+\frac{21y^{2}+36y+16}{2y\left(3y+4\right)}\mathcal{R}_{q}^{\ast}+\frac{8}{3\left(3y+4\right)}\left(\frac{q}{\mathcal{H}_{eq}}\right)^{2}\mathcal{R}_{q}=\frac{2}{y\left(3y+4\right)}\mathcal{S}_{q}+\frac{4\left(y+1\right)}{\left(3y+4\right)^{2}}\mathcal{S}_{q}^{\ast},$$ (27) where "$\ast$" stands for the partial derivative with respect to $y$. The adiabatic solution of equation (27) for super-Hubble scales may be found by putting $\mathcal{S}_{q}=0$ and neglecting the term containing $\frac{q}{\mathcal{H}_{eq}}$, i.e. $$\mathcal{R}_{q}^{o}=C_{1}\left(q\right)\left(\ln\frac{\sqrt{y+1}+1}{\sqrt{y}}-\frac{\sqrt{y+1}}{y}\frac{3y+2}{3y+4}\right)+C_{2}\left(q\right).$$ (28) The first term of equation (28) decays through time and has no significance at late times, so we conclude the conservation of $\mathcal{R}$, as expected [6]. In order to solve equation (27) in general, we must determine $\mathcal{S}_{q}$. For a mixture of matter and radiation, $\mathcal{S}_{q}$ may be obtained from the Kodama-Sasaki equation [16] $$\frac{1}{\mathcal{H}^{2}}\mathcal{S}^{\prime\prime}_{q}+3\frac{{c_{s}}^{2}}{\mathcal{H}}\mathcal{S}^{\prime}_{q}=-\frac{q^{2}}{\mathcal{H}^{2}}\left[\tilde{\Delta}_{q}-\frac{1}{3}\left(3{c_{s}}^{2}-1\right)\mathcal{S}_{q}\right],$$ (29) where $\tilde{\Delta}=H\Delta$.
By re-writing equation (29) in terms of $y$ we find $$\displaystyle y\mathcal{S}_{q}^{\ast\ast}+\left(1+\frac{4}{3y+4}-\frac{y+2}{2\left(y+1\right)}\right)\mathcal{S}_{q}^{\ast}=-\frac{q^{2}}{\mathcal{H}_{eq}^{2}}\frac{2y}{y+1}\left(\tilde{\Delta}_{q}+\frac{y}{3y+4}\mathcal{S}_{q}\right).$$ (30) On the other hand, we have $$\mathcal{R}^{\prime}=-\frac{\mathcal{H}{c_{s}}^{2}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\nabla^{2}\Psi-\frac{4\pi G\mathcal{H}a^{2}}{\mathcal{H}^{2}-\mathcal{H}^{\prime}}\Lambda.$$ (31) Furthermore, from Poisson's equation we can write $$\nabla^{2}\Psi=-12\pi G\left(\bar{\rho}+\bar{p}\right)a^{2}\tilde{\Delta}.$$ (32) Combining equations (31) and (32) yields $$\mathcal{R}_{q}^{\ast}=\frac{4}{y\left(3y+4\right)}\left(\tilde{\Delta}_{q}+\frac{y}{3y+4}\mathcal{S}_{q}\right).$$ (33) Substituting equation (33) in equation (30) yields $$\left(y+1\right)\mathcal{S}_{q}^{\ast\ast}+\frac{3y^{2}+12y+8}{2y\left(3y+4\right)}\mathcal{S}_{q}^{\ast}=-\frac{1}{2}\left(\frac{q}{\mathcal{H}_{eq}}\right)^{2}y\left(3y+4\right)\mathcal{R}_{q}^{\ast}.$$ (34) Equations (27) and (34) are coupled and must be solved simultaneously $$\displaystyle\left\{\begin{aligned} &\displaystyle\left(y+1\right)\mathcal{S}_{q}^{\ast\ast}+\frac{3y^{2}+12y+8}{2y\left(3y+4\right)}\mathcal{S}_{q}^{\ast}=-\frac{1}{2}\epsilon y\left(3y+4\right)\mathcal{R}_{q}^{\ast},\\ \\ &\displaystyle\left(y+1\right)\mathcal{R}_{q}^{\ast\ast}+\frac{21y^{2}+36y+16}{2y\left(3y+4\right)}\mathcal{R}_{q}^{\ast}+\frac{8\epsilon}{3\left(3y+4\right)}\mathcal{R}_{q}=\frac{2}{y\left(3y+4\right)}\mathcal{S}_{q}+\frac{4\left(y+1\right)}{\left(3y+4\right)^{2}}\mathcal{S}_{q}^{\ast},\end{aligned}\right.$$ where $\epsilon=\left(\frac{q}{\mathcal{H}_{eq}}\right)^{2}$. At early times, all cosmologically interesting scales are outside the Hubble horizon, so we may find the behavior of the solutions at the early stage by setting $\epsilon=0$.
In this case we will have two independent solutions • Solution 1 $$\displaystyle\left\{\begin{aligned} &\displaystyle\mathcal{S}_{q}=0,\\ \\ &\displaystyle\mathcal{R}_{q}=const.\end{aligned}\right.$$ • Solution 2 $$\displaystyle\left\{\begin{aligned} &\displaystyle\mathcal{S}_{q}=const,\\ \\ &\displaystyle\mathcal{R}_{q}=\frac{y}{3y+4}\mathcal{S}_{q}=\frac{1}{3}\left(1-3{c_{s}}^{2}\right)\mathcal{S}_{q}.\end{aligned}\right.$$ Solutions 1 and 2 are called the adiabatic and isocurvature initial conditions respectively. Note that they are solutions of equations (27) and (34) only at early times, when the scales of the perturbations are much longer than the Hubble horizon. The expression "initial condition" refers to this point. From equation (27) it is clear that the entropy perturbation is a source for the curvature perturbation, so if $\mathcal{S}\neq 0$, $\mathcal{R}$ is never conserved. In order to solve equations (27) and (34) analytically, we may expand $\mathcal{S}_{q}$ and $\mathcal{R}_{q}$ in terms of $\epsilon$ by the Frobenius method [23] $$\displaystyle\mathcal{S}_{q}\left(y\right)=\epsilon^{\alpha}\sum_{n=0}^{\infty}{\epsilon^{n}\mathcal{S}_{n}\left(y\right)},$$ (35) $$\displaystyle\mathcal{R}_{q}\left(y\right)=\epsilon^{\beta}\sum_{n=0}^{\infty}{\epsilon^{n}\mathcal{R}_{n}\left(y\right),}$$ (36) where $\alpha$ and $\beta$, in general, are two arbitrary complex numbers.
After substituting equations (35) and (36) in equations (27) and (34) and also setting $\alpha=\beta$ we have $$\displaystyle\left\{\begin{aligned} &\displaystyle\left(y+1\right)\mathcal{S}_{0}^{\ast\ast}+\frac{3y^{2}+12y+8}{2y\left(3y+4\right)}\mathcal{S}_{0}^{\ast}=0.\\ \\ &\displaystyle\left(y+1\right)\mathcal{R}_{0}^{\ast\ast}+\frac{21y^{2}+36y+16}{2y\left(3y+4\right)}\mathcal{R}_{0}^{\ast}=\frac{2}{y\left(3y+4\right)}\mathcal{S}_{0}+\frac{4\left(y+1\right)}{\left(3y+4\right)^{2}}\mathcal{S}_{0}^{\ast}.\end{aligned}\right.$$ (37) For $n\geq 1$ we find two recursive equations as follows $$\displaystyle\left\{\begin{aligned} &\displaystyle\left(y+1\right)\mathcal{S}_{n}^{\ast\ast}+\frac{3y^{2}+12y+8}{2y\left(3y+4\right)}\mathcal{S}_{n}^{\ast}=-\frac{1}{2}y\left(3y+4\right)\mathcal{R}_{n-1}^{\ast}.\\ \\ &\displaystyle\left(y+1\right)\mathcal{R}_{n}^{\ast\ast}+\frac{21y^{2}+36y+16}{2y\left(3y+4\right)}\mathcal{R}_{n}^{\ast}=-\frac{8}{3\left(3y+4\right)}\mathcal{R}_{n-1}+\frac{2}{y\left(3y+4\right)}\mathcal{S}_{n}+\frac{4\left(y+1\right)}{\left(3y+4\right)^{2}}\mathcal{S}_{n}^{\ast}.\end{aligned}\right.$$ (38) After fixing the initial conditions, we can solve equations (37) and (38). On the other hand, the adiabatic initial condition according to inflationary theory may be written as [6] $$\epsilon\longrightarrow 0\quad\mbox{or}\quad y\longrightarrow 0\quad:\quad\mathcal{R}_{q}\longrightarrow Nq^{-2+\frac{n_{s_{0}}}{2}}\quad\mbox{and}\quad\mathcal{S}_{q}\longrightarrow 0,$$ (39) where, according to the observational data, $N\simeq 10^{-5}$ and $n_{s_{0}}\simeq 0.96$ [20].
So under the adiabatic initial condition, we have $$\displaystyle\alpha=\beta=-1+\frac{n_{s_{0}}}{4},$$ $$\displaystyle\mathcal{S}_{0}\left(y\right)=\mathcal{S}_{1}\left(y\right)=0,$$ $$\displaystyle\mathcal{R}_{0}\left(y\right)=N\mathcal{H}_{eq}^{-2+\frac{n_{s_{0}}}{2}},$$ $$\displaystyle\mathcal{R}_{1}\left(y\right)=-\frac{16}{45}N\left[\frac{1}{y}+\frac{13}{3\left(3y+4\right)}-\frac{2\sqrt{y+1}\left(3y+2\right)}{y\left(3y+4\right)}+2\ln{\frac{1+\sqrt{y+1}}{2}}-\frac{13}{12}\right].$$ So, when the scales of the perturbations are not extremely smaller than the Hubble horizon, under the adiabatic initial condition the following approximation is appropriate $$\displaystyle\left\{\begin{aligned} &\displaystyle\mathcal{S}_{q}\left(y\right)\simeq 0,\\ \\ &\displaystyle\mathcal{R}_{q}\left(y\right)\simeq Nq^{-2+\frac{n_{s_{0}}}{2}}-\frac{16N}{45\mathcal{H}_{eq}^{2}}\Bigg{[}\frac{1}{y}+\frac{13}{3\left(3y+4\right)}-\frac{2\sqrt{y+1}\left(3y+2\right)}{y\left(3y+4\right)}+2\ln{\frac{1+\sqrt{y+1}}{2}}-\frac{13}{12}\Bigg{]}q^{\frac{n_{s_{0}}}{2}}.\end{aligned}\right.$$ On the other hand, the isocurvature initial condition may be written as $$\epsilon\longrightarrow 0\quad\mbox{or}\quad y\longrightarrow 0\quad:\quad\mathcal{S}_{q}\longrightarrow Mq^{-2+\frac{n_{iso_{0}}}{2}}\quad\mbox{and}\quad\mathcal{R}_{q}\longrightarrow\frac{y}{3y+4}\mathcal{S}_{q},$$ (40) where, in accordance with the Liddle and Mazumdar model [24], $n_{iso_{0}}\simeq 4.43$. We may also take $M\simeq 10^{-5}$, similar to the amplitude of the adiabatic perturbation. Unfortunately, in this case solving the recursive equations results in the calculation of some enormous integrals, so we leave it aside. Nevertheless, it is possible to solve equations (27) and (34) numerically. We present the results in some figures. In Figures 1 and 2 the curves of $\mathcal{R}_{q}$ for the adiabatic and isocurvature initial conditions have been plotted.
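The numerical integration of the coupled system (27) and (34) can be sketched as follows. This is our own illustrative scheme (not the authors' code): a fixed-step fourth-order Runge-Kutta integrator in $y$, with the adiabatic initial condition imposed deep in the super-Hubble regime and the amplitude normalized to $\mathcal{R}_{q}=1$; the starting point $y_{0}=0.01$ and the value $\epsilon=25$ for the sub-Hubble example are arbitrary choices.

```python
# Sketch (not the authors' code): integrate the coupled system (27) & (34)
# for R_q and S_q with fixed-step RK4.  State u = [R, R*, S, S*],
# eps = (q / H_eq)^2 as in the text.

def rhs(y, u, eps):
    R, dR, S, dS = u
    A = (21*y**2 + 36*y + 16) / (2*y*(3*y + 4))   # friction term of eq. (27)
    B = (3*y**2 + 12*y + 8) / (2*y*(3*y + 4))     # friction term of eq. (34)
    d2R = (-A*dR - 8*eps/(3*(3*y + 4))*R
           + 2*S/(y*(3*y + 4)) + 4*(y + 1)/(3*y + 4)**2 * dS) / (y + 1)
    d2S = (-B*dS - 0.5*eps*y*(3*y + 4)*dR) / (y + 1)
    return [dR, d2R, dS, d2S]

def integrate(eps, y0=0.01, y1=10.0, h=1e-3):
    y, u = y0, [1.0, 0.0, 0.0, 0.0]   # adiabatic IC: R = 1, S = 0
    max_dev = 0.0                     # largest excursion of R from 1
    while y < y1:
        k1 = rhs(y, u, eps)
        k2 = rhs(y + h/2, [u[i] + h/2*k1[i] for i in range(4)], eps)
        k3 = rhs(y + h/2, [u[i] + h/2*k2[i] for i in range(4)], eps)
        k4 = rhs(y + h, [u[i] + h*k3[i] for i in range(4)], eps)
        u = [u[i] + h/6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(4)]
        y += h
        max_dev = max(max_dev, abs(u[0] - 1.0))
    return u[0], max_dev

R_super, dev_super = integrate(eps=0.0)   # super-Hubble limit: R conserved
R_sub, dev_sub = integrate(eps=25.0)      # sub-Hubble mode: R evolves
print(R_super, dev_super, dev_sub)
```

In the super-Hubble limit ($\epsilon=0$) the curvature perturbation is conserved, reproducing Solution 1, while a sub-Hubble mode oscillates and its amplitude evolves appreciably.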
The spectral indices of the adiabatic and isocurvature perturbations are defined respectively as $$\displaystyle n_{s}\left(q\right)=4+\frac{q}{\mathcal{P}_{\mathcal{R}}\left(q\right)}\frac{\partial\mathcal{P}_{\mathcal{R}}\left(q\right)}{\partial q}=4+2\frac{q}{\mathcal{R}_{q}}\frac{\partial\mathcal{R}_{q}}{\partial q},$$ (41) $$\displaystyle n_{iso}\left(q\right)=4+\frac{q}{\mathcal{P}_{\mathcal{S}}\left(q\right)}\frac{\partial\mathcal{P}_{\mathcal{S}}\left(q\right)}{\partial q}=4+2\frac{q}{\mathcal{S}_{q}}\frac{\partial\mathcal{S}_{q}}{\partial q},$$ (42) where $\mathcal{P}_{\mathcal{R}}\left(q\right)$ and $\mathcal{P}_{\mathcal{S}}\left(q\right)$ are respectively the power spectra of $\mathcal{R}_{q}$ and $\mathcal{S}_{q}$. The curves of $n_{s}\left(q\right)$ and $n_{iso}\left(q\right)$ in terms of $y$ have been plotted in Figures 3, 4, 5 and 6 for the adiabatic and isocurvature initial conditions respectively. It is clear that $n_{s}$ and $n_{iso}$ depend severely on $q$, so we may say the spectral indices are running. 4 The Sachs-Wolfe effect: a new survey The Cosmic Microwave Background (CMB) was discovered in a study of noise backgrounds in a radio telescope by Penzias and Wilson in 1965 [25]. Two years later, Sachs and Wolfe pointed out that the CMB must show a temperature anisotropy as a result of photons traveling in the perturbed universe [26]. An important contribution to the temperature anisotropy of the CMB is the ordinary Sachs-Wolfe effect, which stems from the intrinsic temperature inhomogeneities on the last scattering surface and also the inhomogeneities of the metric at the time of last scattering.
It can be shown that [27, 28, 29] $$\left[\frac{\Delta T\left(\hat{\textbf{n}}\right)}{T_{0}}\right]_{\rm{S.W.}}=\zeta_{R}\left(t_{L},\hat{\textbf{n}}r_{L}\right)+\Psi\left(t_{L},\hat{\textbf{n}}r_{L}\right)+\Phi\left(t_{L},\hat{\textbf{n}}r_{L}\right),$$ (43) where $\zeta_{R}$ denotes the curvature perturbation of the radiation in the uniform density slices. Moreover, $\hat{\textbf{n}}$ is the unit vector that stands for the direction of observation. Notice that all quantities are evaluated on the last scattering surface. Scales well outside the Hubble horizon have the dominant imprint in the ordinary Sachs-Wolfe effect. In multipole space, the ordinary Sachs-Wolfe effect is responsible for $l\lesssim 30$ [5]. Moreover, its angular power spectrum has a plateau which is known as the Sachs-Wolfe plateau. In the mixture of radiation and dust equation (43) reduces to $$\left[\frac{\Delta T\left(\hat{\textbf{n}}\right)}{T_{0}}\right]_{\rm{S.W.}}=\zeta_{R}\left(t_{L},\hat{\textbf{n}}r_{L}\right)+2\Psi\left(t_{L},\hat{\textbf{n}}r_{L}\right).$$ (44) Calculating $\zeta_{R}$ for a radiation-dust mixture is straightforward, because $\mathcal{S}=3\left(\zeta_{M}-\zeta_{R}\right)$. On the other hand, $\zeta$ is the weighted average of $\zeta_{R}$ and $\zeta_{M}$, i.e.
$$\zeta=\frac{4}{3y+4}\zeta_{R}+\frac{3y}{3y+4}\zeta_{M}.$$ (45) Thus $$\zeta_{R}=\zeta-\frac{y}{3y+4}\mathcal{S}.$$ (46) Moreover, we have $$\zeta=\mathcal{R}+\frac{1}{3\left(\mathcal{H}^{2}-\mathcal{H}^{\prime}\right)}\nabla^{2}\Psi.$$ (47) Consequently $$\zeta_{R}=\mathcal{R}+\frac{4y^{2}}{3\mathcal{H}_{eq}^{2}\left(3y+4\right)}\nabla^{2}\Psi-\frac{y}{3y+4}\mathcal{S}.$$ (48) Substituting equation (48) into equation (44) we find that $$\left[\frac{\Delta T\left(\hat{\textbf{n}}\right)}{T_{0}}\right]_{\rm{S.W.}}=\mathcal{R}\left(t_{L},\hat{\textbf{n}}r_{L}\right)+2\Psi\left(t_{L},\hat{\textbf{n}}r_{L}\right)-\frac{y_{L}}{3y_{L}+4}\mathcal{S}\left(t_{L},\hat{\textbf{n}}r_{L}\right).$$ (49) Note that we have omitted the term containing $\nabla^{2}\Psi$ in equation (49), because small scales have a subdominant contribution to the ordinary Sachs-Wolfe effect. In the case of the adiabatic initial condition equation (49) reduces to $$\left[\frac{\Delta T\left(\hat{\textbf{n}}\right)}{T_{0}}\right]_{\rm{S.W.}}^{\rm{Ad}}=\mathcal{R}\left(t_{L},\hat{\textbf{n}}r_{L}\right)+2\Psi\left(t_{L},\hat{\textbf{n}}r_{L}\right).$$ (50) On the other hand, $$\frac{2}{3\left(\omega+1\right)\mathcal{H}}\Psi^{\prime}+\frac{3\omega+5}{3\left(\omega+1\right)}\Psi=-\mathcal{R},$$ (51) or $$\Psi^{\ast}+\frac{5y+6}{2y\left(y+1\right)}\Psi=-\frac{3y+4}{2y\left(y+1\right)}\mathcal{R}.$$ (52) Here $\mathcal{R}$ is independent of $y$, so equation (52) yields $$\Psi=\frac{-9y^{3}-2y^{2}+8y+16-16\sqrt{y+1}}{15y^{3}}\mathcal{R}.$$ (53) Notice that $$\lim_{y\longrightarrow 0}\Psi=-\frac{2}{3}\mathcal{R},\qquad\lim_{y\longrightarrow\infty}\Psi=-\frac{3}{5}\mathcal{R},$$ which is expected for pure radiation and pure dust respectively.
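Both limits of equation (53), and the resulting Sachs-Wolfe prefactor, are easy to verify numerically. The sketch below (our own check, not from the paper) evaluates the ratio $f(y)=\Psi/\mathcal{R}$ of equation (53); since the adiabatic signal in equation (50) is $\mathcal{R}+2\Psi=\left(2+1/f\right)\Psi$, evaluating it at $y_{L}=2.98$ reproduces the quoted factor of about 0.4:

```python
import math

# Verification sketch for equation (53): f(y) = Psi / R.
def f(y):
    return (-9*y**3 - 2*y**2 + 8*y + 16 - 16*math.sqrt(y + 1)) / (15*y**3)

# Small-y and large-y behavior: should approach -2/3 and -3/5.
print(f(1e-3), f(1e6))

# Adiabatic Sachs-Wolfe signal: R + 2*Psi = (2 + 1/f) * Psi,
# evaluated at y_L = 2.98 from equation (55).
factor = 2 + 1/f(2.98)
print(factor)  # about 0.4
```

The small-$y$ evaluation is done at $y=10^{-3}$ rather than smaller values to avoid the floating-point cancellation in the numerator.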
By substituting equation (53) in equation (50) we find that $$\left[\frac{\Delta T\left(\hat{\textbf{n}}\right)}{T_{0}}\right]_{\rm{S.W.}}^{\rm{Ad}}=\left[2-\frac{15y_{L}^{3}}{9y_{L}^{3}+2y_{L}^{2}-8y_{L}-16+16\sqrt{y_{L}+1}}\right]\Psi\left(t_{L},\hat{\textbf{n}}r_{L}\right).$$ (54) In addition, we have $$y_{L}=\frac{1+z_{eq}}{1+z_{L}}=\frac{1+3263}{1+1091}=2.98.$$ (55) So $$\left[\frac{\Delta T\left(\hat{\textbf{n}}\right)}{T_{0}}\right]_{\rm{S.W.}}^{\rm{Ad}}=0.4\Psi\left(t_{L},\hat{\textbf{n}}r_{L}\right).$$ (56) So the fudge factor of the Sachs-Wolfe effect should be replaced by 0.4, which is greater than the ordinary 1/3 factor [29, 30, 31]. Finally, let's turn to the isocurvature initial condition. In this case we have $\mathcal{R}=\frac{y}{3y+4}\mathcal{S}$, so $$\left[\frac{\Delta T\left(\hat{\textbf{n}}\right)}{T_{0}}\right]_{\rm{S.W.}}^{\rm{Iso}}=2\Psi\left(t_{L},\hat{\textbf{n}}r_{L}\right),$$ (57) which coincides completely with previous results [29, 30]. 5 Discussion We have derived a neat equation for the evolution of the comoving curvature perturbation and then found its solutions for some simple cases. We showed that the Mukhanov-Sasaki equation is a special case of this equation. We also found its numerical solutions for the radiation-matter mixture by fixing two different initial conditions, adiabatic and isocurvature. As we have seen, in this case the equation cannot be solved alone due to the presence of another unidentified quantity, the entropy perturbation, so we coupled the equation to the Kodama-Sasaki equation. We also investigated the time evolution of the adiabatic and isocurvature spectral indices for both initial conditions separately. We showed that not only is the curvature spectral index constant for the adiabatic initial condition and extreme super-Hubble scales, but the entropy spectral index is constant for the isocurvature initial condition and extreme super-Hubble scales too.
It seems that $n_{iso}$ for $y\gtrsim 2$ and extreme super-Hubble scales is approximately flat regardless of the initial condition. Moreover, we found that $n_{s}$ is roughly constant for $y\gtrsim 4$ regardless of the scale of the perturbations, even when the isocurvature initial condition is chosen. Finally, we re-investigated the ordinary Sachs-Wolfe effect and showed that the factor $\frac{1}{3}$ should be increased by $\sim 0.07$ when a more realistic situation is considered. This is consistent with the pedagogical derivation of the Sachs-Wolfe effect by White and Hu. References [1] Guth A H, Inflationary universe: a possible solution to the horizon and flatness problems, Phys.Rev. D 23 (1981) 347. [2] Guth A H and Pi So-Young, Fluctuations in the new inflationary universe, Phys.Rev.Lett. 49 (1982) 1110. [3] Bardeen J M, Steinhardt P J and Turner M H, Spontaneous creation of almost scale-free density perturbations in an inflationary universe, Phys.Rev. D 28 (1983) 679. [4] Mukhanov V, Physical foundations of cosmology, Cambridge University Press (2005). [5] Lyth D H and Liddle A R, The primordial density perturbation: cosmology, inflation and the origin of structure, Cambridge University Press (2009). [6] Weinberg S, Cosmology, Oxford University Press (2008). [7] Bardeen J M, Gauge-invariant cosmological perturbations, Phys.Rev. D 22 (1980) 1882. [8] Sasaki M and Stewart E D, A general analytic formula for the spectral index of the density perturbations produced during inflation, Prog.Theor.Phys. 95 (1996) 71. [9] Gordon C, Wands D, Bassett B A and Maartens R, Adiabatic and entropy perturbations from inflation, Phys.Rev. D 63 (2000) 023506. [10] Bartolo N and Matarrese S, Adiabatic and isocurvature perturbations from inflation: Power spectra and consistency relations, Phys.Rev. D 64 (2001) 123504. [11] Liddle A R and Lyth D H, Cosmological inflation and large-scale structure, Cambridge University Press (2000).
[12] Lyth D H, Malik K A and Sasaki M, A general proof of the conservation of the curvature perturbation, J. Cosmol. Astropart. Phys. 05 (2005) 004. [13] Lyth D H, Large-scale energy-density perturbations and inflation , Phys.Rev. D 31 (1985) 1792. [14] Mukhanov V, Gravitational instability of the universe filled with a scalar field, JETP Lett 41 (1985) 493. [15] Sasaki M, Large scale quantum fluctuations in the inflationary universe, Prog.Theor. Phys. 76 (1986) 1036. [16] Kodama H and Sasaki M, Cosmological perturbation theory, Progr. Theor. Phys. Suppl 78 (1984) 1. [17] Kurki-Suonio H, Cosmological perturbation theory,(2005) unpublished lecture notes available online at http://theory.physics.helsinki.fi/$\sim$cmb/ [18] Hwang J and Noh H, Relativistic hydrodynamic cosmological perturbations, Gen. Rel. Grav. 31 (1999) 1131. [19] Ellis G F R, Maartens R and Mac Callum M A H Relativistic cosmology, Cambridge University Press (2012). [20] Ade P A R et al., Planck 2013 results. XVI. Cosmological parameters, arXiv:1303.5076. [21] Peebles P J E and Yu J T, Primeval adiabatic perturbation in an expanding universe, Astrophys. J. 162 (1970) 815. [22] Seljak U, A two-fluid approximation for calculating the cosmic microwave background anisotropies, Astrophys. J. 435 (1994) L87. [23] Arfken G B, Weber H J and Harris F E, Mathematical methods for physicists, seventh edition: A comprehensive guide, Academic Press (2013). [24] Liddle A R and Mazumdar A, Perturbation amplitude in isocurvature inflation scenarios, Phys.Rev. D 61 (2000) 123507. [25] Penzias A A and Wilson R W, A measurement of excess antenna temperature at 4080 Mc/s., Astrophys. J. 142 (1965) 419. [26] Sachs R K and Wolfe A M, Perturbations of a cosmological model and angular variations of the microwave background, Astrophys. J. 147 (1967) 73. [27] Hwang J and Noh H, Sachs-Wolfe effect: Gauge independence and a general expression , Phys.Rev. D 59 (1999) 067302. 
[28] Peter P and Uzan J P, Primordial cosmology, Oxford University Press (2009). [29] Durrer R, The cosmic microwave background, Oxford University Press (2008). [30] Hwang J, Padmanabhan T, Lahav O and Noh H, 1/3 factor in the CMB Sachs-Wolfe effect, Phys. Rev. D 65 (2002) 043005. [31] White M and Hu W, The Sachs-Wolfe effect, Astron. Astrophys. 321 (1997) 8.
Deep network for rolling shutter rectification Praveen K, Lokesh Kumar T, and A. N. Rajagopalan Abstract CMOS sensors employ a row-wise acquisition mechanism while imaging a scene, which can result in undesired motion artifacts known as rolling shutter (RS) distortions in the captured image. Existing single-image RS rectification methods attempt to account for these distortions either by using algorithms tailored for a specific class of scenes, which requires knowledge of intrinsic camera parameters, or by a learning-based framework with known ground truth motion parameters. In this paper, we propose an end-to-end deep neural network for the challenging task of single-image RS rectification. Our network consists of a motion block, a trajectory module, a row block, an RS rectification module and an RS regeneration module (which is used only during training). The motion block predicts the camera pose for every row of the input RS-distorted image, while the trajectory module fits the estimated motion parameters to a third-order polynomial. The row block predicts the camera motion that must be associated with every pixel in the target, i.e., the RS-rectified image. Finally, the RS rectification module uses the motion trajectory and the output of the row block to warp the input RS image and arrive at a distortion-free image. For faster convergence during training, we additionally use an RS regeneration module which compares the input RS image with the ground truth image distorted by the estimated motion parameters. The end-to-end formulation in our model does not constrain the estimated motion to the ground-truth motion parameters, thereby successfully rectifying RS images with complex real-life camera motion. Experiments on synthetic and real datasets reveal that our network outperforms prior art both qualitatively and quantitatively. 
1 Introduction Most present-day cameras are equipped with CMOS sensors due to advantages such as slimmer readout circuitry, lower cost, and higher frame rate over their CCD counterparts. While capturing an image, the CMOS sensor array is exposed to a scene in a sequential, row-wise manner. The flip side is that, in the presence of camera motion, the inter-row delay leads to undesirable geometric effects, also known as rolling shutter (RS) distortions. This is because the rows of the RS image do not necessarily sense the same camera motion. A prominent effect of RS distortion is the manifestation of straight lines as curves, which calls for correction of the RS effect, also termed RS rectification. Rectification of RS distortion involves finding the camera motion for every row of the RS image (row motions). Each row of the RS image is then warped using the estimated row motions, taking one of the rows as reference. More than aesthetic appeal, the implications of RS rectification are critical for vision tasks such as image registration, structure from motion (SFM), etc., which perform scene inference based on geometric attributes in the captured images. Multi-frame RS rectification methods use videos [10, 21, 7, 2] and estimate the motion across frames using point correspondences. The inter-frame motion helps with estimating the row motion of each RS frame, aiding the warping process to obtain distortion-free frames. Different algorithms have been proposed for RS deblurring [20, 13], RS super-resolution [16], RS registration [26], and change detection [15]. The works in [25, 30] have addressed the problem of depth-aware RS rectification and again rely on multiple input frames. A differential SFM based framework is employed in [30] to perform RS rectification of input images captured by a slow-moving camera. [25] can handle the additional effects of occlusion that arise while capturing images using a fast-moving RS camera. 
Some methods have used external sensor information such as a gyroscope [4, 5, 14] to stabilize RS distortion in videos. However, these methods are strongly constrained by the availability as well as the reliability of external sensor data. The aforementioned methods are data-hungry and time-consuming, with the exception of [2]. Moreover, they are rendered unusable when only a single image is available. [19, 17, 9] rely on straight lines becoming curves as a prominent effect to correct RS distortions. However, these methods are tailored for scenes that consist predominantly of straight lines and hence fail to generalize to natural images where actual curves are present in the 3D world. Moreover, they require knowledge of intrinsic camera parameters for RS rectification. In this paper, we address the problem of single-image RS rectification using a deep neural network. The prior work that uses a deep network for RS rectification of 2D scenes is [18], wherein a neural network is trained using RS images as input and ground truth distortion (i.e., motion) parameters as the target. Given an RS image during inference, the trained neural network predicts motion parameters corresponding to a set of key rows, which is then followed by interpolation over all rows. A main drawback of this approach is that it restricts the solution space of estimated camera parameters to the ground truth parameters used during training. Moreover, arriving at the rectified image is challenging since the association between the estimated motion parameters and the pixel positions of the ground truth global shutter (GS) image is unknown. [18] attempts to solve this problem using an optimization framework as a complex post-processing step. Recent findings in image restoration advocate that end-to-end training performs better than decoupled or piece-wise training, as shown in image deblurring [12, 24], ghost imaging [27], hyperspectral imaging [1] and image super-resolution [11, 6]. 
As also noted in [29], a fisheye distortion rectification network, regressing for ground truth distortion parameters and then rectifying the distorted image gives sub-optimal performance compared to an end-to-end approach that directly targets the clean image. To this end, we propose a simple and elegant end-to-end deep network which uses the ground truth image to guide the rectification process during training. RS rectification is done in a single step during inference. 2 RS Image Generation and Rectification Rolling shutter distortion due to the row-wise exposure of the sensor array depends on the relative motion between camera and scene. Fig. LABEL:rollingShutter shows a scene captured using an RS camera under different camera trajectories, i.e., different values of $[r_{x},r_{y},r_{z},t_{x},t_{y},t_{z}]$ where $t_{\phi}$ and $r_{\phi}$, $\phi\in\{x,y,z\}$, indicate translations along and rotations about the $\phi$ axis, respectively. As observed in the figure and as also stated in [18], the effect of $t_{y},t_{z}$ and $r_{x}$ on RS distortion is negligible compared to the effect of $t_{x},r_{y}$ and $r_{z}$. Moreover, the effect of $r_{y}$ can be approximated by $t_{x}$ for large focal length and when the movement of the camera towards or away from the scene is minimal. Hence, it suffices to consider only $t_{x}$ and $r_{z}$ as essentially responsible for RS image formation. The GS image coordinates $(x_{\mbox{gs}},y_{\mbox{gs}})$ are related to the RS image coordinates $(x_{\mbox{rs}},y_{\mbox{rs}})$ by $$\begin{split}x_{\mbox{rs}}=x_{\mbox{gs}}\cdot\,\textrm{cos}\,(r_{z}(x_{\mbox{rs}}))-y_{\mbox{gs}}\cdot\,\textrm{sin}\,(r_{z}(x_{\mbox{rs}}))+t_{x}(x_{\mbox{rs}})\\ y_{\mbox{rs}}=x_{\mbox{gs}}\cdot\,\textrm{sin}\,(r_{z}(x_{\mbox{rs}}))+y_{\mbox{gs}}\cdot\,\textrm{cos}\,(r_{z}(x_{\mbox{rs}}))\end{split}$$ (1) where $r_{z}(x_{\mbox{rs}})$ and $t_{x}(x_{\mbox{rs}})$ are the rotation and translation motion experienced by the $x_{\mbox{rs}}^{th}$ row of the RS image. 
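As a quick sanity check, the forward mapping of Eq. (1) can be written as a small function (a minimal numpy sketch; the function name and per-row scalar arguments are ours, not from the paper):

```python
import numpy as np

def gs_to_rs(x_gs, y_gs, r_z, t_x):
    # Eq. (1): rotate the GS coordinate by the row-dependent angle
    # r_z(x_rs) and shift by the row-dependent translation t_x(x_rs).
    c, s = np.cos(r_z), np.sin(r_z)
    x_rs = x_gs * c - y_gs * s + t_x
    y_rs = x_gs * s + y_gs * c
    return x_rs, y_rs
```

With zero motion the mapping is the identity, and a pure translation $t_x$ simply shifts the row coordinate, matching the intuition behind the per-row warp.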
The GS-RS image pairs required for training our neural network are synthesized using Eq. (1). Given a GS image and the rotation and translation motion for every row of the RS image, the RS image can be generated using either source-to-target (S-T) or target-to-source (T-S) mapping, with GS coordinates as source and RS coordinates as target. Since the motion parameters are associated with RS coordinates, S-T mapping is not employed for RS image generation. In T-S mapping, each pixel location of the target RS image is multiplied by the corresponding warping matrix formed from the motion parameters to yield the source GS pixel location. The intensity at the resulting GS pixel coordinate is found using bilinear interpolation and then copied to the RS pixel location. Given an RS image and motion parameters for each row of the RS image, the RS observation can be rectified akin to the process of RS image formation, except that the source is now the RS image while the target is the RS rectified image. In S-T mapping, each pixel location of the RS image along with its row-wise camera motion is substituted in Eq. (1) to get the RS rectified (target) pixel location. However, there is a possibility that some of the pixels in the target RS rectified image may go unfilled, leaving holes in the resulting image. In T-S mapping, for every pixel location of the RS rectified image (target), the same set of equations (i.e., Eq. (1)) can be used to solve for the RS image (source) coordinates, provided the camera motion acting on the RS rectified coordinates is known. 3 Network architecture Our main objective is to find a direct mapping from the input RS image to the target RS rectified image. This requires estimation of row-wise camera motion parameters, and the correspondence between the estimated motion parameters and target pixel locations. We achieve the above in a principled manner as follows. 
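The T-S synthesis described above can be sketched as follows: invert Eq. (1) per RS pixel, then bilinearly sample the GS image. This is a simplified single-channel numpy illustration under our own naming (the actual pipeline operates on color images):

```python
import numpy as np

def rs_to_gs(x_rs, y_rs, r_z, t_x):
    # Invert Eq. (1): undo the row-dependent translation, then apply
    # the inverse rotation to recover the source GS coordinate.
    c, s = np.cos(r_z), np.sin(r_z)
    x_shift = x_rs - t_x
    return x_shift * c + y_rs * s, -x_shift * s + y_rs * c

def bilinear(img, x, y):
    # Sample a 2D image at a fractional (row x, col y) location.
    h, w = img.shape
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, h - 1), min(y0 + 1, w - 1)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[x0, y0] + (1 - dx) * dy * img[x0, y1]
            + dx * (1 - dy) * img[x1, y0] + dx * dy * img[x1, y1])

def generate_rs(gs_img, r_z_rows, t_x_rows):
    # Target-to-source mapping: for every RS pixel, look up its GS
    # source location and interpolate the intensity there.
    h, w = gs_img.shape
    rs = np.zeros_like(gs_img, dtype=float)
    for xr in range(h):            # xr indexes the RS row
        for yr in range(w):
            xg, yg = rs_to_gs(xr, yr, r_z_rows[xr], t_x_rows[xr])
            if 0 <= xg <= h - 1 and 0 <= yg <= w - 1:
                rs[xr, yr] = bilinear(gs_img, xg, yg)
    return rs
```

Out-of-bounds source locations are left at zero, which is exactly the origin of the boundary holes that the visibility masks of Section 3.1 later account for.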
We propose to use image features of the input RS image for estimating camera motion parameters and devise a mapping function to relate the estimated motion parameters with pixel locations of the target image. Our network architecture (Fig. 1) consists of five basic modules: motion block, trajectory module, row block, RS regeneration module and RS rectification module. The motion block predicts camera motion parameters ($t_{x}$ and $r_{z}$) for each row of the input RS image. The trajectory module ensures that the estimated camera motion parameters follow a smooth and continuous curve as we traverse the rows of the RS image, in compliance with real-life camera trajectories. For each pixel location in the target image, the corresponding camera motion is found using the row block. The outputs of the row block and trajectory module are used by the RS rectification module to warp the input RS image to get the RS rectified image. For faster convergence during training and to better condition the optimization process, we also employ an RS regeneration module which takes motion parameters from the trajectory module and warps the GS image to estimate the given (input) RS image. A detailed discussion of each module follows. Motion block This consists of a base block followed by $t_{x}$ (translation) and $r_{z}$ (rotation) blocks, respectively. The base block extracts features from the input RS image which are then used to find the row-wise translation and rotation motion of the input image. Thus, the motion block takes an input color image of size $r\times r\times 3$ and outputs two 1D vectors of length $r$ indicating rotation and translation motion parameters for each row of the input RS image. The base block is designed using three convolutional layers. Both the translation and rotation blocks, which take the output of the base network, are designed using three convolutional layers followed by two fully connected (FC) layers. 
The final FC layer is of dimension [$r$,1], reflecting the motion for every row of the input RS image. Each convolutional layer is followed by batch normalization. Row block As discussed in the second section, every pixel coordinate in the GS (target) image is substituted in Eq. (1) along with the corresponding camera motion to get the RS (source) image coordinate. However, the camera motion acting on each GS pixel to form the RS image is known only to the extent that it is one of the motions experienced by the rows of the RS image. Specifically, for a given input RS image, all the pixels in a row stem from a single camera motion, and the motion will generally be different for each row. In contrast, pixels in a row of the GS image need not be influenced by a single motion. The ambiguity of which motion to associate with a pixel in the target image was addressed in [18] as a post-processing step, which is a complicated exercise and implicitly constrains the estimated motion parameters. We propose to use a deep network to solve this issue, thus rendering our network end-to-end. The row block takes an image of size $r\times r\times 3$ and outputs a matrix of dimension $r\times r$, with each location indicating which row motion of the RS image must be taken from the estimated motion parameters for rectification. In real camera trajectories, camera motion is typically smooth. Consequently, the output of the row block at each coordinate location can be expected to be close to its corresponding row number. Hence, for both stability and faster convergence, we learn to solve only for the residual (motivated by [3, 6]). The residual image is an $r\times r$ matrix and is the output of the row block. This image is added to another matrix (say $A$) which is initialized with its own row number, i.e., $A(i,j)=i$, for the reason stated above. The resulting matrix values indicate which camera motion is to be taken from the estimated motion parameters for rectification of the RS image. 
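The residual trick above can be illustrated in a few lines (our own minimal sketch; `residual` stands in for the network's $r\times r$ output):

```python
import numpy as np

def row_assignment(residual):
    # A(i, j) = i: every target pixel starts off assigned to the motion
    # of its own row; the learnt residual then nudges that assignment.
    r = residual.shape[0]
    A = np.tile(np.arange(r, dtype=float)[:, None], (1, r))
    return A + residual
```

With a zero residual, pixel $(i, j)$ simply uses the motion of RS row $i$, which is the smooth-trajectory prior that makes the residual small and easy to learn.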
From now on, we refer to the output of the row block as the sum of the learnt residual with a matrix initialized with its row number at each coordinate position. The row block consists of five convolutional layers, with each layer followed by batch normalization and an activation function. The use of three layers for the base block and a total of six convolution layers for the motion block is partly motivated by [18] (which uses five convolution layers for motion estimation). Since the objective of the row block (finding the residual) is comparatively less complex than that of the motion block, we used only three layers. 3.1 Loss functions Given an input RS image, the trajectory module, row block and RS rectification module can rectify it to give an RS distortion-free or rectified image. We employ different loss functions to enable the network to learn the rectification process. The first loss is the mean squared error (MSE) between the rectified RS image and the ground truth image, but with a modification. In the rectified RS image, it is possible that certain regions on the boundary are not recovered (when compared with the GS image) since these regions were not present in the RS image itself due to camera motion. This can be noticed in Fig. 5 (third row), where the building has been rectified but there are regions on the boundary where the rectification algorithm could not retrieve pixel values as they were not present in the original RS image. To account for this effect, we used a visibility-aware MSE loss where the MSE is considered between two pixels only if the intensity in at least one of the channels in the rectified RS image is non-zero. Let $I_{\mbox{rs}},I_{\mbox{gs}},I_{\mbox{rs\_rec}}$ be the input RS, ground truth GS, and RS rectified image, respectively. Then, we define the mask $M_{\mbox{rs\_rec}}$ such that $$M_{\mbox{rs\_rec}}(i,j)=\begin{cases}0&\mbox{if}\sum\limits_{k=1}^{3}I_{\mbox{rs\_rec}}(i,j,k)=0\\ 1&\,\textrm{otherwise}\end{cases}$$ where $k$ indicates the color channel of the RGB image. 
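The visibility-aware comparison can be sketched as follows (a numpy illustration with hypothetical names; the mask zeroes out boundary pixels that the warp could not fill, and the same masked error is reused for every loss term):

```python
import numpy as np

def visibility_mask(img):
    # 0 where all three color channels are zero (unrecovered boundary),
    # 1 everywhere else, matching the definition of M_rs_rec above.
    return (img.sum(axis=2) != 0).astype(float)

def masked_mse(pred, target, mask):
    # Squared error || pred - M (x) target ||^2, counted only where the
    # prediction is visible (mask == 1).
    return float(np.sum((pred - mask[..., None] * target) ** 2))
```

Because a masked-out pixel is zero in the prediction as well, the corresponding term vanishes and the boundary holes do not penalize the network.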
The error between GS and RS rectified image can be written as $$L_{\mbox{rs\_rec\_MSE}}=||I_{\mbox{rs\_rec}}-M_{\mbox{rs\_rec}}\otimes I_{\mbox{gs}}||^{2}_{2}$$ where $\otimes$ refers to point-wise multiplication. The second loss that we devise is based on the error between the given RS image and the GS image distorted by estimated motion parameters. To account for holes in the boundary, we again define mask $M_{\mbox{rs\_reg}}(i,j)$ such that $$M_{\mbox{rs\_reg}}(i,j)=\begin{cases}0&\mbox{if}\sum\limits_{k=1}^{3}I_{\mbox{rs\_reg}}(i,j,k)=0\\ 1&\,\textrm{otherwise}\end{cases}$$ where $I_{\mbox{rs\_reg}}$ is the image obtained by applying estimated motion parameters on the GS image. The error between the RS image and the RS regenerated image is given by $$L_{\mbox{rs\_reg\_MSE}}=||I_{\mbox{rs\_reg}}-M_{\mbox{rs\_reg}}\otimes I_{\mbox{rs}}||^{2}_{2}$$ Since edges play a very important role in RS rectification, we also compare Sobel edges of RS rectified and RS regenerated images with ground truth GS and input RS images, respectively. Let the Sobel operation be represented as $E(.)$. Then the edge losses for regeneration phase and rectification phase can be formulated as $$L_{\mbox{rs\_rec\_edge}}=||E(I_{\mbox{rs\_rec}})-M_{\mbox{rs\_rec}}\otimes E(I_{\mbox{gs}})||^{2}_{2}$$ $$L_{\mbox{rs\_reg\_edge}}=||E(I_{\mbox{rs\_reg}})-M_{\mbox{rs\_reg}}\otimes E(I_{\mbox{rs}})||^{2}_{2}$$ The overall loss function (please refer to Appendix for back propagation equations w.r.t different loss functions) of our network is a combination of the afore-mentioned loss functions and is given by $$L_{total}=\lambda_{1}L_{\mbox{rs\_rec\_MSE}}+\lambda_{2}L_{\mbox{rs\_reg\_MSE}}+\lambda_{3}L_{\mbox{rs\_rec\_edge}}+\lambda_{4}L_{\mbox{rs\_reg\_edge}}$$ (2) 4 Experiments This section is arranged as follows: (i) dataset generation, (ii) implementation details, (iii) competing methods, (iv) quantitative analysis, and (v) visual results. 
4.1 Dataset generation To train our network, we used images of buildings, and tested on buildings as well as real-world RS images (having at least a few real-world straight lines). In this section, we explain the synthesis of camera motion and the generation of RS images for training. Since fully connected (FC) layers are present in our network, we used images of constant size (256x256) for both training and testing. Camera motion and Training dataset: Because it is difficult to capture real GS-RS pairs, following [18] we synthesized camera motion using a second-degree polynomial for generating the RS images. We used the Buildings dataset from [28, 22] with a total of 440 clean images cropped to a size of 256x256. Out of those, we randomly chose 400 images, and each image was distorted using 200 synthesized camera motions, resulting in 80K images for training. The remaining 40 images are used in the test dataset. In order to ensure that there are no missing parts at the boundaries of the generated RS images, we increased the size of each image to 356x356, applied RS distortions, and then cropped them back to 256x256. 4.2 Implementation details and competing methods To stabilize training and mitigate the ill-conditioning during the initial steps, we trained our network to regress for only the ground truth motion parameters using 50 images from the training dataset for 5 epochs. Then the network is trained using Eq. 2 as our loss function with the full-size training dataset. We used TensorFlow for both training and testing with the following options: ADAM optimizer to minimize the loss function, momentum values $\beta_{1}$ = 0.9 and $\beta_{2}$ = 0.99, and a learning rate of 0.001. The weights of the different cost functions are set as $\lambda_{1}$ = $\lambda_{2}$ = 1 and $\lambda_{3}$ = $\lambda_{4}$ = 0.5. We compared our method with state-of-the-art single-image RS rectification methods [18, 9, 17, 19]. 
Note that the non-learning-based methods [19, 9, 17] require intrinsic camera parameters while ours and [18] do not. We gave our set of RS images for comparison to the respective authors and obtained the results from them. 4.3 Visual comparisons We give results on the test dataset and on RS images captured using a hand-held mobile camera. Fig. 3 depicts qualitative comparisons with competing methods. The RS image in the first row is an indoor scene image ([23]) affected by a real-life camera trajectory ([8]). Because the Manhattan world assumption is not satisfied and due to the presence of influential outliers in the background (cloth), [9, 17, 19] are not able to rectify the image properly. Our rectification result is better than that of [18] since the solution space of estimated motion parameters is not skewed, unlike [18]. The RS images in the second and third rows (taken from [9]), also part of the test dataset, are affected by complex real-life camera motion, which is evident from the RS distortions. Since strong outliers are present in these images in the form of branches, [9, 17, 19], which depend on the detection of curves for estimating camera motion, fail to rectify the images. Due to restrictions on the estimated camera motion, [18] is unable to rectify the images as well as ours. A refined and complete version of this work appeared in JOSA 2020. References [1] Hao Fu, Liheng Bian, Xianbin Cao, and Jun Zhang. Hyperspectral imaging from a raw mosaic image with end-to-end learning. Optics Express, 28(1):314–324, 2020. [2] Matthias Grundmann, Vivek Kwatra, Daniel Castro, and Irfan Essa. Calibration-free rolling shutter removal. In 2012 IEEE International Conference on Computational Photography (ICCP), pages 1–8. IEEE, 2012. [3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016. 
[4] Sung Hee Park and Marc Levoy. Gyro-based multi-image deconvolution for removing handshake blur. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3366–3373, 2014. [5] Chao Jia and Brian L Evans. Probabilistic 3-d motion estimation for rolling shutter video rectification from visual and inertial measurements. In 2012 IEEE 14th International Workshop on Multimedia Signal Processing (MMSP), pages 203–208. IEEE, 2012. [6] Jiwon Kim, Jung Kwon Lee, and Kyoung Mu Lee. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1646–1654, 2016. [7] Young-Geun Kim, Venkata Ravisankar Jayanthi, and In-So Kweon. System-on-chip solution of video stabilization for cmos image sensors in hand-held devices. IEEE transactions on circuits and systems for video technology, 21(10):1401–1414, 2011. [8] Rolf Köhler, Michael Hirsch, Betty Mohler, Bernhard Schölkopf, and Stefan Harmeling. Recording and playback of camera shake: Benchmarking blind deconvolution with a real-world database. In European conference on computer vision, pages 27–40. Springer, 2012. [9] Yizhen Lao and Omar Ait-Aider. A robust method for strong rolling shutter effects correction using lines with automatic feature selection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4795–4803, 2018. [10] Chia-Kai Liang, Li-Wen Chang, and Homer H Chen. Analysis and compensation of rolling shutter effect. IEEE Transactions on Image Processing, 17(8):1323–1330, 2008. [11] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pages 136–144, 2017. [12] Michael Mathieu, Camille Couprie, and Yann LeCun. Deep multi-scale video prediction beyond mean square error. 
arXiv preprint arXiv:1511.05440, 2015. [13] Mahesh MR Mohan, AN Rajagopalan, and Gunasekaran Seetharaman. Going unconstrained with rolling shutter deblurring. In Proceedings of the IEEE International Conference on Computer Vision, pages 4010–4018, 2017. [14] Alonso Patron-Perez, Steven Lovegrove, and Gabe Sibley. A spline-based trajectory representation for sensor fusion and rolling shutter cameras. International Journal of Computer Vision, 113(3):208–219, 2015. [15] Vijay Rengarajan Angarai Pichaikuppan, Rajagopalan Ambasamudram Narayanan, and Aravind Rangarajan. Change detection in the presence of motion blur and rolling shutter effect. In European Conference on Computer Vision, pages 123–137. Springer, 2014. [16] Abhijith Punnappurath, Vijay Rengarajan, and AN Rajagopalan. Rolling shutter super-resolution. In Proceedings of the IEEE International Conference on Computer Vision, pages 558–566, 2015. [17] Pulak Purkait, Christopher Zach, and Ales Leonardis. Rolling shutter correction in manhattan world. In Proceedings of the IEEE International Conference on Computer Vision, pages 882–890, 2017. [18] Vijay Rengarajan, Yogesh Balaji, and AN Rajagopalan. Unrolling the shutter: Cnn to correct motion distortions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2291–2299, 2017. [19] Vijay Rengarajan, Ambasamudram N Rajagopalan, and Rangarajan Aravind. From bows to arrows: Rolling shutter rectification of urban scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2773–2781, 2016. [20] Vijay Rengarajan, Ambasamudram Narayanan Rajagopalan, Rangarajan Aravind, and Guna Seetharaman. Image registration and change detection under rolling shutter motion blur. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(10):1959–1972, 2017. [21] Erik Ringaby and Per-Erik Forssén. Efficient video rectification and stabilisation for cell-phones. 
International Journal of Computer Vision, 96(3):335–352, 2012. [22] Hao Shao, Tomáš Svoboda, and Luc Van Gool. Zubud-zurich buildings database for image based recognition. Computer Vision Lab, Swiss Federal Institute of Technology, Switzerland, Tech. Rep, 260(20):6–8, 2003. [23] Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. Indoor segmentation and support inference from rgbd images. In European Conference on Computer Vision, pages 746–760. Springer, 2012. [24] Xin Tao, Hongyun Gao, Xiaoyong Shen, Jue Wang, and Jiaya Jia. Scale-recurrent network for deep image deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 8174–8182, 2018. [25] Subeesh Vasu, Mahesh MR Mohan, and AN Rajagopalan. Occlusion-aware rolling shutter rectification of 3d scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 636–645, 2018. [26] Subeesh Vasu, Ambasamudram Narayanan Rajagopalan, and Guna Seetharaman. Camera shutter-independent registration and rectification. IEEE Transactions on Image Processing, 27(4):1901–1913, 2018. [27] Fei Wang, Hao Wang, Haichao Wang, Guowei Li, and Guohai Situ. Learning from simulation: An end-to-end deep-learning approach for computational ghost imaging. Optics express, 27(18):25560–25572, 2019. [28] Jianxiong Xiao, James Hays, Krista A Ehinger, Aude Oliva, and Antonio Torralba. Sun database: Large-scale scene recognition from abbey to zoo. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 3485–3492. IEEE, 2010. [29] Xiaoqing Yin, Xinchao Wang, Jun Yu, Maojun Zhang, Pascal Fua, and Dacheng Tao. Fisheyerecnet: A multi-context collaborative deep network for fisheye image rectification. In Proceedings of the European Conference on Computer Vision (ECCV), pages 469–484, 2018. [30] Bingbing Zhuang, Loong-Fah Cheong, and Gim Hee Lee. Rolling-shutter-aware differential sfm and image rectification. 
In Proceedings of the IEEE International Conference on Computer Vision, pages 948–956, 2017.
Characterizing the structural diversity of complex networks across domains Kansuke Ikehara [email protected] Department of Computer Science, University of Colorado, Boulder, CO, USA    Aaron Clauset [email protected] Department of Computer Science, University of Colorado, Boulder, CO, USA BioFrontiers Institute, University of Colorado, Boulder, CO, USA Santa Fe Institute, Santa Fe, NM, USA Abstract The structure of complex networks has been of interest in many scientific and engineering disciplines over the decades. A number of studies in the field have focused on finding properties common to different kinds of networks, such as heavy-tailed degree distributions, small-worldness and modular structure, and have tried to establish a theory of structural universality in complex networks. However, there has been no comprehensive study of network structure across a diverse set of domains that explains the structural diversity we observe in real-world networks. In this paper, we study 986 real-world networks of diverse domains, ranging from ecological food webs to online social networks, along with 575 networks generated from four popular network models. Our study utilizes a number of machine learning techniques, such as random forests and confusion matrices, to show the relationships among network domains in terms of network structure. Our results indicate that there are some partitions of network categories in which networks are hard to distinguish based purely on network structure. We have found that these partitions of network categories tend to have similar underlying functions, constraints and/or generative mechanisms even though networks in the same partition have different origins, e.g., biological processes, human engineering, etc. This suggests that the origin of a network, whether biological, technological or social, may not necessarily be a decisive factor in the formation of similar network structure. 
Our findings shed light on the possible direction along which we could uncover the hidden principles behind the structural diversity of complex networks. (This document was first published as K. Ikehara, “The Structure of Complex Networks across Domains,” MS Thesis, University of Colorado Boulder, 2016.) Introduction Almost every scientific and engineering discipline deals with data that comes from some sort of experimental observation. Traditionally, such data are expressed as numbers, which may represent temperature, velocity, or voltage, and methodologies for analyzing such data have been established over hundreds of years. A graph or network, a kind of data representation describing the relations among entities such as persons, molecules, animals, logic gates, etc., has been used in recent decades as a new way to approach, interpret and solve real-world problems. Social science, for instance, has witnessed the power of network analysis with the recent emergence of online social network services such as Facebook, Twitter, LinkedIn, etc. Kleinberg (2008). Social phenomena previously invisible at the scale of off-line social networks, which usually consist of tens to hundreds of nodes at most, have been observed in large-scale online social networks, thanks to the abundance of online data and faster, more efficient computational resources along with advances in graph algorithms. The biological sciences use networks as a tool to dissect biological, chemical and ecological processes in order to gain insight into the functionality of such processes. Examples include the brain, which consists of networks of neurons Bullmore and Sporns (2009); the complex metabolic reactions within human cells and the relationships between malfunctioning metabolic processes and human diseases Lee et al. (2008); and the effects on biodiversity that might result from perturbations in an ecological network, such as a food web or mutualistic network Ings et al. (2009a). 
Engineering systems such as the Internet Faloutsos et al. (1999), power grids Hines et al. (2010), water distribution networks Yazdani and Jeffrey (2011), transportation networks Roth et al. (2012), etc. have also been investigated using network analysis tools in order to construct more efficient and robust systems. Common patterns across domains The term complex network captures an essential difference from the ordinary graphs with regular structure that have long been studied in mathematics: the structure of real-world networks almost always exhibits an unusual pattern that greatly deviates from regular structure, and this seemingly irregular and complex structure can often be a clue to the underlying mechanism of the process of interest. For example, the unusual density of triangles in social networks implies an underlying mechanism in our society: our social circles tend to be formed by local interactions, such as introducing one friend to another friend in your circle, which results in a new connection between your friends, making a triangle in the circle. There have been a number of studies investigating the structure of complex networks in diverse fields and the connections between that structure and the underlying mechanism of the process Strogatz (2001); Newman (2003); Albert and Barabási (2002); Boccaletti et al. (2006). In the following sections we discuss theories of the structure of complex networks in detail. The structure of a network can be characterized in a number of ways, but there are three structural properties that are found to be common in many types of networks: a skewed degree distribution, small-worldness and community structure. Degree distributions. The degree of a node is a measure of how many edges are connected to a particular node, and the probability distribution $p(k)$ over all nodes essentially describes the “unfairness” in the network. 
If all of the nodes in a network have exactly the same number of connections, the fairest case, then $p(k)$ behaves like a Kronecker delta, taking the value $1$ only at one specific degree $k$. Or, if the network is slightly less fair, one may observe a narrow Gaussian-like distribution around an average degree. In real-world networks, however, this almost never happens. What is observed most of the time instead is a very skewed degree distribution in which most of the nodes have only a few connections incident upon them, and very few nodes have a disproportionately large number of connections. The power-law distribution is one candidate for describing the observed phenomenon, and a network having this distribution is often called a scale-free network Barabasi and Albert (1999). Networks from diverse fields, such as the Internet, metabolic reactions, the World Wide Web, etc., have been claimed to be scale-free. However, one needs to be careful when validating whether a network of interest indeed has a power-law degree distribution, and it appears that a number of such claims require more statistically rigorous treatment, such as the one proposed by Clauset et al. Clauset et al. (2009). The disproportionality of the skewed degree distribution indicates the existence of hubs, cores or elites in a network. Such important nodes can play a critical role in a network functioning properly, and their failure may result in a catastrophe Albert et al. (2000).

Geodesic paths. The next common structural characteristic of networks is small-worldness, first conceptually introduced by Milgram’s experiment Milgram (1967) and mathematically formulated in a paper by Watts and Strogatz Watts and Strogatz (1998).
Watts and Strogatz proposed a random network model that, depending on a parameter setting, produces a network with the following properties: (i) a high density of triangles, implying that if three nodes are connected, it is likely that those three nodes actually compose a triangle; (ii) a low average distance between pairs of nodes, which indicates that from any node it takes just a few steps, on average, to reach any other node in the network. These properties are achieved by the existence of “long-range” connections bridging together pairs of nodes that are topologically far away from each other. Small-worldness thus holds seemingly contradictory properties together: a large degree of local-ness, exhibited as a large clustering coefficient, and a large degree of global-ness, expressed as a small mean geodesic distance. These properties make small-world networks very efficient systems for information flow Latora and Marchiori (2001) and for synchronizing coupled oscillators in the network Barahona and Pecora (2002). Note that small-worldness itself is orthogonal to the skewed degree distribution, meaning that so-called small-world networks can be constructed without a heavy-tailed degree distribution. The definition of small-world network presented above is, however, not the one generally used. Most researchers today regard the small-world property as simply a low average pairwise distance between nodes that grows approximately as $O(\log n)$.

Community structure. Many real-world networks contain modules or communities within which nodes are densely connected but between which edges are sparse. This is called community structure, and there have been a number of studies investigating communities and inventing new algorithms for detecting them in networks Girvan and Newman (2002); Newman and Girvan (2004); Newman (2012).
The communities found in a network by such algorithms often correspond to functional units of the network: a unit of chemical reactions producing a vital chemical product in a metabolic network, a group of densely connected neurons in charge of a cognitive function such as language or visual processing, or a group of scientists working together in the same field.

Stylized structural “facts”

Properties such as the skewed degree distribution, small-worldness and community structure are found in networks of various kinds, but they alone cannot explain the diversity of networks in terms of structure. Many researchers believe that some classes of networks have a set of distinguishing structural features that makes the specific network class “stand out” among others. Here, we show some examples of network classes that have distinguishing structural features, including social networks, brain networks and subway networks.

Social networks. Social networks, regardless of whether they are offline, as we encounter in our daily lives, or online, like Facebook, have long been known for two distinguishing structural features: clustering and positive degree assortativity Newman (2002); Newman and Park (2003); Mislove et al. (2007). The large degree of clustering indicates the high probability of one’s friends being friends with each other, and positive degree assortativity shows a tendency for high-degree nodes (nodes having many connections) to connect to other high-degree nodes while low-degree nodes connect to other low-degree nodes. Real-world networks other than social networks generally exhibit low clustering and negative degree assortativity, with values largely in accordance with those of their randomized counterparts, or null models, having the same degree distribution. Newman has argued Newman and Park (2003) that negative degree assortativity is the natural state for most networks.
Therefore, in order for a network to have positive degree assortativity, it needs a specific structure that favors the assortativity. The community structure of social networks, as argued in that paper, is the key to why they exhibit these properties, namely clustering and positive degree assortativity. People, or nodes, in social networks usually belong to some sort of groups or communities, and people in the same community are likely to know each other. This community membership yields the high clustering in social networks. For positive degree assortativity, the size of the communities may play an essential role: individuals in a small group can only have low degree, whereas individuals in a large group can, potentially, have much larger degree. This approximate correspondence of degrees within groups is essentially degree assortativity itself.

Brain networks. Brain networks, often referred to as the connectome, have been a focus of neuroscience in the last decade in pursuit of the long-standing scientific question: how does a brain work? The field that extensively studies brain networks using tools from mathematics, computer science, etc. is called connectomics, and one of the recent research topics in this field is how the topology of a brain network affects the brain’s function van den Heuvel et al. (2016). A number of studies of the topological features of brain networks show that: (i) they are highly modular; (ii) they have connections that are anatomically long, and thus come with a high wiring cost, yet make the brain network topologically small, like a small-world network; and (iii) they have a core of highly connected nodes, called a rich club in some of the literature, that connects modules across the network. The modular structure of a brain network is found to correlate with discrete cognitive functions of the brain, such as processing visual signals from the eyes and audio signals from the ears.
The existence of topologically short yet anatomically long connections gives the brain an efficient way to process the information flowing on the network. Furthermore, the rich club of a brain network plays an important role in integrating information that is produced and processed in different parts of the brain, and this integration of information enables animals, including human beings, to perform complex tasks Bullmore and Sporns (2012); Crossley et al. (2013).

Transportation networks. The subway networks of major cities around the world share an interesting feature unique to them, namely core-branch structure Roth et al. (2012). They have a ring-shaped set of stations with dense connections among them, referred to as the “core” of the network. In the core, stations are relatively densely connected to each other, enabling residents of a city to move around quickly. From the ring of the core, branches radiate outward, connecting stations far from the center of the city. This structural feature is the result of a balance between the efficiency of the flow of people and the cost of constructing rail lines between stations Gastner and Newman (2006a, b).

Distinguishing different “classes” of networks

In the last decades, researchers from a wide range of fields including biology, social science and physics have been interested in whether there is any structural difference between different classes of networks, whether there is a set of structural features unique to a specific network class, and whether there are “families” of networks in which networks of different classes share the same structural patterns. These questions are usually converted into a problem of comparing and classifying networks according to some criteria, and these criteria span from simple features, such as the clustering coefficient, to more complicated ones, such as network motifs, which will be explained later.
One of the earliest works in network comparison and classification was done by Faust and Skvoretz Faust and Skvoretz (2002), in which they compared various kinds of offline social networks, such as grooming relationships among monkeys, co-sponsorship in the U.S. Senate of 1973-1974, and so on. Their comparison of networks is based on a statistical model that incorporates parameters each describing a characteristic structural feature, for example the frequency of a cyclic triangle (here the networks are directed, meaning that edges have directionality). The statistical model essentially predicts the existence of an edge $(i,j)$ in a network based on the parameters, and the authors used such a model “trained” on one network to predict edges in a different network. Their assumption is that if two networks are similar, a model trained on one of the pair should predict edge existence well in the other. With this assumption they define a Euclidean distance as a function of a summation over all edge predictions, construct a distance matrix, and project it onto a two-dimensional space using a technique called Correspondence Analysis, which is similar to Principal Component Analysis. They found that what makes networks structurally similar is the nature of the edges, namely the kind of relation they represent. For example, networks describing agonistic relations, regardless of species, exhibit similar network structure. Although this study was pioneering in graph comparison and classification, it focused only on offline social networks, which are themselves a very narrow field of study. The breakthrough in graph comparison and classification came with a series of papers by Milo et al. that introduced the ideas of network motifs and “super-families” of networks Milo et al. (2002, 2004).
Network motifs are essentially sub-graph patterns that occur in a network more frequently than in randomized networks having the same degree distribution Milo et al. (2002). They showed that each category of network, such as gene regulation (transcription), food webs, electronic circuits, etc., has distinct network motifs. In many cases the distinction of patterns indicates a functional difference between those networks. Furthermore, they revealed the existence of super-families of networks, that is, groups of network categories having highly convergent motif profiles. These studies are, as far as we know, the first to investigate a diverse set of networks from different domains and find the underlying similarities among network categories. Nevertheless, this work is far from a general theory, as it investigated only 35 networks from a few categories. One of the most recent studies in graph comparison and classification investigated 746 networks and constructed a taxonomy of networks Onnela et al. (2012). Onnela et al. used a technique called mesoscopic response functions (MRFs) that essentially describes the change of a functional value related to the community structure of a network with respect to a parameter $\xi\in[0,1]$. Each network has its own MRF, and the authors defined the distance between two networks as the area of the difference between their MRFs. Their framework successfully identified groups of networks that are similar in terms of community structure. The drawback of the study, however, is that the metric they used for clustering the networks, namely modularity, is implicitly correlated with network size, which by itself is a very strong distinguishing feature for the classification of networks.

Quantifying the structural diversity of networks

As we have seen, a number of studies have been conducted in order to find groups or super-families of networks that have similar structure.
However, only a few of them have investigated the fundamental concepts that create the structural differences between networks, and none have done so with a comprehensive set of complex networks. With the abundance of available network data of various kinds and scales, and with techniques from the field of machine learning, we can tackle a fundamental yet unexplored question: What structural features distinguish different types of networks? Or, more generally: What drives structural diversity among all complex networks? There have been a number of studies trying to discover or formulate ideas that explain the structural universality of networks, including the skewed degree distribution, small-world networks, community structure and so on. There is, however, no general theory that explains the structural diversity of complex networks across domains and fields. The aim of this paper is to establish a theory that explains the structural diversity of complex networks. Below are the three questions we have formulated as research objectives: 1. What aspects of network structure make a specific category of networks different from others? This question is, to some extent, an extension of the studies investigating social networks’ distinguishing structural features. For example, what kinds of network structure distinguish, say, metabolic networks from other kinds? As far as we know, few previous studies have extensively investigated the distinctive characteristics of specific kinds of networks, with the exception of social networks. 2. Are there any sets of network categories that are inherently indistinguishable from each other based on network structure? This question asks whether there is any structural similarity between different kinds of networks.
We, however, use the word “indistinguishable” instead of “similar” since we observe the commonality from the confusion matrix of a classifier, where a misclassified instance is considered indistinguishable from the class it was classified as: it is so similar to instances of the wrong class that the algorithm fails to label it correctly. 3. If two networks of different categories are indistinguishable by network structure, are the mechanisms of their underlying processes the same? And vice versa. This question is essentially about elucidating the meta-structure among network domains. What if two very distinct domains of networks, say biological and technological ones, exhibit very similar network structure and a classifier misclassifies them many times? Is this because their underlying network generative processes, or the processes running on the networks themselves, are the same? Answering this question helps us understand the mechanisms behind the formation of a specific network structure.

Our contributions. The contribution of answering these questions comes in two ways. First, it gives us a general conceptual framework within which networks can be studied across domains. Previous studies have only looked at a single category, or have treated multiple categories as one, ignoring the relationships between categories. By studying networks across domains, one can seek general theories of networks or test a hypothesis across domains in a more plausible way. For instance, one could test the validity of a network model across all domains and find out which domains the model explains well. Second, it gives us a knowledge base upon which various network-related algorithms can be constructed and tuned properly. Many practical graph algorithms make no assumptions about domain-specific network structure.
It may be possible, however, to construct a new algorithm that runs faster and performs more efficiently on a specific kind of network by taking such domain-specific knowledge into account. For example, if there exists any network structural property unique to recommendation networks, it may be exploited to construct or fine-tune a recommendation engine. As the size of some real-world networks has grown to an unprecedented scale, domain-specific knowledge of network structure may be a key to analyzing such large networks in a faster and more efficient way. In this paper, we study 986 real-world networks of various kinds, ranging from ecological food webs to online social networks to digital circuits, along with 575 synthetic networks generated from four different models. We extract network statistics as features of a network, construct a high-dimensional feature space where each axis corresponds to one of the network features, and map each network onto the feature space. We then train a machine learning algorithm called Random Forest on the training set of network data in order to learn the function that relates structural features of networks to class labels, namely network domains and sub-domains. As the category distribution is skewed, meaning that some categories of networks have a larger number of instances than the minority categories, we try several sampling methods and show the effect of each. We construct confusion matrices based on the classification results and proceed to analyze the misclassifications, with which we can answer the research questions explained previously. We then conclude with a discussion based on several hypotheses and experimental results, along with some ideas for future work.

Formal Definitions

Formally, a network or graph is a mathematical object consisting of sets of nodes (vertices) and edges (arcs), which can be written as $G=(V,E)$.
In many cases, an adjacency matrix is used as a representation of a network, where each element of the matrix $A_{ij}$ takes a binary value: $1$ for the presence of an edge between nodes $i$ and $j$, and $0$ if there is no edge between them. A network having binary values for edges is called an unweighted network. If the matrix is symmetric, namely $A_{ij}=A_{ji},\forall i,j\in V$, the matrix represents an undirected network, in which edges have no directionality at all. If the matrix is asymmetric, on the other hand, it represents a directed network, where an element $A_{ij}$ implies an edge originating from node $i$ and pointing to node $j$. If the edges of a network are weighted, meaning that edges can have real values, then the network is called weighted. Related to weighted networks is the multigraph, in which any pair of nodes is allowed to have multiple edges connecting them. One should be careful with the adjacency-matrix representation of weighted networks and multigraphs, as the interpretation of the value of $A_{ij}$ depends on the context in which one treats the graph. For example, $A_{ij}=2$ may describe an edge between $i$ and $j$ weighted by $2$, or it may represent two edges running between $i$ and $j$. If the diagonal elements of a matrix $A$ are non-zero, the network has self-loops, which indicates that there are edges originating from and pointing to the exact same node. In many studies such self-loops are simply ignored for the sake of simplicity. The last, but not least, kind of network is the bipartite network, in which there are two groups of nodes and edges exist only between nodes of different groups. An example of a bipartite network is a network of corporate board membership, where there are groups of companies and board members and edges connect companies to board members.
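To make the adjacency-matrix representation concrete, the following is a minimal Python sketch (the function names are ours, for illustration only) that builds a binary adjacency matrix from an edge list and tests the symmetry condition $A_{ij}=A_{ji}$ that characterizes undirected networks:

```python
def adjacency_matrix(n, edges, directed=False):
    """Build an n x n binary adjacency matrix A from an edge list."""
    A = [[0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] = 1
        if not directed:
            A[j][i] = 1  # undirected: A_ij = A_ji
    return A

def is_undirected(A):
    """A network is undirected iff its adjacency matrix is symmetric."""
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

A = adjacency_matrix(3, [(0, 1), (1, 2)])
print(is_undirected(A))  # True
```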
Oftentimes, a bipartite network is converted into a network in which there is only one kind of node, for instance only board members in the case of the corporate board membership network, by an operation called one-mode projection. In the one-mode projected network of board members, an edge between persons $i$ and $j$ represents how often they sit on the same company board. As such, there are many kinds of networks, each describing relations differently. In our study, for the sake of simplicity and comparability, we focus on simple graphs, in which edges have no directionality or weight and there are no self-loops. A set of procedures for converting non-simple graphs into simple graphs is defined as follows: 1. If a network is directed, then discard the directionality. 2. If a network is weighted, then convert any non-zero weight to $1$ (and $0$ otherwise). 3. If a network is a multigraph, namely having multiple edges between a pair of nodes, then merge them into one edge. 4. If a network contains self-loops, then discard them.

Structural features of a network

There are many ways to characterize a network quantitatively using its structural features Newman (2010); da F. Costa et al. (2007). Some network features, however, are implicitly correlated with the size of the network, which is itself a very strong feature: the number of nodes in web graphs is usually on the order of $10^{6}$ or more, whereas ecological food webs usually contain fewer than 100 nodes. Examples of such features include distance-related features such as the average path length and network diameter, which are believed to grow approximately as $O(\log n)$, where $n$ is the number of nodes in the network. Another structural feature correlated with the size of the network is modularity Girvan and Newman (2002).
Simply put, modularity quantifies the degree to which an observed network can be partitioned into modules within which edges are dense but between which they are sparse. The derivation of modularity is essentially based on comparing the original network with a random network having the same degree distribution, called the configuration model, and the value of modularity is in practice the degree of difference between the original graph and its randomized counterpart. If the size of a network is large, the random network becomes so different that the modularity value becomes large as well Fortunato and Barthélemy (2007); Good et al. (2010). The feature set we use in this study is scale-invariant, meaning that the size of the network does not affect the value of a feature. This set of features allows us to compare networks without the notion of network size. In the following sections we describe the important structural features that are relevant to this study.

Degree. The degree of a node in a network is the number of edges attached to the node. For a node $i$ in an unweighted network, the degree $k_{i}$ can be written mathematically as: $$k_{i}=\sum_{j=1}^{n}A_{ij}.$$ (1)

Clustering coefficient. The clustering coefficient, which describes the degree of transitivity in a network, is one of the most widely used metrics in network analysis, especially in the context of social networks. Transitivity in the context of networks means that if nodes $a$ and $b$ are connected, as well as nodes $b$ and $c$, then nodes $a$ and $c$ are connected. Mathematically, the clustering coefficient of a network is given by the following equation: $$C=\frac{\text{number of closed paths of length two}}{\text{number of paths of length two}}.$$ (2)

Degree assortativity. Assortativity in a network indicates a tendency for nodes to be connected to other nodes with similar node attributes.
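Eqs. (1) and (2) translate directly into code. The following pure-Python sketch (the helper names are ours) counts closed paths of length two via the products $A_{ij}A_{jk}A_{ki}$ and paths of length two via $\sum_i k_i(k_i-1)$:

```python
def degree(A, i):
    """k_i = sum_j A_ij  (Eq. 1)."""
    return sum(A[i])

def clustering_coefficient(A):
    """Global clustering coefficient (Eq. 2): closed paths of length two
    (ordered triples that form triangles) over all paths of length two."""
    n = len(A)
    closed = sum(A[i][j] * A[j][k] * A[k][i]
                 for i in range(n) for j in range(n) for k in range(n))
    paths = sum(degree(A, i) * (degree(A, i) - 1) for i in range(n))
    return closed / paths if paths else 0.0

# A triangle among nodes 0, 1, 2 plus a pendant node 3 attached to node 0.
A = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 0],
     [1, 0, 0, 0]]
print(clustering_coefficient(A))  # 0.6
```

Here the triangle contributes $6$ closed ordered paths, while $\sum_i k_i(k_i-1)=10$, giving $C=0.6$.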
In a social network, for instance, node attributes that could be the basis for assortativity include language, age, income, alma mater, etc. On the other hand, disassortativity is the tendency of nodes to be connected to other nodes having a different node attribute. An example of a disassortative network would be a social network of heterosexual relationships among people. Degree assortativity is a form of assortativity in which nodes with similar degree values tend to be connected together. Therefore, in a network exhibiting high degree assortativity, there is a core of highly connected high-degree nodes and a periphery of low-degree nodes sparsely connected to other low-degree nodes. Here, we first define the covariance of $x_{i}$ and $x_{j}$ for the vertices at the ends of all edges, which leads us to a general assortativity coefficient: $$\text{cov}(x_{i},x_{j})=\frac{1}{2m}\sum_{ij}\left(A_{ij}-\frac{k_{i}k_{j}}{2m}\right)x_{i}x_{j},$$ (3) where $x_{i}$, $x_{j}$ are the attributes of nodes $i$ and $j$. This covariance will be positive if $x_{i}$ and $x_{j}$ have, on average, similar values, and negative if $x_{i}$ and $x_{j}$ tend to change in different directions. We normalize cov($x_{i}$,$x_{j}$) by another quantity, which represents perfect assortativity, so that it takes a value $r\in[-1,1]$. Perfect matching occurs when $x_{i}=x_{j}$ for all edges, and cov($x_{i}$,$x_{i}$) becomes: $$\begin{split}\text{cov}(x_{i},x_{i})&=\frac{1}{2m}\sum_{ij}\left(A_{ij}x_{i}^{2}-\frac{k_{i}k_{j}}{2m}x_{i}x_{j}\right)\\ &=\frac{1}{2m}\sum_{ij}\left(k_{i}\delta_{ij}-\frac{k_{i}k_{j}}{2m}\right)x_{i}x_{j},\end{split}$$ (4) where $\delta_{ij}$ is the Kronecker delta.
Thus, the normalized covariance becomes: $$r=\frac{\text{cov}(x_{i},x_{j})}{\text{cov}(x_{i},x_{i})}=\cfrac{\sum_{ij}\left(A_{ij}-\cfrac{k_{i}k_{j}}{2m}\right)x_{i}x_{j}}{\sum_{ij}\left(k_{i}\delta_{ij}-\cfrac{k_{i}k_{j}}{2m}\right)x_{i}x_{j}}.$$ (5) The degree assortativity coefficient is easily obtained by substituting $x_{i}$ and $x_{j}$ with the degrees of the respective vertices, thus: $$r=\cfrac{\sum_{ij}\left(A_{ij}-\cfrac{k_{i}k_{j}}{2m}\right)k_{i}k_{j}}{\sum_{ij}\left(k_{i}\delta_{ij}-\cfrac{k_{i}k_{j}}{2m}\right)k_{i}k_{j}}.$$ (6)

Network motifs. The idea of network motifs was first introduced by Milo et al. Milo et al. (2002). A network motif is a sub-graph that appears in a network with statistically significant frequency compared to random networks having the same degree distribution, namely the configuration model. There are a number of studies using network motifs of directed networks, where edges have directions, especially in the biological sciences Alon (2007); Sporns and Kotter (2004); Orr et al. (2002). Biologists have been particularly interested in motifs of networks such as gene regulatory networks, since they are thought to correspond to the functional building blocks of biological systems and may help scientists understand the underlying principles of biological complex systems. As opposed to the studies using directed network motifs, in this study we only use network motifs of undirected networks, which produce fewer motif variations but can be applied to any network regardless of edge directionality, which is crucial for our study. Fig. 1 shows the complete list of $k=4$ undirected connected subgraphs used in this study. In order to quantify network motifs, we first count the occurrences of each sub-graph in the original network, then repeat the same process on the configuration models.
After counting occurrences of sub-graphs in both the original and multiple random networks, we proceed to calculate a z-score for each sub-graph $i$ as follows: $$Z_{i}=\cfrac{N_{i}^{\text{original}}-\langle N_{i}^{\text{random}}\rangle}{\sigma_{i}^{\text{random}}},$$ (7) where $N_{i}^{\text{original}}$ is the number of occurrences of sub-graph $i$ in the original network, and $\langle N_{i}^{\text{random}}\rangle$ and $\sigma_{i}^{\text{random}}$ are the average and the standard deviation of the number of occurrences of sub-graph $i$ in an ensemble of random networks. It is usually convenient to normalize this z-score, as some networks exhibit very large values due to their size. The normalized z-score is called the significance profile and is defined as follows: $$SP_{i}=\cfrac{Z_{i}}{\sqrt{\sum_{j}Z_{j}^{2}}}.$$ (8) In this study, we treat the significance profile of each subgraph as a feature, giving six network-motif features in total.

Network data sets

Our real-world network data sets are drawn from the Colorado Index of Complex Networks (ICON) icon.colorado.edu, which is a large index of real-world network data sets available online. ICON is an index rather than a repository, and hence we used it to locate the original network data files associated with a large number of studies from a diverse set of scientific domains. Since the data format of real-world networks is not standardized, we converted all the data into a single format called the Graph Modeling Language (GML) Himsolt (1999). The format allows us to flexibly specify arbitrary node and edge attributes. We do not, however, use any node or edge attributes, including edge weight, or edge directionality, since not all networks have such properties and we wanted to analyze a diverse set of networks. Thus, we treat all networks used in this study as simple graphs as defined in the previous section.
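The four-step conversion to simple graphs defined in the Formal Definitions section can be sketched as follows (a minimal pure-Python version; the function name and toy edge list are ours):

```python
def to_simple_graph(edges):
    """Convert an edge list (possibly directed, weighted, multi-edged,
    with self-loops) into a simple undirected graph: drop directionality,
    binarize weights, merge parallel edges, and discard self-loops."""
    simple = set()
    for e in edges:
        i, j = e[0], e[1]              # any weight in e[2:] is ignored
        if i == j:
            continue                   # discard self-loops
        simple.add(frozenset((i, j)))  # undirected; duplicates merge
    return {tuple(sorted(pair)) for pair in simple}

# A weighted edge, a reversed duplicate, a self-loop and a parallel edge:
edges = [(0, 1, 2.5), (1, 0), (1, 1), (1, 2), (1, 2)]
print(sorted(to_simple_graph(edges)))  # [(0, 1), (1, 2)]
```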
We have used only a fraction of all networks available on ICON due to time constraints. We have also added some synthesized network data generated from four specific models, briefly explained in the following: 1. Erdős-Rényi random network model (ER Network) Erdős and Rényi (1960), where, given $n$ the number of nodes and $p$ the probability that a pair of nodes gets connected, one connects each pair of nodes in the network according to $p$. This model yields an average path length of $O(\log n)$ and a low clustering coefficient. 2. Watts-Strogatz model (Small World) Watts and Strogatz (1998), which produces a network having a high clustering coefficient and a low average path length of $O(\log n)$. The model starts with a grid network and then rewires some edges according to some probability $p$. The rewiring makes the network’s path length smaller while keeping the high clustering due to the grid structure. 3. Barabási-Albert model (Scale Free) Barabasi and Albert (1999), with which one grows a network over the course of time. Newly added nodes have a fixed number of edges attached to them, and these edges connect to existing nodes with probability proportional to the degree of the existing node. Therefore, nodes having many connections are more likely to receive more edges. Although this model was originally invented by Price in a paper in 1965 de Solla Price (1965), we refer to it as the BA model since it is more widely known by that name. 4. The Forest Fire network model (Forest Fire Network) Leskovec et al. (2007), which is a network generative model with the following procedure: (i) a newly added node $u$ attaches to (cites) some existing nodes, called ambassadors, chosen uniformly at random; (ii) for each newly cited node $v$, its incoming and outgoing neighbors are also cited by $u$, the new node; (iii) the same procedure is applied recursively to all of the newly cited nodes.
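As an illustration of the first of these models, a minimal $G(n,p)$ generator can be written in a few lines (the function name is ours; the expected number of edges is $p\,n(n-1)/2$):

```python
import random

def erdos_renyi(n, p, seed=None):
    """Erdős-Rényi G(n, p): connect each of the n(n-1)/2 node pairs
    independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = erdos_renyi(100, 0.1, seed=42)
print(len(edges))  # close to the expected 0.1 * 100 * 99 / 2 = 495
```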
We show all the details of the network sub-domains for our study in Table 2 (located at the end of this paper). The distributions of network domains and sub-domains, as shown in Figs. 2a and 2b, are very skewed, since instances of some network categories are hard to obtain due to the inherent difficulty of collecting the data or due to legal concerns, or hard to analyze due to their network size; this leads us to explore several sampling methods, which are explained in the following section. After converting into the GML format, we calculate the set of features explained in the previous section for each network. We have extensively used the Python library igraph Csardi and Nepusz (2006) for extracting features, including the clustering coefficient and degree assortativity, and for other miscellaneous operations on network data. For calculating network motifs, we used a parallel motif computing algorithm for undirected motifs developed by Ahmed et al. Ahmed et al. (2015). A number of computations involved in this study are parallelized using the command-line tool GNU Parallel Tange (2011). Methodological issues After having converted networks into vectors in the feature space, there are a number of possible ways to analyze the distribution of points and labels and possibly learn the concept that governs such a distribution in the feature space. In this study, we use a random forest classifier along with the confusion matrix as a way to learn the underlying concept that differentiates different classes of networks. As we have seen in the previous section, the distribution of class labels is obviously skewed, which leads us to use several sampling methods that are supposed to alleviate the problem. Here, we explain these methodologies in detail. Managing class imbalance Most machine learning algorithms perform well when the instances of multiple classes are evenly populated. However, once this class balance no longer holds, the algorithms perform poorly on minority classes.
This problem, called class imbalance, causes any machine learning algorithm that is naive to the data set to focus exclusively on the majority class, ignoring any minority classes. One of the most widely used approaches for mitigating the problem is sampling the data set of interest so that the distribution of classes becomes balanced. Although many sampling strategies have been proposed He and Garcia (2009), we primarily use three of them in this study: random over-sampling, random under-sampling and SMOTE Chawla et al. (2002). Here we establish the mathematical notation used in explaining the sampling methods. Let $S$ be a set of pairs of $\vec{x_{i}}$ and $y_{i}$, namely $S=\{(\vec{x_{i}},y_{i})\}$, for $i=1,...,n$, where $n$ is the number of data points, $\vec{x_{i}}\in X$ is an instance of a network in the $N$-dimensional feature space and $y_{i}\in Y=\{1,...,C\}$ is the class label associated with the instance $\vec{x_{i}}$. Random over-sampling. This method is one of the simplest strategies for mitigating the class imbalance problem. It over-samples any minority class to the extent that the number of instances in each class becomes even. Here we explain this sampling method in a mathematical sense. Let $S_{\rm{maj}}\subset S$ be the majority class, meaning the class having the largest number of instances, $S_{\rm{min}}^{j}\subset S$ the $j$th minority class where $j=1,...,C-1$ and $P$ the set $\{S_{\rm{maj}},S_{\rm{min}}^{1},...,S_{\rm{min}}^{C-1}\}$ such that $$S_{\rm{maj}}\cup S_{\rm{min}}^{1}\cup S_{\rm{min}}^{2}\cup\cdots\cup S_{\rm{min}}^{C-1}=S,$$ (9) and $$\forall S_{1},S_{2}\in P:\;S_{1}\neq S_{2}\Rightarrow S_{1}\cap S_{2}=\emptyset,$$ (10) where $\emptyset$ denotes the empty set.
Let $E_{\rm{min}}^{j}$ be a set of points sampled uniformly at random from the set $S_{\rm{min}}^{j}$ that satisfies the following equality: $$|S_{\rm{min}}^{j}|+|E_{\rm{min}}^{j}|=|S_{\rm{maj}}|.$$ (11) We then append the set $E_{\rm{min}}^{j}$ to the corresponding minority class set $S_{\rm{min}}^{j}$, namely $S_{\rm{min}}^{j}:=S_{\rm{min}}^{j}+E_{\rm{min}}^{j}$. The drawback of random over-sampling is potentially poor generalization due to the overfit of a classifier to the over-sampled instances. Random under-sampling. On the other hand, this method under-samples the majority class, essentially throwing out some data in order to make the “cloud” of data points sparser. This “throwing out of instances” implies an obvious drawback of this sampling method: it discards potentially important instances that compose the backbone of the majority classes, implying that the true shape of a majority class is no longer retained. Here, again, we explain this method using mathematics. Let $S_{\rm{min}}\subset S$ be the minority class, meaning the class having the least number of instances, $S_{\rm{maj}}^{j}$ the $j$th majority class where $j=1,...,C-1$ and $P$ the set $\{S_{\rm{min}},S_{\rm{maj}}^{1},...,S_{\rm{maj}}^{C-1}\}$ such that $$S_{\rm{min}}\cup S_{\rm{maj}}^{1}\cup S_{\rm{maj}}^{2}\cup\cdots\cup S_{\rm{maj}}^{C-1}=S$$ (12) and $$\forall S_{1},S_{2}\in P:\;S_{1}\neq S_{2}\Rightarrow S_{1}\cap S_{2}=\emptyset.$$ (13) Let $E_{\rm{maj}}^{j}$ be a set of points sampled uniformly at random from the set $S_{\rm{maj}}^{j}$ that satisfies the following equality: $$|S_{\rm{min}}|=|S_{\rm{maj}}^{j}-E_{\rm{maj}}^{j}|.$$ (14) Then, we remove the set $E_{\rm{maj}}^{j}$ from $S_{\rm{maj}}^{j}$, thus $S_{\rm{maj}}^{j}:=S_{\rm{maj}}^{j}-E_{\rm{maj}}^{j}$. SMOTE. Synthetic Minority Over-sampling Technique, widely known as SMOTE Chawla et al. (2002), is an alternative sampling method that synthesizes data points in the training set for a classifier.
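Before turning to SMOTE in detail, the two random sampling strategies just described can be sketched in pure Python. This is a minimal illustration; the dictionary-based class layout and function names are our own choices.

```python
import random

def random_oversample(data, seed=None):
    """data: dict class_label -> list of instances. Duplicate points of
    every minority class until it matches the majority class size."""
    rng = random.Random(seed)
    target = max(len(v) for v in data.values())
    return {c: v + [rng.choice(v) for _ in range(target - len(v))]
            for c, v in data.items()}

def random_undersample(data, seed=None):
    """Discard points of every majority class uniformly at random
    until it matches the minority class size."""
    rng = random.Random(seed)
    target = min(len(v) for v in data.values())
    return {c: rng.sample(v, target) for c, v in data.items()}
```

Note that over-sampling produces a multiset: duplicated points are exactly what makes a classifier prone to overfitting under this strategy.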
Mathematically speaking, SMOTE is quite similar to random over-sampling, except for how it generates the set $E_{\rm{min}}^{j}$ for the minority class $S_{\rm{min}}^{j}$. For each minority class set $S_{\rm{min}}^{j}$, consider the $K$ nearest neighbors of a point $\vec{x_{i}}\in S_{\rm{min}}^{j}$ in the $N$-dimensional feature space. To generate a new point, first pick one of the $K$ nearest neighbors, say $\vec{x_{n}}$, then multiply the corresponding feature-vector difference by a weight $\delta$ chosen uniformly at random from the interval $[0,1]$ and add this vector to $\vec{x_{i}}$. Therefore, we have a newly synthesized point defined as follows: $$\vec{x_{\rm{new}}}=\vec{x_{i}}+\delta\,(\vec{x_{n}}-\vec{x_{i}}).$$ (15) One repeats this procedure for the other neighbors of the point $\vec{x_{i}}$ to construct the set $E_{\rm{min}}^{j}$. See Fig. 3 for a visualization of the synthesizing process. The core concept of SMOTE is filling out the cloud of minority class instances by interpolating existing data points so that it closely resembles a convex set. This idea, making a convex set of minority instances by interpolation, assumes that the shape of the manifold of the underlying data distribution is itself convex. In a high dimensional space, it is often the case that the distribution of data forms a quite intricate non-convex manifold on which the data we observe is generated. Therefore one must be careful when using a sampling method like SMOTE: the set of points resulting from interpolating data points on such a complex manifold is likely to be a convex set that may be radically different from the underlying concept that a classifier tries to learn. Learning what distinguishes domains In order to find similarities and differences between data of different classes, one needs to develop a notion of similarity between the classes.
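As an aside, the SMOTE interpolation step of Eq. (15) amounts to a one-liner. In this sketch the neighbor list is assumed to be precomputed (e.g. by a $K$-nearest-neighbor query, which the snippet does not show), and the function name is our own.

```python
import random

def smote_point(x_i, neighbors, rng=random):
    """Synthesize one SMOTE point (Eq. 15): pick a neighbor x_n and
    interpolate with a weight delta drawn uniformly from [0, 1]."""
    x_n = rng.choice(neighbors)
    delta = rng.random()
    return [a + delta * (b - a) for a, b in zip(x_i, x_n)]
```

By construction the new point lies on the segment between $\vec{x_{i}}$ and $\vec{x_{n}}$, which is exactly why the synthesized cloud tends toward a convex set.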
In this study we derive such a notion of similarity from the confusion matrices produced by random forest classifiers. The following sub-sections describe the details of decision trees, which are the essential component of a random forest, the random forest classifier itself, and the confusion matrix. Decision trees. A decision tree is a model that describes the relationship between input variables and an output class by recursively asking a question about a single input variable and splitting the data set into two based on the answer, until a data set is sufficiently homogeneous in class. In a high dimensional space, such a split of the data set corresponds to a hyperplane in the space. The algorithm for learning a decision tree splits the data set based on a criterion on the values of an input variable such that the resulting data sets become less heterogeneous, or less impure, in terms of class labels. One widely used criterion, and the one we use in this study, is Gini impurity. The definition of Gini impurity for a data set with $J$ classes is the following: $$I_{G}(f)=\sum_{i=1}^{J}f_{i}(1-f_{i})=\sum_{i=1}^{J}(f_{i}-f_{i}^{2})=\sum_{i=1}^{J}f_{i}-\sum_{i=1}^{J}f_{i}^{2}=1-\sum_{i=1}^{J}f_{i}^{2},$$ (16) where $f_{i}$ is the probability that an item chosen from the data set belongs to class $i$. Gini impurity becomes $0$ if all items in a set belong to the same class, meaning the set is “pure,” and takes a value greater than $0$ if the set contains items of multiple classes. Each split essentially seeks the best possible value of an input variable such that the decrease of Gini impurity is the largest when the data set is split at that value (or at the hyperplane defined by it). Splitting continues until no further improvement can be made, and the terminal nodes of a tree are called leaves, each corresponding to one of the class labels in the data set. Random forests.
The popular random forest classifier is a type of ensemble learning method that combines a number of so-called “weak” classifiers together Breiman (2001). When it is given a data set to predict after training the weak classifiers, it outputs the majority vote of all the weak classifiers, and this aggregation prevents the random forest from overfitting to the training data. In a random forest, the weak classifiers are decision trees, as explained in the previous section. The learning phase of a random forest classifier first involves random sub-sampling of the original data with replacement $B$ times; each time, the sub-sampled data set is fed to a decision tree. For each decision tree a set of randomly sampled input variables (features) is used for splitting, and this random selection of features is called the random subspace method or feature bagging. This prevents the classifier from focusing too much on feature sets that are highly predictive in the training set. One of the advantageous byproducts of random forests is that one can rank input variables, or features, based on their importance in the classification. Each time a split is made on a node in a decision tree, the decrease of Gini impurity can be attributed to selecting a feature as the splitting point. Calculating the average decrease of Gini impurity for selecting a feature over all decision trees in the random forest gives us the importance of the feature, which is very consistent with the result of the original method for calculating variable importance Breiman (2001). This ranking of feature importance is a crucial part of the analysis that we describe in the next section. Confusion matrices and domain similarity. A confusion matrix depicts when and how frequently a classifier makes mistakes. The row labels of the matrix usually correspond to true labels and the column labels to predicted labels.
An element $c_{ij}$ in a confusion matrix represents the number of occurrences in which the classifier predicted an instance of class $i$ as class $j$. So it is easy to notice that the diagonal elements of a confusion matrix, namely $c_{ii}$ for $i=1,...,n$, represent the correct predictions of the classifier. What we are interested in, however, lies in the off-diagonal elements of a confusion matrix. The fact that a classifier confuses classes $i$ and $j$ implies a similarity between the classes: if the points of two different classes in a high dimensional feature space are often misclassified as each other, the points of the two classes are in fact close to each other in the feature space, implying that these classes are so similar that a classifier cannot distinguish one from the other. The information in a confusion matrix, when and how frequently a classifier makes mistakes, is essential in order for us to answer one of the questions we have asked: Are there any sets of network categories that are inherently indistinguishable from each other based solely on network structure? It is tempting to use a confusion matrix as a similarity matrix owing to the fact that its off-diagonal elements imply the similarities among class labels. Using a confusion matrix as a similarity matrix, however, involves the following issues: 1. In a confusion matrix, classes with abundant data tend to have large counts for elements in the matrix due to the abundance of test data, whereas classes with fewer data have fewer counts in the matrix. 2. Usually the confusion matrix is not symmetric, but a number of similarity-related methods assume that the input matrix is. Therefore we perform the following operations in order to derive a similarity matrix based on a confusion matrix: 1. Normalize each row $i$ of the confusion matrix so that $\sum_{j=1}^{J}c_{ij}=1$ 2.
Symmetrize the resulting matrix from operation $1$ by setting each pair of symmetric elements to $c_{ij},c_{ji}=\max(c_{ij},c_{ji})$ Experimental results In this section, we present the experimental settings for our analyses, each of which helps us gain insights for answering the questions we proposed in the introduction, namely questions about the distinguishing structural features, the existence of indistinguishable pairs of network categories, and their implications. We present the results of these analyses in sequential order along with those questions. Discriminative features The first question of focus concerns the set of distinguishing features that make a category of networks “stand out” among others. In order to address this question, we look at the statistics of feature importance derived from a random forest classifier. The assumption here is that distinguishing features should be able to split the data in the feature space into sub-spaces such that the separation of class labels is good, meaning that these sub-spaces should contain nearly a single class label within themselves. The degree of goodness of separation is expressed as the decrease of Gini impurity, and if a feature distinguishes one category of networks from the others, then selecting the feature in a decision tree should yield a large decrease of Gini impurity in splitting the data and the feature should be highly ranked in the feature importance ranking. Therefore, if we observe a feature that ranks at the top many times for binary classifications in which the positive label corresponds to the class of interest and the negative label corresponds to everything else grouped together, we can assert that the feature distinguishes that specific kind of network from the rest. Individual feature approach.
First, since not all of the sub-domains have a sufficient number of samples to support our analysis, and due to limitations of time and space, we select a set of six representative network sub-domains that have been investigated in a number of studies and attract particular interest in their respective domains. These representative sub-domains are: protein interaction, ecological food web, metabolic, connectome (brain networks), online social and communication (autonomous systems). Second, for each representative class we proceed to run binary classification 1000 times using random forests, in which the sub-domain of interest corresponds to the positive label and the other sub-domains grouped together correspond to the negative label. The set of features for this task includes the clustering coefficient, degree assortativity and the $k=4$ connected network motifs, thus eight features in total. In each run we split the data set into training and test sets with a ratio of $7:3$ while preserving the ratio of the class distribution. In each run the AUC (Area Under the ROC Curve) score is calculated in order to assess the performance of the random forest classifier on the binary classifications. We also record for each run the ranking of feature importance, in which features are sorted according to their Gini impurity decrease in the training phase of the random forest classifier. We then average the AUC scores over the 1000 runs and aggregate all of the recorded rankings of feature importance. Finally, we select the two most important features from the aggregated ranking and plot all of the data points in the two-dimensional feature space in which the $x$- and $y$-axes correspond to the most and second most important features, respectively. This visualization will help us understand to what degree and how a category of networks can be separated from the rest in the two-dimensional feature space, if any pair of distinguishing features exists. Results on individual features. Fig.
4 shows two-dimensional plots for the representative network sub-domains, with the axes being the top two important features selected from the statistics of aggregated feature importance shown in Fig. 5. One important observation here is that the AUC score captures the separability and the degree of spread of a specific class label in the two-dimensional feature space: the averaged AUC score for protein interaction networks shown in Fig. 4a is 0.693 and the data points are spread across a large region of the feature space; the 2D plot of communication networks shown in Fig. 4f, on the other hand, displays a cluster of points corresponding to the class, and the averaged AUC score for the binary classifications is 0.992, which is much higher than that of protein interaction networks. The degree of spread of the data points implies the structural variability of networks within a single category. For example, protein interaction networks and connectome exhibit high structural variability in the space defined by the selected features, whereas other network categories such as metabolic networks, ecological food webs, etc. exhibit low structural variability in the selected features. This suggests the possible need for a new set of features for protein interaction networks and connectome with which we could distinguish them from the other kinds. Fig. 5 shows the aggregated rankings of feature importance. In this figure, one can observe the general trend of an informative feature set and the strength of those features in the ranking. For example, the motif $m4\_6$, a connected path, for metabolic networks and the motif $m4\_1$, the $k=4$ clique, for online social networks are the most important features and their strength is quite dominant: $m4\_6$ for metabolic networks ranks first 939 times out of 1000 runs and $m4\_1$ for online social networks ranks first 974 times out of 1000 runs, respectively.
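The Gini impurity of Eq. (16) and the aggregation of per-run importance rankings described above can be sketched as follows. The importance vectors would come from each trained forest (e.g. scikit-learn's `feature_importances_`); the function names and the tally-of-top-ranks simplification are our own.

```python
from collections import Counter

def gini_impurity(labels):
    """Gini impurity (Eq. 16): 1 - sum_i f_i^2 over class frequencies."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def aggregate_top_feature(importance_runs, feature_names):
    """Count how often each feature has the largest importance
    across repeated classification runs."""
    top = Counter()
    for imp in importance_runs:
        best = max(range(len(imp)), key=lambda k: imp[k])
        top[feature_names[best]] += 1
    return top
```

A tally such as 939 first-place finishes for $m4\_6$ out of 1000 runs is the kind of statistic that the aggregated rankings in Fig. 5 summarize.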
It is interesting to note that for online social networks the clustering coefficient and degree assortativity, both long known as distinguishing features for social networks in general, do not rank as the most or even the second most important features here. On the other hand, the ranking of feature importance for connectome displays a lack of definitive features, which can be seen in the almost equal frequencies of features appearing at any rank. This indicates that none of the features we have used can explain the network category succinctly: for online social networks we can describe the networks as having an extraordinary number of $k=4$ cliques when compared to their randomized counterparts; we cannot, however, describe connectome in such a way with this feature set. These distinguishing features observed in the figures seem to have some implications for the processes in a network. Ecological food-webs display an abundance of $m4\_4$ subgraphs, namely squares of four nodes, which are thought to be major constituent elements of food-webs as they describe layers of the food chain Milo et al. (2002); Rip et al. (2010): animals in the same layer of the food chain do not often prey on each other, but prey on animals in the layer below and are preyed on by animals in the layer above; usually this relationship is depicted by a directed network motif known as the bi-parallel motif. Note, however, that this directed motif is converted into an undirected version of itself, namely $m4\_4$, in our study. Online social networks have an unusual number of $m4\_1$ subgraphs, namely $k=4$ cliques, and this indicates a strong local bonding mechanism in social networks: suppose a person $A$ has friends $B$ and $C$ who know each other (a triangle of $A$, $B$ and $C$); if $B$ and $C$ have a mutual friend, called $D$, then it is likely in online social networks that the person $A$ is also a friend of $D$, and this forms a $k=4$ clique in the network.
On the other hand, communication networks, namely the Internet at the level of autonomous systems, exhibit an underrepresented number of $m4\_1$ subgraphs. It was previously reported that the number of triangles in communication networks is less than the expected number of triangles in randomized counterparts having the same degree distribution Maslov et al. (2004). The $m4\_1$ subgraph, namely the $k=4$ clique, contains four triangles within itself. Therefore it is not illogical to say that the underrepresentation of triangles approximately corresponds to the underrepresentation of $k=4$ cliques, which leads us to claim that our finding coincides with the previous study. This underrepresentation of the $k=4$ clique may reflect the underlying growth or construction mechanism of communication systems: the whole system needs to be connected in order for the Internet to work, but does not need too many connections among autonomous systems, as the cost of connections should be minimized. For some network sub-domains, however, the implications of the distinguishing features are unclear. Although protein interaction networks and connectome exhibit degree assortativity and the $m4\_4$ motif, namely the $4$-cycle, as their most distinguishing features, their structural variability, which is displayed as the spread of points, makes it infeasible to even hypothesize a relationship between those features and the underlying mechanism of such networks. For metabolic networks, the underrepresentation of the $m4\_6$ motif ($4$-path) is found to be the most distinguishing characteristic of the category. However, connecting this result with a possible mechanism that yields the underrepresentation of the motif in metabolic networks is still under investigation.
In this analysis we have successfully identified a set of distinguishing structural features for some network sub-domains, such as ecological food webs, online social networks and communication networks, and these features seem to coincide with the results from previous studies of those sub-domains. We have also pointed out that the distinguishing features have some implications for the underlying mechanisms of networks in those sub-domains. Characterizing the structural diversity of different types of networks. Every network belongs to some network domain or sub-domain that usually describes the properties and even the structure of the network. As we have seen in the previous section, some networks of the selected representative sub-domains exhibit a structural uniqueness which makes them stand out among others in the feature space. However, we can also observe overlaps between networks of the representative sub-domains and networks of other sub-domains in Fig. 4, which leads to the question we have asked: Are there any sets of network categories that are inherently indistinguishable from each other based on network structure? In this section we explore the structural similarities of different kinds of network domains and sub-domains using machine learning techniques, namely random forests and confusion matrices. Experimental settings. We derive the structural similarity between network (sub-)domains from a confusion matrix that describes when a random forest classifier makes mistakes and when it does not. However, due to the nature of the classification algorithm and the random splitting of the data into training and testing sets, there is some randomness in a confusion matrix every time one runs the analysis. Therefore, in order to remove the factor of randomness as much as possible, we run the analysis 1000 times and average the outcomes, namely the confusion matrices.
Averaging confusion matrices is done element-wise: an element of the averaged confusion matrix, say $c$, is defined as $c=\frac{1}{1000}\sum_{i=1}^{1000}c_{i}$, where $c_{i}$ is the corresponding element of the $i$th confusion matrix. In order to control and compare the impact of the class imbalance problem, we use four sampling methods: (i) no sampling, namely running the analysis on the original data set; (ii) random over-sampling, in which minority classes are over-sampled to the extent that all classes have the same number of instances as the largest class; (iii) random under-sampling, in which majority classes are under-sampled to the extent that all classes have the same number of instances as the smallest class; (iv) SMOTE, in which all minority classes have new instances synthesized so that their number of data points equals that of the largest class. The set of features includes, as before, the clustering coefficient, degree assortativity and the $k=4$ connected network motifs, which results in eight features. Distinguishing networks by domain. We first proceed to work on the classification of network domains, which include Biological, Social, Informational, Synthetic, Technological and Transportation. Fig. 6 shows the aggregated confusion matrices for each of the sampling strategies. In an aggregated confusion matrix, each cell represents the averaged frequency with which an instance of class $i$ is classified as class $j$ over the 1000 experiments. As the majority classes inherently contain a larger number of test data points, which leads to larger counts in the confusion matrix, some normalization is needed so that each class of network domains becomes comparable with the others. The normalization in this study is defined as follows: let $c_{ij}$ be an element in a confusion matrix. We normalize this quantity by the sum of the elements in the row $c_{ij}$ belongs to, namely $\sum_{j=1}^{N}c_{ij}$, where $N$ is the number of network domains.
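The averaging, row normalization and the max-based symmetrization described earlier can be sketched in a few lines of numpy; the function name is our own choice.

```python
import numpy as np

def confusion_to_similarity(confusions):
    """Average confusion matrices element-wise over repeated runs,
    row-normalize so each row sums to 1, then symmetrize by taking
    the element-wise max of c_ij and c_ji."""
    c = np.mean([np.asarray(m, dtype=float) for m in confusions], axis=0)
    c = c / c.sum(axis=1, keepdims=True)   # row normalization
    return np.maximum(c, c.T)              # symmetrization
```

The result is a symmetric matrix of values in $[0,1]$ that can serve directly as the weighted undirected adjacency matrix of a domain network.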
The diagonal elements of the confusion matrices shown in Fig. 6 indicate the correct classifications. Under every sampling method, instances of all network domains are classified relatively correctly, which is observable from the colors of the diagonal cells, except for Informational networks, for which we observe unsuccessful classifications. In spite of the fact that for random under-sampling there are only 26 instances in the training set of each domain, the confusion matrix for this sampling method still exhibits a strong diagonal pattern, which may imply that instances of the various network domains are inherently quite separable in the feature space. Although it is a meaningful result that the instances of network domains may be inherently separable, our focus now moves to the off-diagonal elements of the confusion matrices. In all matrices, a number of instances of Informational networks are classified as other domains, such as Biological, Synthetic and Technological, which is observable from the elements of the row corresponding to “true Informational.” Also, some instances of other network domains are classified as Biological networks, observed in the elements of the column corresponding to “predicted Biological.” These pieces of information imply the existence of underlying similarities between network domains. However, it is hard to perceive the structure of inter-domain similarity just by looking at the confusion matrix. Therefore, we construct a network of network domains from a similarity matrix (a weighted undirected adjacency matrix) that is constructed based on a confusion matrix, as shown in Fig. 9. The operation for constructing the similarity matrix based on the normalized confusion matrix is straightforward: we symmetrize the matrix by taking the maximum of each pair of elements that are symmetric to each other with respect to the diagonal.
In all cases the Biological, Informational and Technological domains are connected by thick edges, indicating that their structural similarities derived from the confusion matrices are quite high. From the figure, it appears that there is no “winning” sampling method that produces an outstanding domain network, which means that all of the sampling methods produce domain networks that are practically the same. This analysis, which is based on network domains, is informative in the sense that it consistently exhibits a well-connected group of domains that includes Biological, Informational and Technological. However, each network domain includes sub-domains within itself, and these network sub-domains are quite diverse in terms of network function. For example, neural networks in a brain and ecological food webs, both in the Biological domain, function very differently. Grouping sub-domains of different functions together as a single category may lose some information local to a specific sub-domain. Therefore, we proceed to analyze the networks in a more fine-grained setting, namely using network sub-domains as the class labels in the classification tasks. Distinguishing networks by subdomain. Here we continue the same analysis as in the previous section, but using network sub-domains as the class labels for the classification tasks. Although in total we have 33 network sub-domains, as shown in Fig. 2b, some sub-domains have only a few instances, which makes it infeasible to proceed to classification tasks, as they involve a train-test split of the data and SMOTE assumes that a training set has enough instances for selecting the $k$ nearest neighbors.
Therefore we first exclude the sub-domains with fewer than seven instances, which enables us to split the training and test sets in a ratio of $7:3$ while keeping the class distribution in both sets the same, and enables SMOTE to successfully find $k=3$ nearest neighbors for any sub-domain. This filtering results in 22 network sub-domains for this analysis (the excluded network sub-domains are: Language, Relatedness, Legal, Commerce, Recommendations, Collaboration, Fiction, Relationships, Email, Power grid and Airport). As in the previous setting for network domains, we run the classification tasks 1000 times for each sampling method, and aggregate, normalize and symmetrize the resulting confusion matrices. Figure 8 shows the aggregated confusion matrices for each sampling method, in which we can observe that all of the confusion matrices exhibit the diagonal pattern, an indication of the underlying separability of network sub-domains. Note, however, that some sub-domains, such as bayesian, web graph, offline social, water distribution and software dependency, are often not classified correctly, which may be due to their few instances and/or the fact that the network structures of their instances are actually quite similar to those of other sub-domains. It is easy to notice that the aggregated confusion matrix for random under-sampling displays fewer white cells within itself, meaning that under this sampling method the classifier confused a number of sub-domains with other sub-domains. In order to visualize the underlying similarities among network sub-domains, we again construct networks of sub-domains based on a weighted undirected adjacency matrix derived from the aggregated confusion matrices, shown in Fig. 9.
From this figure, we can observe that the network domain is not necessarily a good indicator of sub-domain clustering: informational networks, including web graph, bayesian and peer-to-peer networks, and biological networks, such as metabolic, fungal, food web, etc., are not clustered together, meaning that we do not observe the sub-domains having the same color forming a community together. This could be attributed to the fact that some network domains, such as biological networks, entail a broad spectrum of network sub-domains whose instances are drastically different in terms of the physical size of the things the nodes represent (from cells to animals) and the processes of the networks (from chemical reactions to prey-predator relationships). Then the question is: What could be a good indicator of similarity in networks of different kinds? In order to answer this question, we first need to discover the communities of the subdomain networks, which are groups of nodes within which the weighted edge density is high, but between which the weighted edge density is low. Fig. 10 shows the networks of sub-domains in which the colors of nodes correspond to the community membership found by the community detection algorithm proposed by Clauset et al. Clauset et al. (2004). We have used an implementation of the modified version of this algorithm for weighted networks that is available in Python-igraph as the method community_fastgreedy() Csardi and Nepusz (2006). From Fig. 10, one may notice that the community structure is almost consistent across the different sampling methods, meaning that certain groups of sub-domains are always in the same community. For instance, for all sampling methods, offline social, connectome and affiliation networks are in the same community. Some network sub-domains, however, change their community membership in some subdomain networks.
For instance, online social network and forest fire model networks join the community of offline social networks for the no-sampling, random over-sampling and random under-sampling methods, but join the community of software dependency (red color) for SMOTE sampling. This variation of community membership across the subdomain networks could be attributed to the effect of each sampling method on the decision surfaces a classifier builds for classification. For instance, decision surfaces built by a classifier under the random under-sampling strategy should be simple in shape since the training set is very sparse in terms of the number of data points. On the other hand, decision surfaces built by a classifier for random over-sampling are likely more complicated in shape and prone to over-fitting, as the number of data points for training is larger and duplicated points force the classifier to adjust itself to those points. This variation of decision surfaces may therefore lead some data points to be classified differently under different sampling methods, which ultimately leads to the variation of community memberships. In order to facilitate analyzing the variation of community memberships, we have constructed a matrix, shown in Fig. 11, in which the frequency with which two sub-domains share the same community membership corresponds to the color intensity. For instance, offline social, connectome and affiliation networks are assigned to the same community four times. From this figure, one may notice that there appear to be three groups of sub-domains that are consistently assigned to the same community. The first one is a community of social networks, including online and offline social networks as well as affiliation networks and the forest fire model, with one exception: connectome, namely brain networks.
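The co-membership matrix can be sketched as a simple pair count over the community partitions produced for each sampling method. The sub-domain names and partitions below are illustrative assumptions, not the paper's actual results.

```python
# Count, for each pair of sub-domains, in how many partitions (one per
# sampling method) they end up in the same community.  These counts are
# what the color intensity in a figure like Fig. 11 encodes.
from itertools import combinations

def comembership_counts(partitions, items):
    """For each pair of items, count the partitions where they share a community."""
    counts = {pair: 0 for pair in combinations(sorted(items), 2)}
    for partition in partitions:
        for community in partition:
            for pair in combinations(sorted(community), 2):
                counts[pair] += 1
    return counts

subdomains = ["offline social", "connectome", "affiliation", "web graph"]
partitions = [  # one hypothetical partition per sampling method
    [{"offline social", "connectome", "affiliation"}, {"web graph"}],
    [{"offline social", "connectome", "affiliation"}, {"web graph"}],
    [{"offline social", "connectome", "affiliation", "web graph"}],
    [{"offline social", "connectome", "affiliation"}, {"web graph"}],
]
counts = comembership_counts(partitions, subdomains)
```

With these illustrative partitions, offline social, connectome and affiliation co-occur in all four partitions, while web graph joins them only once.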
This grouping of social networks could be attributed to the same underlying process, namely a forest-fire process: on online social networks, for example Facebook networks, one becomes a friend with someone, finds other friends on that person’s friend list, sends them friend requests and recursively continues the process; on offline social networks, a person introduces his/her friends to you, they also introduce their friends to you, and the process recursively continues. The possible explanations for why we observe connectome being in the same community as such social networks are as follows: (i) the lack of a feature or dimension in the feature space that distinguishes brain networks from social networks; (ii) the underlying generative mechanism of brain networks is actually similar to that of social networks. As far as we know, however, there is no previous study investigating commonality of processes of both social and brain networks. Therefore it is reasonable to speculate here that we lacked a set of distinguishing features for connectome, which resulted in the clustering of the network with social networks. The second group of sub-domains corresponds to the networks that have been claimed to have power-law degree distributions, which include scale-free networks, metabolic networks, web graphs, etc.; some of these networks have been conjectured to grow according to a mechanism called preferential attachment. In this network generative process, newly added nodes in a network, for example newly created web pages or software packages, tend to connect to the popular or high-degree nodes, i.e., popular web sites or widely used software packages. The third and last group of sub-domains is the networks of “flow”, which includes electrical signal (digital circuit), information (bayesian), water (water distribution), nutrient (fungal), people/trains (subway) and cars (road).
Also, if we look at the subdomain networks, digital circuit and bayesian networks are always tightly connected together. This may be due to the fact that they are both input-output networks as well as flow networks. The rest of the flow networks have another common property, that is, physical embedding of the network. These networks are strongly constrained by physical limitations. For instance, it is almost impossible for a node in a physically embedded network to have a thousand connections. From these communities of sub-domains, one could hypothesize what governs the structure of complex networks: the functionality, constraints and growing mechanisms of networks may play an important role in the formation of a specific network structure. As we have seen, the sub-domains of networks that have similar functionality, constraints or growing mechanisms exhibit similar structural patterns, which is captured in the confusion matrices and also in the communities of networks. However, there seems to be a case where we do not know why a network sub-domain is in a community and how the hypothesis would explain it: connectome, or brain networks, shares the same community with social networks. If our hypothesis is correct, then brain networks should have either the same function, constraints or growing mechanism as social networks. As far as we know, there is no study which compares connectome and social networks in terms of network structure and explains the similarity of their function, constraints and generative mechanism. One possible direction to take would be finding a set of features for which those sub-domains have similar values and investigating a possible generative process which yields networks having such structural features.
Although there are some exceptional cases, our findings from this experiment provide some supporting evidence that the network structure may be influenced by the underlying function, constraints and growing mechanism of the network. Discussion In the previous section, we have found the distinguishing features for various network sub-domains, with possible explanations for the underlying processes of the networks, and the hidden similarities among network domains and sub-domains based on the subdomain networks we have constructed. Here, we synthesize our findings and the previous studies together. In the investigation of distinguishing features and the separability of sub-domains, we have observed that some network sub-domains are hard to separate in the high-dimensional feature space, which can be seen in the AUC score. This finding has an interesting connection with the subdomain networks: the sub-domains having high separability, such as online social networks and ecological food webs, tend to be at the periphery of the subdomain networks, whereas the sub-domains having low separability, such as protein interaction networks and connectome, tend to be at the core. In the subdomain networks, the core nodes essentially depict the sub-domains that are frequently misclassified by a classifier due to their structural similarity, and the periphery displays sub-domains that are dissimilar to other sub-domains in terms of network structure. Therefore, it is reasonable to infer that the sub-domains at the core that were not studied for finding the distinguishing features, such as bayesian networks and web graphs, may also exhibit low separability in the feature space. As we have seen in the confusion matrices in Fig. 6, network domains such as Biological, Social, etc. are quite separable in the feature space.
Also, the confusion matrices of network sub-domains exhibit strong diagonal patterns except for a few, such as bayesian networks and web graphs, indicating the separability of networks at the sub-domain level. The separability of networks at the different levels (domain and sub-domain) tells us something about where networks of different types occur within a manifold in the high-dimensional feature space we have constructed: in the feature space at the domain level, points that correspond to individual networks in the same network domain occupy some space in an intricately shaped manifold; diagonal elements in the confusion matrices for network domains imply that instances of different network domains occupy different locations within the manifold with some overlap, which could be observed in the off-diagonal elements of the confusion matrices. At the network sub-domain level, we see a somewhat different outcome. Some sub-domains in a network domain occupy regions in the feature space that are completely separated. They correspond to the non-overlapping sections of the manifold at the level of network domains. They include, for example, food webs for Biological, peer to peer for Informational, communication for Technological and roads for Transportation. Some network sub-domains, however, almost completely overlap in the manifold with other sub-domains, and they correspond to the overlapping sections of the manifold at the network domain level. They include bayesian and web graphs for Informational, and water distribution and software dependency for Technological. This idea of thinking of networks as data points in a manifold of complex shape within some feature space has been explored previously. Corominas-Murtra et al. have studied hierarchy in a feature space in which each axis corresponds to some feature related to the structure of networks, such as tree-ness, feed-forwardness and order-ability Corominas-Murtra et al. (2013).
In this study, it is shown that different kinds of networks, such as technological, language and neural networks, occupy particular regions in the feature space. One may notice in this study that some regions in the feature space are not occupied at all by any networks. This observation yields a question about the feature space: are all regions of a feature space theoretically possible for networks to occupy? This question may be answered with the study done by Ugander et al. Ugander et al. (2013). They have studied a feature space in which each axis corresponds to a subgraph frequency of online social networks and mathematically proved that there are some regions in the feature space that are infeasible to occupy. In other words, it is theoretically impossible for networks to have the structural properties which correspond to those regions in the feature space. Interestingly, they observed that real-world networks, in this case Facebook networks, only occupy some sections of the theoretically feasible region. Taken together, these studies suggest that networks in the high-dimensional feature space occupy only some regions of the entire space that is theoretically feasible. This phenomenon may be due to the fact that the unoccupied, yet theoretically feasible, regions correspond to inefficient network structures. Many of the biological and technological networks are optimized for functioning, either by natural selection over the course of evolution or by the design effort of a number of engineers, and this may push the networks into a certain region in the feature space. The convergence toward a certain region in the feature space seems to happen in both kinds of networks, namely biological and technological networks.
This is supported by one of our findings that fungal networks, a kind of biological network developed by a biological process, and water distribution networks, a kind of network designed by engineers, are structurally similar based on the results of the confusion matrices. This similarity may be due to the fact that both are optimized essentially for efficient flow on the network and for cost reduction of wiring in the network. Conclusion In this paper, we have studied 986 real-world networks along with 575 synthesized networks in order to formulate a hypothesis about the structural diversity of complex networks across various domains and sub-domains. Our study successfully identified the distinguishing features for some network sub-domains, including metabolic, ecological food web, online social and communication networks, and found that some of these features could naturally explain the process in the network of interest. There are sub-domains, however, such as protein interactions and connectomes, that seem to be indistinguishable from others with the set of features we have utilized. Using machine learning techniques such as the random forest classifier, confusion matrices and a network community detection algorithm, we have found that there are some categories of networks that are hard to distinguish from each other by a classifier based solely on their structural features, and these groups of structurally similar yet categorically different networks in fact seem to have some common properties, such as the same functionality, physical constraints and generative process. There are, however, some categories of networks that are found to be structurally similar, and yet our hypothesis seems to lack a theoretical basis for explaining the observed phenomenon.
Nevertheless, our study sheds light on a direction in which we could uncover the underlying principles of network structure: the functionality, constraints and growing mechanism of a network may play an important role in the construction of networks having certain structural properties. There is still some room for our study to be improved. The class imbalance problem, even though we have utilized sampling methods in order to alleviate it, is one of the main remaining concerns. We could possibly discover other hidden properties and relationships if more instances were added for the sub-domains that were excluded from the analyses due to the lack of instances needed for the classification tasks, such as language networks, collaboration networks, power grid networks, etc. Another direction for future research is incorporating other scale-invariant structural features. In this study we have only used a set of eight features. It is possible, however, that adding other dimensions to the feature space may reveal other hidden properties that were not captured in our feature set. Lastly, our study could extend the work by Middendorf et al., in which they trained a machine learning classifier on instances of various network generative models, classified protein interaction networks of Drosophila melanogaster, a species of fly, and identified the generative mechanism that was most likely to produce the protein interaction networks Middendorf et al. (2005). Using a broad spectrum of networks of different categories, our study suggests a way to construct and validate a hypothesis regarding which network sub-domains share a common generative process. For example, one could train a classifier on networks that are constructed by network generative models, feed the classifier instances of real-world networks and observe which categories of networks are classified as which generative processes. References Kleinberg (2008) J. Kleinberg, Commun. ACM 51, 66 (2008).
Bullmore and Sporns (2009) E. Bullmore and O. Sporns, Nat Rev Neurosci 10, 186 (2009). Lee et al. (2008) D.-S. Lee, J. Park, K. A. Kay, N. A. Christakis, Z. N. Oltvai,  and A.-L. Barabási, Proceedings of the National Academy of Sciences 105, 9880 (2008). Ings et al. (2009a) T. C. Ings, J. M. Montoya, J. Bascompte, N. Blüthgen, L. E. Brown, C. F. Dormann, F. K. Edwards, D. Figueroa, U. Jacob, J. I. Jones, R. B. Lauridsen, M. E. Ledger, H. M. Lewis, J. M. Olesen, F. F. van Veen, P. H. Warren,  and G. Woodward, Journal of Animal Ecology 78, 253 (2009a). Faloutsos et al. (1999) M. Faloutsos, P. Faloutsos,  and C. Faloutsos, SIGCOMM Comput. Commun. Rev. 29, 251 (1999). Hines et al. (2010) P. Hines, S. Blumsack, E. C. Sanchez,  and C. Barrows, in Proc. 43rd Hawaii Internat. Conf. on System Sciences, HICSS ’10 (IEEE Computer Society, Washington, DC, USA, 2010) pp. 1–10. Yazdani and Jeffrey (2011) A. Yazdani and P. Jeffrey, Chaos 21, 016111 (2011). Roth et al. (2012) C. Roth, S. M. Kang, M. Batty,  and M. Barthelemy, Journal of The Royal Society Interface  (2012), 10.1098/rsif.2012.0259. Strogatz (2001) S. H. Strogatz, Nature 410, 268 (2001). Newman (2003) M. E. J. Newman, SIAM Review 45, 167 (2003). Albert and Barabási (2002) R. Albert and A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002). Boccaletti et al. (2006) S. Boccaletti, V. Latora, Y. Moreno, M. Chavez,  and D.-U. Hwang, Physics Reports 424, 175 (2006). Barabasi and Albert (1999) A.-L. Barabasi and R. Albert, Science 286, 509 (1999). Clauset et al. (2009) A. Clauset, C. R. Shalizi,  and M. E. J. Newman, SIAM Rev. 51, 661 (2009). Albert et al. (2000) R. Albert, H. Jeong,  and A. Barabasi, Nature 406, 378 (2000). Milgram (1967) S. Milgram, Psychology Today 1, 61 (1967). Watts and Strogatz (1998) D. J. Watts and S. H. Strogatz, Nature 393, 409 (1998). Latora and Marchiori (2001) V. Latora and M. Marchiori, Phys. Rev. Lett. 87, 198701 (2001). Barahona and Pecora (2002) M. Barahona and L. M. Pecora, Phys. Rev. Lett. 
89, 054101 (2002). Girvan and Newman (2002) M. Girvan and M. E. J. Newman, Proceedings of the National Academy of Sciences 99, 7821 (2002). Newman and Girvan (2004) M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004). Newman (2012) M. E. J. Newman, Nature Physics 8, 25 (2012). Newman (2002) M. E. J. Newman, Phys. Rev. Lett. 89, 208701 (2002). Newman and Park (2003) M. E. J. Newman and J. Park, Phys. Rev. E 68, 036122 (2003). Mislove et al. (2007) A. Mislove, M. Marcon, K. P. Gummadi, P. Druschel,  and B. Bhattacharjee, in Proc. 7th ACM SIGCOMM Conf. on Internet Measurement, IMC ’07 (ACM, New York, NY, USA, 2007) pp. 29–42. van den Heuvel et al. (2016) M. P. van den Heuvel, E. T. Bullmore,  and O. Sporns, Trends in Cognitive Sciences 20, 345 (2016). Bullmore and Sporns (2012) E. Bullmore and O. Sporns, Nature Reviews Neuroscience 13, 336 (2012). Crossley et al. (2013) N. A. Crossley, A. Mechelli, P. E. Vértes, T. T. Winton-Brown, A. X. Patel, C. E. Ginestet, P. McGuire,  and E. T. Bullmore, Proceedings of the National Academy of Sciences 110, 11583 (2013). Gastner and Newman (2006a) M. T. Gastner and M. E. Newman, The European Physical Journal B 49, 247 (2006a). Gastner and Newman (2006b) M. T. Gastner and M. E. J. Newman, Journal of Statistical Mechanics: Theory and Experiment 2006, P01015 (2006b). Faust and Skvoretz (2002) K. Faust and J. Skvoretz, Sociological Methodology 32, 267 (2002). Milo et al. (2002) R. Milo, S. Shen-Orr, S. Itzkovitz, N. Kashtan, D. Chklovskii,  and U. Alon, Science 298, 824 (2002). Milo et al. (2004) R. Milo, S. Itzkovitz, N. Kashtan, R. Levitt, S. Shen-Orr, I. Ayzenshtat, M. Sheffer,  and U. Alon, Science 303, 1538 (2004). Onnela et al. (2012) J.-P. Onnela, D. J. Fenn, S. Reid, M. A. Porter, P. J. Mucha, M. D. Fricker,  and N. S. Jones, Phys. Rev. E 86, 036104 (2012). Newman (2010) M. Newman, Networks: An Introduction (Oxford University Press, Inc., New York, NY, USA, 2010). da F. Costa et al. (2007) L. da F. Costa, F. A. 
Rodrigues, G. Travieso,  and P. R. V. Boas, Advances in Physics 56, 167 (2007). Fortunato and Barthélemy (2007) S. Fortunato and M. Barthélemy, Proceedings of the National Academy of Sciences 104, 36 (2007). Good et al. (2010) B. H. Good, Y.-A. de Montjoye,  and A. Clauset, Phys. Rev. E 81, 046106 (2010). Alon (2007) U. Alon, Nat. Rev. Genet. 8, 450 (2007). Sporns and Kotter (2004) O. Sporns and R. Kotter, PLoS Biology 2 (2004). Orr et al. (2002) S. S. S. Orr, R. Milo, S. Mangan,  and U. Alon, Nature Genetics 31, 64 (2002). Himsolt (1999) M. Himsolt, GML: A portable Graph File Format, Tech. Rep. (Universität Passau, 94030 Passau, Germany, 1999). Erdős and Rényi (1960) P. Erdős and A. Rényi, in Proc. Mathematical Institute of the Hungarian Academy of Sciences (1960) pp. 17–61. de Solla Price (1965) D. J. de Solla Price, Science 149, 510 (1965). Leskovec et al. (2007) J. Leskovec, J. Kleinberg,  and C. Faloutsos, ACM Trans. Knowl. Discov. Data 1 (2007), 10.1145/1217299.1217301. Csardi and Nepusz (2006) G. Csardi and T. Nepusz, InterJournal Complex Systems, 1695 (2006). Ahmed et al. (2015) N. K. Ahmed, J. Neville, R. A. Rossi,  and N. Duffield, in ICDM (2015) pp. 1–10. Tange (2011) O. Tange, ;login: The USENIX Magazine 36, 42 (2011). He and Garcia (2009) H. He and E. A. Garcia, IEEE Trans. on Knowl. and Data Eng. 21, 1263 (2009). Chawla et al. (2002) N. V. Chawla, K. W. Bowyer, L. O. Hall,  and W. P. Kegelmeyer, J. Artif. Int. Res. 16, 321 (2002). Breiman (2001) L. Breiman, Mach. Learn. 45, 5 (2001). (52) https://www.stat.berkeley.edu/~breiman/RandomForests/cc_home.htm#giniimp. Rip et al. (2010) J. M. K. Rip, K. S. McCann, D. H. Lynn,  and S. Fawcett, Proceedings of the Royal Society of London B: Biological Sciences 277, 1743 (2010). Maslov et al. (2004) S. Maslov, K. Sneppen,  and A. Zaliznyak, Physica A: Statistical and Theoretical Physics 333, 529 (2004). 
Clauset et al. (2004) A. Clauset, M. E. J. Newman,  and C. Moore, Phys. Rev. E 70, 066111 (2004). Corominas-Murtra et al. (2013) B. Corominas-Murtra, J. Goni, R. V. Solé,  and C. Rodríguez-Caso, Proceedings of the National Academy of Sciences 110, 13316 (2013). Ugander et al. (2013) J. Ugander, L. Backstrom,  and J. Kleinberg, in Proc. 22nd Internat. Conf. on World Wide Web, WWW ’13 (ACM, New York, NY, USA, 2013) pp. 1307–1318. Middendorf et al. (2005) M. Middendorf, E. Ziv,  and C. H. Wiggins, Proceedings of the National Academy of Sciences 102, 3192 (2005). Barabási and Oltvai (2004) A.-L. Barabási and Z. N. Oltvai, Nature Reviews Genetics 5, 101 (2004). Humphries and Gurney (2008) M. D. Humphries and K. Gurney, PLoS ONE 3, 1 (2008). Ings et al. (2009b) T. C. Ings, J. M. Montoya, J. Bascompte, N. Blüthgen, L. Brown, C. F. Dormann, F. Edwards, D. Figueroa, U. Jacob, J. I. Jones, R. B. Lauridsen, M. E. Ledger, H. M. Lewis, J. M. Olesen, F. F. Van Veen, P. H. Warren,  and G. Woodward, Journal of Animal Ecology 78, 253 (2009b). Blondel et al. (2008) V. D. Blondel, J.-L. Guillaume, R. Lambiotte,  and E. Lefebvre, Journal of Statistical Mechanics: Theory and Experiment 2008, P10008 (2008).
Detecting Planets Around Very Low Mass Stars with the Radial Velocity Method A. Reiners (1,3), J. L. Bean (1,4), K. F. Huber (2), S. Dreizler (1), A. Seifahrt (1), & S. Czesla (2) ((1) Institut für Astrophysik, Georg-August-Universität, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany; (2) Hamburger Sternwarte, Gojenbergsweg 112, 21029 Hamburg, Germany; (3) Emmy Noether Fellow; (4) Marie Curie International Incoming Fellow) Abstract The detection of planets around very low-mass stars with the radial velocity method is hampered by the fact that these stars are very faint at optical wavelengths, where most high-precision spectrometers operate. We investigate the precision that can be achieved in radial velocity measurements of low-mass stars in the near-infrared (nIR) $Y$-, $J$-, and $H$-bands, and we compare it to the precision achievable in the optical, assuming comparable telescope and instrument efficiencies. For early-M stars, radial velocity measurements in the nIR offer no or only marginal advantage in comparison to optical measurements. Although these stars emit more flux in the nIR, the richness of spectral features in the optical outweighs the flux difference. We find that nIR measurements can be as precise as optical measurements in stars of spectral type $\sim$M4, and from there the nIR gains in precision towards cooler objects.
We study potential calibration strategies in the nIR, finding that a stable spectrograph with a ThAr calibration can offer enough wavelength stability for m s${}^{-1}$ precision. Furthermore, we simulate the wavelength-dependent influence of activity (cool spots) on radial velocity measurements from optical to nIR wavelengths. Our spot simulations reveal that the radial velocity jitter does not decrease as dramatically towards longer wavelengths as often thought. The jitter strongly depends on the details of the spots, i.e., on the spot temperature and the spectral appearance of the spot. At low temperature contrast ($\sim 200$ K), the jitter decreases towards the nIR by up to a factor of ten, but it decreases substantially less for larger temperature contrasts. Forthcoming nIR spectrographs will allow the search for planets with a particular advantage in mid- and late-M stars. Activity will remain an issue, but simultaneous observations at optical and nIR wavelengths can provide strong constraints on spot properties in active stars. Subject headings: stars: activity — stars: low-mass, brown dwarfs — stars: spots — techniques: radial velocities 1. Introduction The search for extrasolar planets with the radial velocity technique has led to close to 400 discoveries of planets around cool stars of spectral type F–M (see http://www.exoplanet.eu). Fourteen years after the seminal discovery of 51 Peg b by Mayor & Queloz (1995), the radial velocity technique is still the most important technique to discover planetary systems, and radial velocity measurements are required to confirm planetary candidates found by photometric surveys, including the satellite missions CoRoT and Kepler. Most planets found around solar-type stars are approximately as massive as Jupiter and orbit their parent star at around 1 AU or below.
In order to find Earth-mass planets in orbit around a star, the radial velocity technique either has to achieve a precision on the order of 0.1 m s${}^{-1}$, or one has to search around less massive stars, which show a larger effect due to the gravitational influence of a companion. Therefore, low-mass M dwarfs are a natural target for the search for low-mass planets with the radial velocity technique. In addition, there seems to be no general argument against the possibility of life on planets that are in close orbit around an M dwarf (inside the habitable zone; Tarter et al., 2007), so these stars are becoming primary targets for the search for habitable planets. So far, only a dozen M dwarfs are known to harbor one or more planets (e.g., Marcy et al., 1998; Udry et al., 2007). The problem with the detection of radial velocity variations in M dwarfs is that although they make up more than 70% of the stars in the Galaxy, including our nearest neighbors, they are also intrinsically so faint that the required data quality cannot usually be obtained in a reasonable amount of time, at least not in the spectral range in which most high-resolution spectrographs operate. M dwarfs have effective temperatures of 4000 K or less, and they emit the bulk of their spectral energy at wavelengths redward of 1 $\mu$m. The flux emitted by an M5 dwarf at a wavelength of 600 nm is about a factor of 3.5 lower than the flux emitted at 1000 nm. Thus, infrared spectroscopy can be expected to be much more efficient in measuring radial velocities of low-mass stars. A second limit on the achievable precision of radial velocity measurements is the presence of apparent radial velocity variations caused by corotating features and temporal variations of the stellar surface. Such features may influence the line profiles, which can introduce a high noise level or be misinterpreted as radial velocity variations due to the presence of a planet.
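As a rough illustration of the flux argument, one can compare Planck functions at 600 nm and 1000 nm for an assumed ~3000 K photosphere. A blackbody is only a crude proxy for an M5 dwarf: deep molecular absorption bands in the optical push the measured ratio to the factor of ~3.5 quoted above, whereas the blackbody alone gives roughly a factor of two.

```python
import math

# Planck spectral radiance B_lambda(T); used here only as a crude
# stand-in for a real M-dwarf spectrum.  The 3000 K temperature is an
# assumed round number for a mid-M photosphere.
H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
K = 1.381e-23   # Boltzmann constant [J/K]

def planck(wavelength_m, temperature_k):
    """Blackbody spectral radiance in W m^-3 sr^-1."""
    x = H * C / (wavelength_m * K * temperature_k)
    return (2.0 * H * C**2 / wavelength_m**5) / math.expm1(x)

# Ratio of emitted flux at 1000 nm vs 600 nm for a ~3000 K blackbody
ratio = planck(1000e-9, 3000.0) / planck(600e-9, 3000.0)
```

The computed blackbody ratio is near 1.9, which underlines that the larger factor of 3.5 for a real M5 dwarf comes from the spectral energy distribution of the star, not from the blackbody envelope alone.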
Flares on active M dwarfs might not pose a substantial problem to radial velocity measurements (Reiners, 2009), but corotating spots probably do. Desort et al. (2007) modeled the effect of a cool spot for observations of sun-like stars at optical wavelengths. Their results hint at a decrease of spot-induced radial velocity signals towards longer wavelengths. Martín et al. (2006) report the decrease of a radial velocity signal induced by a starspot on the very active M9 dwarf LP 944-20 ($v\,\sin{i}\approx 30$ km s${}^{-1}$); the amplitude they find is 3.5 km s${}^{-1}$ at optical wavelengths but only an rms dispersion of 0.36 km s${}^{-1}$ at 1.2 $\mu$m. Thus, observations at infrared wavelength regions may substantially reduce the effect of stellar activity on radial velocity measurements, which would allow the detection of low-mass planets around active stars. In this paper, we investigate the precision that can be reached in radial velocity measurements at infrared wavelength regions. The first goal of our work is to study the detectability of planets around low-mass stars using infrared spectrographs. We focus on the wavelength bands $Y,J$, and $H$ because these are the regions where spectrographs can be built at relatively low cost. Extending the wavelength coverage to the $K$-band imposes much larger costs because of severe cooling requirements and large gaps in the spectral format. We chose to exclude this case from the current paper. Our second motivation is to see to what extent the radial velocity signal of active regions can be expected to vanish at infrared wavelength regions. So far, only rough estimates based on contrast arguments are available, and no detailed simulation has been performed. The paper is organized as follows. In §2, we introduce the spectral characteristics of M dwarfs and compare model spectra used for our simulations to observations. 
In §3, we calculate radial velocity precisions that can be achieved at different wavelengths, and we investigate the influence of calibration methods. In §4, we simulate the effect of starspots on radial velocities in the infrared, and §5 summarizes our results. 2. Near infrared spectra of M dwarfs M dwarfs emit the bulk of their flux at near-infrared (nIR) wavelengths between 1 and 2 $\mu$m. However, high-resolution spectrographs operating in the nIR are not as ubiquitous as their counterparts in the optical. Therefore, our knowledge about M dwarf spectra past 1 $\mu$m is far behind what is known about the visual wavelength range. Another complication is that strong absorption bands of water from the Earth’s atmosphere cover large fractions of the nIR wavelength range. Only some discrete portions of the region 1 – 2 $\mu$m can be used for detailed spectroscopic work. For our investigation of radial velocity precision in M dwarfs, we concentrate on the three bands, $Y$, $J$, and $H$ between the major water absorption bands. We show in Fig. 1 the transmission spectrum of the Earth’s atmosphere together with band identification that we use in the following. We modeled the telluric features using the FASCODE algorithm (Clough et al., 1981, 1992), a line-by-line radiative transfer code for the Earth’s atmosphere, and HITRAN (Rothman et al., 2005) as a database for molecular transitions. Our model is based on a MIPAS222http://www-atm.physics.ox.ac.uk/RFM/atm/ nighttime model atmosphere with updated temperature, pressure and water vapor profiles for the troposphere and lower stratosphere based on GDAS333http://www.arl.noaa.gov/ready/cmet.html meteorological models for Cerro Paranal (see Seifahrt et al., submitted to A&A). Fig. 1 shows that the wavelength regions between $Y$, $J$, and $H$ are not useful for spectroscopic analysis of stars. Furthermore, it is important to note that the bands themselves are not entirely free of telluric absorption lines. 
While the $Y$-band shows relatively little telluric contamination, the $J$- and $H$-bands have regions of significant telluric absorption that must be taken into account when radial velocities are measured at these wavelengths. In our calculations, we mask out the regions that are affected by significant telluric absorption (see §3.1). Observed low-resolution ($R\approx 2000$) infrared spectra of cool stars and brown dwarfs were presented by McLean et al. (2003) and Cushing et al. (2005). High-resolution infrared spectra of M dwarfs are still very rare; McLean et al. (2007) and Zapatero-Osorio et al. (2007) show $J$-band spectra of M, L, and T dwarfs taken with NIRSPEC at $R\approx 20,000$. Hinkle et al. (2003) report measurements of rotational velocities from short portions of $R\approx 50,000$ $K$-band spectra taken with the Phoenix infrared spectrograph, and Wallace & Hinkle (1996) show $K$-band spectra of cool stars at $R\approx 45,000$. Short $H$-band spectra of cool stars with a focus on OH lines are presented by O’Neal et al. (2001); in particular, they show that OH is present in M dwarfs but weaker than in giants and subgiants. The high-resolution spectrum that is probably closest to the appearance of an M dwarf spectrum, and that fully covers the wavelength range considered in this paper ($Y,J$, and $H$) is the spectrum of a sunspot. Wallace & Livingston (1992) and Wallace et al. (1998) presented spectra of a sunspot in the visual and nIR regions, and in the nIR up to 5.1 $\mu$m, respectively. However, the sunspot spectrum does not resemble the spectra of M dwarfs at high detail, and it cannot be used to investigate a range of temperatures among the M dwarfs. The sunspot spectrum is probably closest to an early-M dwarf with low gravity (Maltby et al., 1986), and we use it below only as a cross-check for the existence of absorption features predicted in our models.
To investigate the radial velocity precision that can be reached using nIR spectra of M dwarfs, we used model spectra calculated with the PHOENIX code (e.g., Hauschildt et al., 1999; Allard et al., 2001). This strategy has the advantage that we can use the full wavelength range without any restrictions imposed by the unavailability of empirical data, and that we can model stars of any temperature and surface gravity. The caveat, however, is that the model spectra may not adequately resemble real stellar spectra, in particular at the high spectral resolution we require. We show in Fig. 2 three model spectra of cool dwarfs with spectral types of approximately M3, M6, and M9, i.e., effective temperatures of T = 3500, 2800, and 2400 K, respectively. We use models with a surface gravity of $\log{g}=4.75$ throughout this paper. The spectra reproduce the data shown by McLean et al. (2003) and Cushing et al. (2005) reasonably well when degraded to low resolution. In Fig. 3, we show short parts of the model spectra in the $Y$-, $J$-, and $H$-bands in greater detail for the case of an M3 star (T = 3500 K). The three spectral windows in Fig. 3 all cover the same number of resolution elements, assuming the spectral resolution is the same in all bands. Obviously, the $Y$-band is very rich in structure. There are only a few deep lines in the $J$-band, and the number of sharp features in the $H$-band is lower than in the $Y$-band, but higher than in the $J$-band. The model shown in Fig. 3 is at the hot end of the targets we are interested in. Comparison to Fig. 2 indicates that with lower temperature, the features in the $Y$-band become stronger. The same is true for features in the $J$-band, but the $H$-band becomes relatively featureless at late-M spectral types, which is mainly due to the disappearance of OH lines. In Fig. 3, we also show the sunspot spectrum in comparison to the M3 model spectrum.
The FeH lines in the sunspot $Y$-band are somewhat weaker than in M dwarfs and are strongly affected by Zeeman broadening (Reiners & Basri, 2006), the latter leading to significantly reduced depths for many lines in the sunspot spectrum. In the $J$- and the $H$-bands, the main features and their depths are relatively well described, so that our model spectra likely reproduce the stellar spectrum relatively well, at least at this temperature. Another comparison of our models to an observed high-resolution spectrum of a mid-M dwarf is shown in Fig. 4, where we compare an observed spectrum of GJ 1002 (M5.5) to a model spectrum at a temperature of 3200 K. The observed spectrum was obtained by us using CRIRES (Käufl et al., 2006) at the Very Large Telescope (VLT) and reduced following standard infrared reduction procedures including bias- and sky-subtraction, flatfielding, and wavelength calibration using ThAr lines. The model lines (predominantly from FeH) provide a good fit to the observed spectrum of GJ 1002. Thus, we feel confident that our model spectra reproduce the FeH band in the $Y$-band in M dwarfs reasonably well. More generally, all these comparisons suggest that the PHOENIX model spectra are accurate enough for simulations of radial velocities measured from the nIR spectra of M dwarfs.

3. Radial velocity precision

3.1. Calculation of radial velocity precision

The achievable precision of a radial velocity measurement from a spectrum of given quality was calculated by Connes (1985) and Butler et al. (1996). This value is a general limit for radial velocity measurements; it reflects the limited intrinsic information content of any observed spectrum. For example, if a star exhibits more lines in a certain wavelength range, this will lead to a higher precision compared to a star with fewer lines. Similarly, a set of shallow, perhaps blended, lines will put a weaker constraint on the radial velocity than a set of narrow, deep lines. In their Eq. 6, Butler et al.
(1996) provide a formula for the radial velocity uncertainty as a function of intrinsic information content, instrument resolution ($R$), and signal-to-noise ratio ($S/N$). The uncertainty is inversely proportional to the weighted root-sum-square of the spectrum derivative, which means that the precision is higher the more deep and sharp features the spectrum contains. In the following, we first calculate the intrinsic radial velocity precision achievable in a stellar spectrum at given $R$ and $S/N$. In a second step, we ask how this precision is affected by the limited precision of the wavelength calibration (§3.3). In order to compare the potential of different wavelength bands, we take a model spectrum, assume an $S/N$ at one given wavelength, and calculate the $S/N$ at other wavelengths according to the spectral flux distribution. We assume constant instrument efficiency at all wavelengths, and let the signal quality vary according to the stellar flux distribution. We also assume constant spectral resolution and sampling in the different wavelength ranges. When calculating the $S/N$ from the spectral flux distribution, it is important to use the number of photons per spectral bin instead of the energy per spectral bin provided by the PHOENIX model spectra. To convert energy flux into photon flux, we need to divide by the energy carried per photon, which is $hc/\lambda$. Neglecting the constants, this means we need to multiply the spectrum by $\lambda$. For our calculations, we assume an average $S/N$ of 100 at a resolving power of $R=60,000$ in the $Y$-band. To account for the spectral resolution, we apply an artificial broadening to the spectra so that the lines become wider, and we assume that the instrument collects a constant number of photons per wavelength interval, i.e., more photons are available per bin at lower spectral resolution.
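The photon-limited precision described above can be sketched numerically. The following is a minimal illustration (not the authors' code) of the optimal-weighting idea of Connes (1985) and Butler et al. (1996), written in the per-pixel form of Bouchy et al. (2001); the synthetic Gaussian line and all function names are our own assumptions:

```python
import numpy as np

C = 2.99792458e8  # speed of light in m/s

def rv_precision(wl, flux_norm, snr):
    """Photon-limited RV uncertainty (m/s) of a spectrum.

    wl: wavelength grid; flux_norm: normalized spectrum (continuum = 1);
    snr: continuum signal-to-noise per pixel.  Per-pixel weights follow
    W_i = wl_i^2 (dA/dwl)^2 / A_i with A_i the detected photon counts,
    and sigma_RV = c / sqrt(sum W_i): deep, sharp features dominate.
    """
    counts = flux_norm * snr**2            # photon counts per pixel
    dcounts = np.gradient(counts, wl)      # spectrum derivative
    w = wl**2 * dcounts**2 / np.maximum(counts, 1e-12)
    return C / np.sqrt(w.sum())

# Converting a model energy flux to photon flux before computing S/N:
# N_photons ~ F_energy * wl / (h c), i.e. multiply the spectrum by wl.

# A toy spectrum: one Gaussian absorption line on a flat continuum.
wl = np.linspace(999.0, 1001.0, 2000)      # nm
line = 1.0 - 0.5 * np.exp(-0.5 * ((wl - 1000.0) / 0.02)**2)
sigma_rv = rv_precision(wl, line, snr=100.0)
```

Because the weights scale with the photon counts, doubling the $S/N$ halves the uncertainty, and a deeper or sharper line always lowers it, which is exactly the behavior the formula encodes.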
Note that this approach ignores the possibly higher losses, e.g. from a narrower slit or other design differences, that achieving higher resolution likely entails in practice. That is, a real spectrograph would likely deliver more photons per wavelength interval when used in a lower-resolution mode. As this effect is likely a lower-order consideration than the effect of varying dispersion, and is also difficult to predict in a general sense, we do not consider it here.

3.2. Impact of telluric lines

Near-infrared wavelength regions are more severely affected by telluric contamination than the optical range. The limiting effect of telluric contamination lies in the difficulty of removing it to a level at which it does not affect the radial velocity measurement on a m s${}^{-1}$ level. For example, Bean et al. (2009) report a limit of 5 m s${}^{-1}$ that can be reached when spectral regions with telluric lines are included in the analysis. To reach higher precision, contaminated regions need to be excluded from the analysis. In our calculations, we mask the regions affected by telluric lines and do not use them for our analysis of radial velocity precision. We chose to ignore all spectral regions that fall in the vicinity of $\pm$30 km s${}^{-1}$ around a telluric absorption of 2 % or more, which is approximately the span introduced by maximum barycentric velocity differences. The telluric transmission spectrum was artificially broadened to a spectral resolving power of $R=100,000$ before the 2 % cutoff was applied. The exact regions that fulfill this criterion also depend on the atmospheric conditions. The wavelength bands together with the fractions ignored due to contamination with telluric lines are summarized in Table 1. The telluric contamination in the $V$-band is rather small (2 %), and it does not badly affect the $Y$-band either ($<20$ %).
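The $\pm$30 km s${}^{-1}$ masking criterion just described can be sketched as follows (a simplified illustration with invented variable and function names, not the actual pipeline):

```python
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def telluric_mask(wl, transmission, depth_cut=0.02, v_span=30.0):
    """Return True for usable pixels: those farther than +/- v_span
    (km/s) from any telluric absorption of depth_cut (2 %) or more."""
    contaminated = transmission < 1.0 - depth_cut
    rejected = np.zeros(wl.size, dtype=bool)
    for i in np.flatnonzero(contaminated):
        # reject pixels within the barycentric velocity span of pixel i
        rejected |= np.abs(wl / wl[i] - 1.0) <= v_span / C_KMS
    return ~rejected

# Example: one telluric line at 1270 nm in an otherwise clean region
wl = np.linspace(1265.0, 1275.0, 4000)                       # nm
trans = 1.0 - 0.3 * np.exp(-0.5 * ((wl - 1270.0) / 0.05)**2)
usable = telluric_mask(wl, trans)
```

The velocity span matters because over a year the barycentric motion of the observer shifts the stellar spectrum across the (fixed) telluric lines, so a pixel that is clean today can be contaminated six months later.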
On the other hand, roughly half of the spectrum in the $J$- and $H$-bands (55 % and 46 %, respectively) is affected by telluric contamination. Discarding half of the spectral range hence degrades the theoretically achievable precision by a factor of $\sim\sqrt{2}$, which is still not an order-of-magnitude effect but can no longer be neglected. At these wavelengths, one has to decide whether the RV precision is higher after discarding the contaminated spectral range, or whether one should attempt to correct for the telluric contamination. We note that the same exercise for the $K$-band (2050–2400 nm) shows that about 80 % of this wavelength range is affected by significant telluric contamination. Clearly, radial velocity work in the $K$-band faces very severe limitations due to telluric lines.

3.3. Wavelength calibration methods

A critical part of radial velocity measurements is the wavelength calibration. When considering such measurements in new spectral regimes, the influence of the available calibration precision must be considered in addition to the intrinsic information content of the spectra of the stars of interest. There are generally two types of wavelength standards that are used very successfully at optical wavelengths for high-precision radial velocity work: 1) the calibration signal is injected into the spectrograph following a separate path, e.g., using a ThAr emission lamp (Pepe et al., 2002), and 2) the calibration signal is imposed on the stellar light using a gas absorption cell (e.g. I${}_{2}$, Butler et al., 1996). At nIR wavelengths, neither technique has so far been used at the same level of efficiency as in the optical, mainly because no instruments are yet available that can provide spectral resolving power and wavelength coverage comparable to instruments operating at optical wavelengths. However, such spectrographs are foreseen for the future, and we estimate the precision of both techniques and their current applicability in the nIR.
We cast these calculations in terms of equivalent radial velocity precision so that they may be compared directly to the estimated information content of the stellar spectra. For our purpose, we have not considered calibration using a laser comb (Steinmetz et al., 2008) or an etalon, which essentially follow the same principle as the ThAr calibration. A laser comb or etalon that covers the full desired wavelength range would largely solve the problem of inadequate wavelength coverage. Unfortunately, neither is yet available, and we restrict the discussion to the ThAr and gas cell options.

3.3.1 ThAr lamp

In the nIR, the ThAr method could in principle simply be copied from the optical regime. Standard ThAr lamps produce fewer lines in the nIR, but that does not necessarily mean that the precision over large wavelength regions must be lower than at optical wavelengths. For example, Kerber et al. (2008) provide a list of ThAr lines that includes more than 2400 lines in the range 900 – 4500 nm. Lovis et al. (2006) found that the Ar lines produced by a ThAr lamp are unsuitable for high-precision wavelength calibration because they show high intrinsic variability, on the order of tens of m s${}^{-1}$, between different lamps. Nevertheless, Kerber et al. (2008) show that in the wavelength range we consider here, the fraction of Ar lines in the ThAr lamp spectrum is only on the order of $\sim 15$ %, and these authors also discuss that the pressure sensitivity only appears in the high-excitation lines of Ar I. So although there are fewer lines than in the optical, the still high number of possible lines suggests that a ThAr lamp should be evaluated as a possible wavelength calibration. To estimate the calibration precision that can be reached with a ThAr spectrum in a given wavelength interval, we count the number of lines contained in this interval in the list of Kerber et al.
(2008), and we estimate the uncertainty in determining the position of every line (converted to radial velocity units) according to its intensity. We assume that we take only one exposure of the ThAr lamp, and we scale the line intensities to a given dynamic range. The range ultimately used for our calculations was selected to achieve a high number of useful lines while losing as few lines as possible due to detector saturation. We quadratically add the uncertainties of all lines to calculate the total uncertainty of a ThAr calibration in the chosen spectral region. “Saturated” lines are not taken into account, but we note that infrared detectors offer advantages over CCDs in this regard. One advantage of infrared detectors is that saturated pixels do not bleed into neighboring pixels as in CCDs. Therefore, although a particular calibration line may saturate some pixels, the problem would be localized to the pixels the line falls on, and the signals recorded for lines falling on neighboring pixels would be unaffected. In practice then, some lines may be allowed to saturate, and thus be ignored during the wavelength calibration, if a higher overall signal would result in a net gain of useful lines. A second issue is that individual pixels in infrared detectors can be read out at different rates, so there is the possibility of an increased dynamic range for a given exposure level. However, it is unclear what influence the differences in response and noise properties between pixels read out at different rates would have. So for this work we take the conservative approach that lines which would saturate given our selected dynamic range cannot be used for the wavelength calibration. Our estimation of the utility of a ThAr lamp for wavelength calibration in order to obtain high-precision radial velocities is based on the assumption that the calibration is only needed to track minor changes in the instrument response.
This is true for an isolated and stabilized instrument with a nearly constant pupil illumination (e.g. HARPS, Mayor et al., 2003). The main reason is that the light from a ThAr lamp will not pass through the instrument in exactly the same way and/or at exactly the same time as the light from a star. Therefore, the utility of ThAr as a calibration for radial velocity measurements will be reduced from that discussed here for instruments that experience significant temporal variations.

3.3.2 Gas absorption cell

The gas absorption technique requires a gas that provides a large number of sharp spectral lines. Currently, no gas has been identified that produces lines in the full nIR wavelength range at a density comparable to the I${}_{2}$ spectrum in the optical (I${}_{2}$ only provides lines in the optical wavelength range), although there have been some investigations into gases suitable for small windows in this region. D’Amato et al. (2008) report on a gas absorption cell using the hydrogen halides HCl, HBr, and HI, which have absorption lines between 1 and 2.4 $\mu$m, but these gases produce only very few lines, so that a calibration of the wavelength solution and the instrumental profile can only be done over a small fraction of the spectrum. Mahadevan & Ge (2009) discuss various options for nIR gas cells and conclude that the gases H${}^{13}$C${}^{14}$N, ${}^{12}$C${}_{2}$H${}_{2}$, ${}^{12}$CO, and ${}^{13}$CO together could provide useful calibration in the $H$-band. Another gas that provides some utility in the nIR is ammonia (NH${}_{3}$), which exhibits a dense forest of spectral lines in the $K$-band. We are currently using an ammonia cell for a radial velocity planet search with CRIRES at the VLT. More details about the cell and these radial velocity measurements are contained in another paper (Bean et al., 2009).
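The ThAr estimate above, and the scaling of calibration precision with wavelength coverage used below for the gas cell, both reduce to simple quadrature bookkeeping: independent line-position uncertainties combine in inverse quadrature, and precision improves with the square root of the usable range. A minimal sketch (function names are our own):

```python
import numpy as np

def combine_lines(sigmas_ms):
    """Total calibration uncertainty from independent line-position
    uncertainties (m/s), added in inverse quadrature:
    sigma_tot = (sum sigma_i^-2)^(-1/2)."""
    s = np.asarray(sigmas_ms, dtype=float)
    return 1.0 / np.sqrt(np.sum(s**-2))

def scale_with_coverage(sigma_ref, width_ref_nm, width_nm):
    """Calibration precision improves with the square root of the
    usable wavelength range."""
    return sigma_ref * np.sqrt(width_ref_nm / width_nm)

# 400 equally good lines of 100 m/s each combine to 5 m/s in total
total = combine_lines([100.0] * 400)
# 9 m/s over 50 nm corresponds to 9/sqrt(2) m/s over 100 nm
scaled = scale_with_coverage(9.0, 50.0, 100.0)
```

This also makes the saturation trade-off quantitative: dropping a few saturated lines barely changes the inverse-quadrature sum if the remaining lines become deeper and better measured.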
For an estimate of the calibration precision that could potentially be achieved over a broad wavelength range using a gas cell, we assume that a gas or combination of gases might be found with absorption lines similar to ammonia, but throughout the entire nIR region. We calculate the radial velocity precision from a section of an ammonia cell spectrum with various $S/N$ and $R$ just as for the stellar spectra (i.e. using Eq. 6 in Butler et al., 1996). The basis for this calculation is a 50 nm section of a spectrum of our CRIRES ammonia cell (18 cm length and filled with 50 mb of ammonia) measured with an FTS at extremely high resolution ($R\sim 700,000$). Convolved to a resolving power of $R=100,000$ and $S/N$ = 100, the calculated precision is 9 m s${}^{-1}$. We note that this value would change if a longer cell or higher gas pressure were used, but this change would be relatively small for conceivable cells. To extrapolate the precision estimate to arbitrary wavelength regions, we scale the calculated value with the square root of the corresponding region size. For example, the uncertainty from a region of 100 nm would be a factor of $\sqrt{2}$ less than that calculated for the 50 nm region. We emphasize that our estimates of the performance of a gas cell in the nIR are purely hypothetical. Currently, in the wavelength range under consideration, we know of no real gas that shows as dense a forest of lines in the $Y$-, $J$-, and $H$-bands as ammonia does in the $K$-band.

3.4. Radial velocity precision in low-mass stars

The precisions that can be achieved using (model) spectra of stars at 3500 K (M3), 2800 K (M6), and 2400 K (M9) in different wavelength regions are summarized in Table 2 and shown in Fig. 5. For each case, we first calculated the intrinsic precision over the wavelength bin under consideration. As explained in §3, we assume an $S/N$ of 100 at 1 $\mu$m at $R=60,000$ and scale the signal quality according to the spectral flux distribution and spectral resolution.
The differences between the radial velocity precisions in different wavelength bands are dominated by the differences in $S/N$ and in the appearance of spectral features in these bands (see Figs. 2 and 3). A secondary effect is the length of the spectral range, which differs between the bands, but it is not always the band with the largest coverage that yields the highest precision. We show the situation for three different values of $R=100,000$, 80,000, and 60,000. The $S/N$ given in Table 2 varies according to the number of photons per pixel, which decreases at higher spectral resolution because the wavelength bins per pixel become smaller. The $S/N$ is always comparable between the three nIR bands, but the optical wavelength range provides an $S/N$ that is about a factor of two smaller for the M3 star, a factor of five smaller for the M6 star, and a factor of ten smaller for the M9 star. In addition to the intrinsic precision, we show the precision achievable if an imperfect wavelength calibration is considered. The additional uncertainty due to ThAr or gas cell calibration (see §3.3) leads to somewhat higher limits for what can be achieved in a real observation. We show no ThAr or gas cell values for the $V$-band because here the wavelength calibration is not the critical factor for the situations investigated in this paper. The question Fig. 5 tries to answer is what the highest attainable precision of a radial velocity measurement is if a given star is observed in different wavelength regions and at different spectral resolutions, under the assumption of the same exposure time, telescope size, and instrument throughput for all setups. For an early-M star (M3), the highest precision is still reached in the $V$-band, although the $Y$-band does not perform much worse. For the given choice of parameters, the highest obtainable precision in the $V$-band at $R=100,000$ is roughly 2.5 m s${}^{-1}$, and in the $Y$-band it is $\sim 3.8$ m s${}^{-1}$.
The $J$- and $H$-bands are worse, with only $\sim 16$ m s${}^{-1}$ and $8$ m s${}^{-1}$ precision, respectively, at the highest resolution. In general, in the absence of rotation, higher precision is obtained for higher spectral resolving power (as an approximation, precision scales linearly with $S/N$ but quadratically with $R$; if a constant number of photons is assumed, $S/N$ scales down with $\sqrt{R}$, and as a result, the precision scales approximately with $R^{3/2}$). We discuss the limits to the precision in rotating stars in §3.5. A remarkable feature of our precision calculations for $T=3500$ K is that although the flux level in the visual wavelength range is much lower than the flux around 1 $\mu$m and redder, the radial velocity precision is not worse. This is because the optical spectrum of an early-M dwarf is extremely rich in narrow features. At nIR wavelengths, the number of features is much lower, so the attainable precision is lower, too. The same explanation holds for the comparison between the nIR $Y$-, $J$-, and $H$-bands. The low precision obtainable in the $J$-band is due to the lack of sharp and deep spectral features in that wavelength range (compare Figs. 2 and 3). At lower temperature, $T=2800$ K (M6), the overall precision (at the same $S/N$, which in general means after longer exposures) has improved in the nIR bands in comparison to the optical, because now the $S/N$ is much higher in the nIR regions. The $Y$- and $J$-band precisions improve considerably in comparison to the M3 star, which is also due to the appearance of FeH bands (see Cushing et al., 2005). The $H$-band precision at 2800 K is comparable to that for the 3500 K spectrum. Now, the $V$-band performs worse than the $Y$-band, but it still yields a remarkably high precision: although the flux level in the $V$-band is about an order of magnitude below the flux at nIR wavelengths, the richness in sharp features can almost compensate for this.
The $V$-band precision is only about 30 % worse than the $Y$-band precision, and it still yields much better precision than the $J$- and the $H$-bands. Finally, in the M9 star at $T=2400$ K, all three nIR bands outperform the $V$-band because of the high flux ratio between the nIR and the optical range. Still, the $Y$-band provides the highest precision, about a factor of two better than the $J$- and the $H$-bands. We now consider the effects of limited precision in the wavelength calibration using the ThAr lamp or a gas cell (shown in Fig. 5 as open diamonds and crosses, respectively). The ThAr calibration apparently can provide a very reliable calibration that introduces only a few percent loss in precision. Of course, in a real spectrograph, effects like wavelength stability over time additionally limit the precision that can be reached (Lovis & Pepe, 2007); this effect has not been taken into account here. Nevertheless, our calculations show that enough suitable ThAr lines are available and that a wavelength solution at nIR wavelengths ($Y$ – $H$-bands) based on ThAr lines is a reliable calibration that can in principle be expected to work almost as successfully as in the optical wavelength range. In contrast, the calibration using the virtual gas cell as a reference yields a much worse result, in particular at short wavelengths such as in the $Y$-band. In order to make the gas-cell calibration provide the same accuracy as a calibration using a ThAr lamp (in a stabilized spectrograph), a gas is needed that provides more and deeper lines than NH${}_{3}$ provides in the $K$-band. So far, all known gases provide many fewer lines, so that the currently achievable precision turns out to be significantly below what can be achieved with ThAr. We note that in order to make the gas cell calibration work, the spectrum must have a minimum $S/N$ allowing a reliable reconstruction of the instrument profile.
This means a typical minimum $S/N$ of $\sim 100$ is required for the gas cell method. Thus, using a gas cell for the wavelength calibration, low-$S/N$ spectra such as in the M6 and M9 cases in the $V$-band considered above could not be used.

3.5. The influence of rotation

3.5.1 Distribution of rotation in M dwarfs

Higher resolution spectra only offer an advantage for radial velocity measurements if the stars exhibit sharp lines (see also Bouchy et al., 2001). Field mid- and late-M dwarfs can be very rapid rotators with broad and shallow spectral lines. For example, Reiners & Basri (2008) show that rapid rotation is more frequent in cooler M dwarfs. We have collected measurements of rotational velocities from Delfosse et al. (1998); Mohanty & Basri (2003); Reiners & Basri (2008) and Reiners & Basri (submitted to ApJ), and we show the cumulative distributions of $v\,\sin{i}$ for early-, mid-, and late-M stars in Fig. 6. All early-M dwarfs (M1 – M3) in the samples are rotating more slowly than $v\,\sin{i}=7$ km s${}^{-1}$, which means that early-M dwarfs are very good targets for high-precision radial velocity surveys. Among the mid-M dwarfs (M4 – M6), about 20 % of the stars rotate faster than $v\,\sin{i}=10$ km s${}^{-1}$, and this fraction goes up to about 50 % in the late-M dwarfs (M7 – M9).

3.5.2 Rotational broadening and radial velocity precision

We show in Fig. 7 the $Y$-band precision of radial velocity measurements that can be achieved in a 3000 K star (M5) as a function of projected rotational velocity at different spectral resolutions ($R$ = 20,000, 60,000, 80,000, and 100,000). All assumptions about $S/N$ and $R$ are as explained above. As expected, at high rotation velocities ($v\,\sin{i}>30$ km s${}^{-1}$), the precisions achieved with spectrographs operating at different $R$ are hardly distinguishable. Only if the line width of the rotating star is narrower than the instrumental profile does higher resolution yield higher precision.
However, for $R>60,000$, the difference in precision is relatively small even at very slow rotation. At a rotation velocity of $v\,\sin{i}=10$ km s${}^{-1}$, the precision is roughly a factor of 3 lower than the precision in a star with $v\,\sin{i}=1$ km s${}^{-1}$, and $v\,\sin{i}=6$ km s${}^{-1}$ brings down the precision by about a factor of 2 compared to $v\,\sin{i}=1$ km s${}^{-1}$.

4. Radial velocity variations from starspots

In this Section, we investigate the influence of starspots on the measurement of radial velocities. A similar study was performed by Desort et al. (2007), but for sun-like stars in the optical wavelength regime. We extend this study to cooler stars and nIR wavelengths.

4.1. Spot properties

Radial velocity variations can be mimicked by surface features corotating with the star. We know from the Sun that magnetic activity can cause spots that are significantly cooler than the photosphere, and much larger spots are found on stars more active than the Sun (in general, rapidly rotating stars are more active). However, the temperature difference between starspots and the corresponding “quiet” photosphere remains a rather poorly known parameter, especially for very active stars. The coolest temperatures in large sunspots can be $\sim\,2000$ K lower than the temperature in the surrounding photosphere (Solanki, 2003). Observed as a star, however, the Sun would show little such signal because sunspots cover only a very small fraction of the surface ($<1\%$). O’Neal et al. (2001, 2004) reported spot temperatures up to 1500 K below photospheric temperatures in active G- and K-type stars. These spots cover between 10 and 50 % of the stellar surface. Strassmeier & Rice (1998) reported spot temperatures in an active K dwarf that are $\sim 800$ K below the effective photospheric temperature, based on Doppler imaging.
From the available observations, no general prediction of the occurrence and properties of spots on stellar surfaces is possible. In particular, no observational information is available on spot distributions on M dwarfs, which have atmospheres significantly different from those of sun-like stars, and may also have different magnetic field topologies (Donati et al., 2008; Reiners & Basri, 2009). Therefore, we investigate a set of different photosphere and spot temperature combinations in the following. Before we investigate apparent radial velocity shifts using a detailed model (§4.2), we consider a “toy model” of an ideal spectral line composed of signal from two regions: the “quiet” surface of a rotating star, and a co-rotating, cool spot. To demonstrate the influence of the temperature contrast and of differing spectral features in a spot, we generate the line profiles sampled over a complete rotational period and compute the apparent radial velocity shift for each rotational phase by fitting the profiles with a Gaussian. An example radial velocity curve is given in Fig. 8. We report the amplitude $K$ of the radial velocity curve occurring during a full rotation (Desort et al. (2007) report the peak-to-peak amplitude, which is twice the value we are using).

4.1.1 Contrast

One crucial parameter for the influence of a starspot on a spectral line, and thus on the star’s measured radial velocity, is the flux contrast between the quiet surface and the spot at the wavelength of interest. In general, one expects the radial velocity amplitude due to a spot to be smaller for lower contrast and vice versa. We illustrate the effect of a cool spot on the line profile and the measured radial velocity shift in Fig. 9, where the situation for a spot covering 2 % of the visible surface is shown for two different temperatures (contrasts). To investigate the influence of contrast over a wide range of parameters, we used the toy model described above.
We assumed a blackbody surface brightness according to a surface temperature $T_{0}$, and we subtracted the amount of flux that originates in a spot covering 2 % of the stellar surface. We then added the flux originating in the spot at a temperature $T_{1}$. This spot has the same line profile as the photosphere, which means that we assume a constant line profile for the whole star. For our example, we chose a rotational velocity of $v\,\sin{i}=2$ km s${}^{-1}$. In Fig. 10, we show the wavelength-dependent flux ratio between the quiet photosphere and the spot (the contrast, upper panel), and the resulting apparent radial velocity shift from the toy model calculations (lower panel). We show cases for three photosphere temperatures $T_{0}$ (5700 K, 3700 K, and 2800 K); for each $T_{0}$, we show a case with a small temperature difference, $T_{0}-T_{1}=200$ K, and one with a larger temperature difference of $(T_{0}-T_{1})/T_{0}=0.35$. For small temperature differences (left panel of Fig. 10), the flux ratio between photosphere and spot decreases by approximately a factor of two over the range 500 – 1800 nm, while the radial velocity (RV) signal decreases by roughly a factor between two and three. The RV signal is highest for the lowest $T_{0}$ because the relative difference between $T_{1}$ and $T_{0}$ is much larger than in the case of $T_{0}=5700$ K. The cases of large temperature contrast (right panel in Fig. 10) produce flux ratios $>100$ in the coolest star at 500 nm. At 1800 nm the flux ratio decreases to a value around 5, i.e., a factor of 20 lower than at 500 nm. This is much larger than for the cases with small temperature contrast, where the flux ratio only decreases by a factor of two or less. On the other hand, the RV signal does not change as dramatically as the flux ratio.
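The wavelength-dependent contrast in the upper panel of Fig. 10 follows directly from the ratio of two Planck functions. A sketch under the same blackbody assumption as the toy model (function names are our own):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(wl_m, temp):
    """Blackbody spectral radiance B_lambda(T)."""
    return (2.0 * H * C**2 / wl_m**5) / np.expm1(H * C / (wl_m * KB * temp))

def spot_contrast(wl_nm, t_phot, t_spot):
    """Flux ratio photosphere/spot at a given wavelength (nm)."""
    wl_m = np.asarray(wl_nm, dtype=float) * 1e-9
    return planck(wl_m, t_phot) / planck(wl_m, t_spot)

# Coolest model star of Fig. 10: T0 = 2800 K with (T0 - T1)/T0 = 0.35
r_blue = spot_contrast(500.0, 2800.0, 0.65 * 2800.0)   # > 100
r_red = spot_contrast(1800.0, 2800.0, 0.65 * 2800.0)   # ~ 5
```

Evaluating the ratio at the two ends of the 500 – 1800 nm range reproduces the behavior quoted in the text: the contrast exceeds 100 at 500 nm and drops to around 5 at 1800 nm for the large-contrast 2800 K case.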
For large temperature contrasts, the absolute values of the RV signal are larger than in the case of low contrast, but the slope of RV with wavelength is much shallower; the change is well below a factor of two in all three modeled stars. The explanation for this is that the large contrast in flux ratio implies that the spot does not contribute a significant amount to the full spectrum of the star at any of these wavelengths. Therefore, a relatively small change of the flux ratio with wavelength has no substantial effect on the RV signal. If, on the other hand, the flux ratio is on the order of two or less, a small decrease in the ratio with wavelength means that the larger contribution from the spot can substantially change the RV signal. Thus, a significant wavelength dependence of an RV signal induced by a cool spot can only be expected if the temperature difference between the quiet photosphere and the spot is not too large.

4.1.2 Line profile in the spot

Line profile deviations that cause radial velocity variations do not depend solely on the temperature contrast but also on how the spectral features themselves depend on temperature; a different effective temperature generally corresponds to a different spectrum, not just to a scaling factor in the emitted flux. For example, a spectral line variation, and hence a radial velocity shift, can also appear at zero temperature contrast if the spectral line depths differ between the spot and the photosphere (as for example in hot stars with abundance spots; Piskunov & Rice, 1993). In Fig. 11, we consider a similar situation as in Fig. 9. Here, the temperature contrast between spot and photosphere is held constant, but we show three cases in which the depths of the spectral line originating in the spot are different. The three spot profiles are chosen so that the line depths are 0.5, 1.0, and 1.5 times the line depth of the photospheric line. Fig.
11 illustrates that if spectral features are weaker in the spot than in the surrounding photosphere, the radial velocity shift (bottom panel) is larger than in the case of identical line strengths. If, on the other hand, the spectral features become stronger, as for example in some molecular absorption bands, the spot signature weakens and the radial velocity distortion is smaller. In our example of a stronger spot feature, this effect entirely cancels out the effect of the temperature contrast, so that the radial velocity signal of the spot is zero although a cool spot is present. 4.2. Spot simulations After considering the general effects of starspots in the previous Section, we now discuss results of a more sophisticated simulation. Here, we calculate a full spectrum by integrating over the surface of a spotted star using spectra from a model-atmosphere code. The resulting radial velocity shift is estimated by cross-correlation against a spectrum unaffected by spots. 4.2.1 Line profile integration Our model spectra for a spotted star were calculated using a discrete surface with $500\times 500$ elements in longitude and latitude, each covering the same fraction of the surface. All surface elements are characterized by a ‘local’ temperature; unspotted areas are assigned the photospheric temperature, and spotted areas contribute spectra pertaining to atmospheres with lower temperatures. The associated spectra $f(\lambda,T)$ were generated with PHOENIX for all temperatures used. Depending on the rotational phase $p$, the visibility $A_{i}$ of each surface element $i$ is calculated, considering projection effects. We determine the radial velocity shift $v_{{\rm rad},i}$ for each surface element due to the stellar rotation. The resulting model spectrum $f_{p}(\lambda)$ for the spotted star is $$f_{p}(\lambda)=\frac{\sum\limits_{i=1}^{N}f(\lambda,T_{i},v_{{\rm rad},i})\,A_{i}}{\sum\limits_{i=1}^{N}A_{i}},$$ (1) where $N$ is the total number of elements.
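The integration in Eq. (1) can be sketched in a few lines of code. The toy implementation below is our own illustration, not the PHOENIX-based pipeline of the paper: it replaces the model-atmosphere spectra with a single Gaussian absorption line, uses a coarse grid, and measures the apparent RV from the line centroid rather than by cross-correlation. It shows how a dark spot on one hemisphere skews the disk-integrated profile and mimics a radial velocity shift.

```python
import numpy as np

C_KMS = 299792.458                        # speed of light, km/s
WL0, WIDTH, DEPTH = 1000.0, 0.05, 0.5     # line center (nm), width (nm), depth

def local_profile(wl, v_rad):
    """Single Gaussian absorption line, Doppler-shifted by v_rad (km/s)."""
    center = WL0 * (1.0 + v_rad / C_KMS)
    return 1.0 - DEPTH * np.exp(-0.5 * ((wl - center) / WIDTH) ** 2)

def disk_integrated(wl, vsini=2.0, n=80, spot=None):
    """Eq. (1): visibility-weighted sum over surface elements (i = 90 deg,
    no limb darkening).  `spot` = (lon, lat, radius, flux) in radians."""
    lons = np.linspace(-np.pi / 2, np.pi / 2, n)   # visible hemisphere
    lats = np.linspace(-np.pi / 2, np.pi / 2, n)
    total, norm = np.zeros_like(wl), 0.0
    for lat in lats:
        for lon in lons:
            a_i = np.cos(lat) ** 2 * np.cos(lon)   # cell area times projection
            v_i = vsini * np.sin(lon) * np.cos(lat)
            flux = 1.0
            if spot and np.hypot(lon - spot[0], lat - spot[1]) < spot[2]:
                flux = spot[3]                     # darker spot
            total += a_i * flux * local_profile(wl, v_i)
            norm += a_i
    return total / norm

def centroid_rv(wl, spec):
    """Apparent RV (km/s) from the centroid of the absorption profile."""
    absorption = spec.max() - spec
    lam = (wl * absorption).sum() / absorption.sum()
    return C_KMS * (lam - WL0) / WL0

wl = np.linspace(999.0, 1001.0, 400)
rv_quiet = centroid_rv(wl, disk_integrated(wl))
# dark spot on the receding hemisphere pulls the centroid blueward
rv_spot = centroid_rv(wl, disk_integrated(wl, spot=(0.5, np.radians(30), 0.3, 0.3)))
```

With the spot dimming the receding side, `rv_spot` comes out negative at the level of a few tens of m s${}^{-1}$, while the spotless star gives essentially zero, qualitatively matching the behavior described in the text.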
In all of our calculations the model star has an inclination of $i\,=\,90\arcdeg$, and for simplicity we assume a linear limb darkening coefficient $\epsilon\,=\,0$ (no limb darkening). Using no limb darkening slightly overestimates the radial velocity signal but captures the qualitative behavior that we are interested in. Stellar spots are considered to be circular and located at $0\arcdeg$ longitude and $+30\arcdeg$ latitude. We calculated the RV signal introduced by a temperature spot on a rotating star. We chose the same star/spot temperature pairs as in the contrast calculations in the foregoing Section, but we used detailed atmosphere models and PHOENIX spectra to calculate the RV signal over the wavelength range 500 – 1800 nm. That means that our calculations include the effects of both the contrast and differences in the spectral features between atmospheres of different temperatures. 4.2.2 Results from spot simulations The results of our calculations are shown in Fig. 12. As in Fig. 10, we show six different stars, and the temperature combinations are identical. The spot size is 1 % of the projected surface. We compute RV amplitudes for 50 nm wide sections. For each model star, we show four cases for rotational velocities of $v\,\sin{i}=2,5,10$, and 30 km s${}^{-1}$. In general, the trends seen in the detailed model are consistent with our results from the toy model of a single line taking into account only the flux ratio between photosphere and spot. As long as the flux ratio is relatively small (200 K, left panel in Fig. 12), the RV amplitude strongly depends on wavelength due to the wavelength dependence of the contrast. The strongest gradient in RV amplitude occurs between 500 and 800 nm; the RV signal decreases by almost a factor of 10 in the lowest temperature model where the flux gradient is particularly steep over this wavelength range. The decrease occurs at somewhat longer wavelengths in the cooler model stars than in the hotter models.
In the model stars with a high flux ratio between photosphere and spot (right panel in Fig. 12), the variation in RV amplitude with wavelength is very small. The very coolest model shows a few regions of very low RV amplitude, but the general decline is not substantial. The absolute RV signal in a Sun-like star with $T_{0}=5700$ K at 500 nm comes out to be $\sim 40$ m s${}^{-1}$ at a spot temperature of $T_{1}=3700$ K on a star rotating at $v\,\sin{i}=5$ km s${}^{-1}$. This is consistent with the result of Desort et al. (2007), who reported a peak-to-peak amplitude of $\sim 100$ m s${}^{-1}$, i.e., an “amplitude” of 50 m s${}^{-1}$, in a similar star. Over the wavelength range they investigated (roughly 500 – 600 nm), Desort et al. found that the RV amplitude decreases by about 10 %. We have not calculated the RV amplitude over bins as small as those in their work, and our calculation only has two bins in this narrow wavelength range. However, we find that the decrease Desort et al. (2007) reported between 500 nm and 600 nm does not continue towards longer wavelengths. In our calculations, the RV amplitude does not decrease by more than $\sim 20\,\%$ between 500 and 1800 nm. Similar results apply for higher rotation velocities and lower photosphere temperatures. The models with lower flux ratio, $T_{0}-T_{1}=200$ K, show more of an effect with wavelength, although the absolute RV signal is of course smaller. The RV amplitude in a star with $T_{0}=3700$ K, a spot temperature of $T_{1}=3500\,$K, and $v\,\sin{i}=5$ km s${}^{-1}$ is slightly above 10 m s${}^{-1}$ at 500 nm and $\sim 4$ m s${}^{-1}$ at 1000 nm. Above 1000 nm, no further decrease in RV amplitude is observed. The behavior, again, is similar in other stars with low flux ratios and with different rotation velocities. For the cool models with large temperature differences, the individual RV amplitudes show relatively large scatter between individual wavelength bins.
We attribute this to the presence of absorption features in some areas, while other areas are relatively free of absorption features. The temperature dependence of the depth of an absorption feature is important for the behavior of the spot signature in the spectral line. An example of this effect can be observed around 1100 nm, where the spectrum is dominated by absorption of molecular FeH that becomes stronger with lower temperature. 4.2.3 Comparison to LP 944-20 We can compare our simulations to the optical and nIR radial velocity measurements in LP 944-20 reported by Martín et al. (2006). At optical wavelengths, they found a periodical radial velocity variation of $K=3.5$ km s${}^{-1}$, while in the nIR they could not find any periodical variation and report an rms of 0.36 km s${}^{-1}$. The approximate effective temperature of an M9 dwarf like LP 944-20 is around 2400 K; we compare it to our coolest model, which has a temperature of $T_{0}=2800$ K. The radial velocity amplitude of 3.5 km s${}^{-1}$ at visual wavelengths is much higher than the results of our simulations, but this can probably be accounted for by the different size of the surface inhomogeneities (only 1% in the simulations; using our “toy model” we estimate that a spot covering $\sim 10$ % of the surface can generate a velocity amplitude of a few km s${}^{-1}$). The observations of Martín et al. (2006) suggest a ratio between optical and nIR radial velocity variations larger than a factor of 10. The largest ratio in our set of simulations is in fact of that order; the radial velocity jitter in our model with $T_{0}=2800$ K and $T_{1}=2600$ K diminishes by about a factor of ten between 600 nm and 1200 nm. Extrapolating from the results of the hotter models, in which the ratio between visual and nIR jitter is smaller, we cannot exclude that this ratio may become even larger in cooler stars.
Our model with larger temperature contrast ($T_{0}=2800$ K, $T_{1}=1800$ K) produces a smaller ratio between visual and nIR jitter. Thus, our simulations are not in contradiction to the results reported by Martín et al. (2006). A ratio of ten or more in jitter between optical and nIR measurements seems possible if the temperature contrast is fairly small (100–200 K). We note that no radial velocity variations were actually detected in LP 944-20 in the nIR, which means that at the time of the nIR observations, it simply could have experienced a phase of lower activity and reduced spot coverage. In order to confirm the effects of a wavelength-dependent contrast on radial velocity measurements, observations carried out simultaneously at visual and nIR wavelengths are required. Martín et al. (2006) propose that weather effects like variable cloud coverage may be the source of the radial velocity jitter in the visual and of the wavelength-dependent contrast. This would probably mean very small temperature contrast but a strong effect in wavelength regions with spectral features that are particularly sensitive to dust. Our simulations do not predict the wavelength dependence of pure dust clouds, but at this point we see no particular reason why the jitter from purely dust-related clouds should be much stronger in the visual than in the nIR. To model this, a separate simulation would be needed taking into account the effects of dust on the spectra of ultracool dwarfs. 5. Summary and Conclusions We have investigated the possibility of measuring radial velocity variations of low-mass stars at nIR wavelengths ($Y,J,H$). The spectral flux distribution of red stars favors long wavelengths because higher $S/N$ can be achieved in comparison to optical wavelengths. On the other hand, the spectral information content of the spectra (presence of sharp and strong spectral features) is lower at longer wavelengths, and the efficiency of calibration methods is not well established.
For early M dwarfs, nIR radial velocities do not offer any advantage in terms of photon-limited precision. Indeed, the fact that measurement methods in the optical are much more advanced than those in the nIR means that there is not really any motivation for nIR radial velocities from this perspective. Around spectral type M4–5, however, the achievable precision becomes higher in the nIR; $Y$-band observations can be expected to achieve a radial velocity precision higher than observations at optical wavelengths. For late-M dwarfs, the $Y$-band outperforms the $V$-band precision by about a factor of 4–5. Observations in the $J$- and $H$-bands are a factor of 2–5 worse than the $Y$-band across the M star spectral types. They are only superior to the $V$-band observations in very-late-M dwarfs. Our investigation into the effects of activity on radial velocity measurements showed that a crucial parameter for the wavelength dependence of jitter is the temperature contrast between the spot and the photosphere. If the spot temperature is only a few hundred Kelvin below the photospheric temperature, the induced radial velocity signal is on the order of several m s${}^{-1}$ in the optical and becomes weaker towards longer wavelengths. Note that the absolute size of this effect depends on the size of the spot (1 % in our simulation) and will grow with larger spot size. High temperature contrast, on the other hand, causes a much larger radial velocity signal that only weakly depends on wavelength. For example, in M stars with spots only $\sim 200$ K cooler than the photosphere, the jitter at nIR wavelengths is roughly a factor of ten lower than at optical wavelengths, but this factor is smaller than two if the temperature contrast is 1000 K or higher.
Unfortunately, not much is known about spot temperatures, particularly in low-mass stars, but our results show that simultaneous observations at optical and nIR wavelengths can provide useful constraints on the spot temperatures of active stars. Another important factor for the effect of active regions on radial velocity measurements is the difference between the spectral features appearing in the photosphere and in the spots. Conventional estimates usually assume that both are comparable, but given the perhaps relatively large temperature contrasts and the strong temperature dependence of the molecular features, this may not be the case. Thus, large differences in the radial velocity signal between different spectral regions can occur if spectral features of different temperature sensitivity appear. The radial velocity signal may not vanish as expected at nIR wavelengths, and it seems unlikely that strong radial velocity signals observed at optical wavelengths can vanish in the nIR, particularly in very active stars in which relatively large temperature contrast is expected. The advantage of a nIR spectrograph over an optical spectrograph becomes obvious in the late-M dwarfs. Our results point towards a spectrograph covering the wavelength range 500–1150 nm, which captures the region where the RV precision is highest for all M spectral classes, and where the wavelength dependence of jitter shows the largest gradient in order to distinguish between orbital motion and spots. Such a spectrograph should be designed to be very stable and could use a ThAr lamp for calibration. In the future, other calibration strategies might become available (e.g., a laser frequency comb; Steinmetz et al., 2008), but the ThAr method can in principle provide the sensitivity required to detect Earth-mass planets around low mass stars. We thank Peter H. Hauschildt for promptly providing PHOENIX model spectra. A.R. acknowledges research funding from the DFG as an Emmy Noether fellow. A.R. and A.S.
received support from the DFG under RE 1664/4-1. J.B. has received research funding from the European Commission’s Seventh Framework Programme as an International Incoming Fellow (PIFF-GA-2009-234866). References Allard et al. (2001) Allard, F., Hauschildt, P.H., Alexander, D.R., Tamanai, A., & Schweitzer, A., 2001, ApJ, 556, 357 D’Amato et al. (2008) D’Amato, F., et al., 2008, SPIE, 7014, 70143V Bean et al. (2009) Bean, J., Seifahrt, A., Hartmann, H., Nilsson, H., Wiedemann, G., Reiners, A., Dreizler, S., & Henry, T., submitted to ApJ, arXiv:0911.3148 Bouchy et al. (2001) Bouchy, F., Pepe, F., & Queloz, D., 2001, A&A, 374, 733 Butler et al. (1996) Butler, R.P., Marcy, G.W., Williams, E., McCarthy, C., Dosanjh, P., & Vogt, S.S., 1996, PASP, 108, 500 Connes (1985) Connes, P., 1985, Ap&SS, 110, 211 Clough et al. (1981) Clough, S. A., Kneizys, F. X., Rothman, L. S., & Gallery, W. O. 1981, Proc. SPIE, 277, 152 Clough et al. (1992) Clough, S. A., Iacono, M. J., & Moncet, J.-L. 1992, J. Geophys. Res., 97, 15761 Cushing et al. (2005) Cushing, M.C., Rayner, J.T., & Vacca, W.D., 2005, ApJ, 623, 1115 Delfosse et al. (1998) Delfosse, X., Forveille, T., Perrier, C., & Mayor, M., 1998, A&A, 331, 581 Desort et al. (2007) Desort, M., Lagrange, A.-M., Galland, F., Udry, S., & Mayor, M., 2007, A&A, 473, 983 Donati et al. (2008) Donati, J.-F., Morin, J., Petit, P., et al., 2008, MNRAS, 390, 545 Hauschildt et al. (1999) Hauschildt, P.H., Allard, F., & Baron, E., 1999, ApJ, 512, 377 Hinkle et al. (2003) Hinkle, K.H., Wallace, L., Valenti, J., & Tsuji, T., 2003, IAU Symp. 215, 213 Käufl et al. (2006) Käufl, H. U., et al., 2006, Msngr, 126, 32 Kerber et al. (2008) Kerber, F., Nave, G., & Sansonetti, C. J., 2008, ApJS, 178, 374 Lovis et al. (2006) Lovis, C., Pepe, F., Bouchy, F., et al., 2006, Proc. SPIE, Vol. 6269, 62690P Lovis & Pepe (2007) Lovis, C., & Pepe, F., 2007, A&A, 468, 1115 Mahadevan & Ge (2009) Mahadevan, S., & Ge, J., 2009, ApJ, 692, 1590 Maltby et al.
(1986) Maltby, P., Avrett, E.H., Carlsson, M., Kjeldseth-Moe, O., Kurucz, R.L., & Loeser, R., 1986, ApJ, 306, 284 Marcy et al. (1998) Marcy, G.W., Butler, R.P., Vogt, S.S., Fischer, D., & Lissauer, J.J., 1998, ApJ, 505, L147 Martín et al. (2006) Martín, E.L., Guenther, E., Zapatero Osorio, M.R., Bouy, & Wainscoat, R., 2006, ApJ, 644, L75 Mayor & Queloz (1995) Mayor, M., & Queloz, D., 1995, Nature, 378, 355 Mayor et al. (2003) Mayor, M., et al., 2003, Msngr, 124, 20 McLean et al. (2003) McLean, I.S., McGovern, M.R., Burgasser, A.J., Kirkpatrick, J.D., Prato, L., & Kim, S.S., 2003, ApJ, 596, 561 McLean et al. (2007) McLean, I.S., Prato, L., McGovern, M.R., Burgasser, A.J., Kirkpatrick, J.D., Rice, E.L., & Kim, S.S., 2007, ApJ, 658, 1217 Mohanty & Basri (2003) Mohanty, S., & Basri, G., 2003, ApJ, 583, 451 O’Neal et al. (2001) O’Neal, D., Neff, J.E., Saar, S.H., & Mines, J.K., 2001, AJ, 122, 1954 O’Neal et al. (2004) O’Neal, D., Neff, J.E., Saar, S.H., & Cuntz, M., 2004, AJ, 128, 1802 Pepe et al. (2002) Pepe, F., Mayor, M., Galland, F., Naef, D., Queloz, D., Santos, N.C., Udry, S., & Burnet, M., 2002, A&A, 388, 632 Piskunov & Rice (1993) Piskunov, N.E., & Rice, J.B., 1993, PASP, 105, 1415 Reiners (2009) Reiners, A., 2009, A&A, 498, 853 Reiners & Basri (2006) Reiners, A., & Basri, G., 2006, ApJ, 644, 497 Reiners & Basri (2008) Reiners, A., & Basri, G., 2008, ApJ, 684, 1390 Reiners & Basri (2009) Reiners, A., & Basri, G., 2009, A&A, 496, 787 Reiners & Basri (submitted to ApJ) Reiners, A., & Basri, G., submitted to ApJ Rothman et al. (2005) Rothman, L. S., et al. 2005, Journal of Quantitative Spectroscopy and Radiative Transfer, 96, 139 Seifahrt et al. (submitted to A&A) Seifahrt, A., Käufl, H.U., Bean, J., Richter, & Siebenmorgen, R., 2010, submitted to A&A Solanki (2003) Solanki, S.K., 2003, A&AR, 11, 153 Steinmetz et al. (2008) Steinmetz, T., Wilken, T., Araujo-Hauck, C., Holzwarth, R., Hänsch, T. W., Pasquini, L., Manescau, A., et al.
2008, Science, 321, 1335 Strassmeier & Rice (1998) Strassmeier, K.G., & Rice, J.B., 1998, A&A, 330, 685 Tarter et al. (2007) Tarter, J.C., et al., 2007, Astrobiology, 7, 30 Udry et al. (2007) Udry, S., Bonfils, X., Delfosse, X., et al., 2007, A&A, 469, L43 Wallace & Livingston (1992) Wallace, L., & Livingston, W., 1992, N.S.O. Technical Report #92-001 Wallace & Hinkle (1996) Wallace, L., & Hinkle, K.H., 1996, ApJS, 107, 312 Wallace et al. (1998) Wallace, L., Livingston, W., Bernath, P.F., & Ram, R.S., 1998, N.S.O. Technical Report #1998-002 Zapatero-Osorio et al. (2007) Zapatero-Osorio M.R., Martín, E.L., Béjar, V.J., Bouy, H., Deshpande, R., & Wainscoat, R.J., 2007, ApJ, 666, 1205
Dielectric black hole analogues Ralf Schützhold    Günter Plunien    and Gerhard Soff Institut für Theoretische Physik, Technische Universität Dresden, D-01062 Dresden, Germany Electronic address: [email protected] (November 19, 2020) Abstract Alternative to the sonic black hole analogues we discuss a different scenario for modeling the Schwarzschild geometry in a laboratory – the dielectric black hole. The dielectric analogue of the horizon occurs if the velocity of the medium with a finite permittivity exceeds the speed of light in that medium. The relevance for experimental tests of the Hawking effect and possible implications are addressed. PACS: 04.70.Dy, 04.62.+v, 04.80.-y, 42.50.Gy. At present we know four fundamental interactions in physics: the strong and the weak interaction, electromagnetism and gravitation. The first three forces are described by quantum field theories whereas the fourth one is governed by the laws of general relativity – a purely classical theory. In view of the success – and the excellent agreement with experimental data – of the electro-weak standard model (unifying the electromagnetic and the weak force at high energies) it is conjectured that all four interactions can be described by an underlying unified theory above the Planck scale. This fundamental description is expected to incorporate the four forces as low-energy effective theories. Despite various investigations during the last decades a satisfactory and explicit candidate for this underlying law is still missing. At present we can only consistently treat quantum fields, e.g. electromagnetism, in the presence of classical, i.e. externally prescribed, gravitational fields. This semi-classical treatment is expected to provide some insight into the structure of the underlying theory. One of the most striking consequences of this formalism is the Hawking effect [1] predicting the evaporation of black holes.
However, this prediction is faced with a conceptual difficulty: Its derivation is based on the assumption that the semi-classical treatment is valid at arbitrary scales. But in contrast to this presumption the decomposition into a classical gravitational sector and a quantum field sector is valid at low energies only. The investigation of high-energy effects requires some knowledge about the underlying theory including quantum gravity. In order to elucidate this point Unruh [2] suggested a scenario which displays a close similarity to that of the Hawking effect while the underlying physical laws are completely understood – the sonic black hole analogue. These analogues are based on the remarkable observation that the propagation of sound waves in flowing fluids is under appropriate conditions equivalent to that of a scalar field in curved space-times: The dynamics of the fluid is governed by the non-linear Euler equation $\dot{\mbox{\boldmath$v$}}+(\mbox{\boldmath$v$}\nabla)\mbox{\boldmath$v$}+{\nabla p}/{\varrho}=\mbox{\boldmath$f$}$, where $\mbox{\boldmath$v$}$ denotes the local velocity of the liquid, $\varrho$ its density, $p$ the pressure, and $\mbox{\boldmath$f$}$ the external force density, together with the equation of continuity $\dot{\varrho}+\nabla(\varrho\,\mbox{\boldmath$v$})=0$. Linearizing these equations around a given flow profile $\mbox{\boldmath$v$}_{0}$ via $\mbox{\boldmath$v$}=\mbox{\boldmath$v$}_{0}+\nabla\phi$ the scalar field $\phi$ of the small deviations (sound waves) satisfies the Klein-Fock-Gordon equation with an appropriate (acoustic) metric $g_{\rm ac}^{\mu\nu}(\mbox{\boldmath$v$}_{0},\varrho_{0})$, see e.g. [2, 3] $$\displaystyle\Box_{\rm ac}\phi=\frac{1}{\sqrt{-g_{\rm ac}}}\partial_{\mu}\left(\sqrt{-g_{\rm ac}}\,g_{\rm ac}^{\mu\nu}\,\partial_{\nu}\phi\right)=0\,,$$ (1) with $g_{\rm ac}={\rm det}(g^{\rm ac}_{\mu\nu})$. The acoustic horizon occurs if the velocity of the fluid exceeds the speed of sound within the liquid.
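For the reader's convenience, the explicit form of the acoustic metric, not written out above, may be quoted from the standard literature (Unruh [2], see also [3]; sign and normalization conventions vary between references). Up to an overall conformal factor the corresponding line element reads $$\displaystyle ds^{2}_{\rm ac}=\frac{\varrho_{0}}{c_{s}}\left[c_{s}^{2}\,dt^{2}-\left(d\mbox{\boldmath$r$}-\mbox{\boldmath$v$}_{0}\,dt\right)^{2}\right]\,,$$ where $c_{s}$ denotes the local speed of sound. The component $g_{00}^{\rm ac}\propto c_{s}^{2}-\mbox{\boldmath$v$}_{0}^{2}$ changes sign where $|\mbox{\boldmath$v$}_{0}|=c_{s}$, making the horizon condition stated in the text manifest.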
Many studies have been devoted to this topic after the original proposal by Unruh [2], see e.g. Refs. [3, 4, 5, 6] as well as references therein. E.g., in [6] the possibility of realizing an acoustic horizon within a Bose-Einstein condensate is addressed. More generally, Ref. [4] discusses the simulation of phenomena of curved space-times within super-fluids. However, the sonic analogues suffer from certain conceptual difficulties and problems: In order to cast the wave equation of sound into the above form (1) it is necessary to neglect the viscosity of the liquid. This might be justified for super-fluids, but in general not for normal liquids. Since the Hawking effect is a pure quantum radiation phenomenon, it can be observed for the acoustic analogues if and only if the quantum description of sound waves is adequate. The presence of friction and the resulting decoherence destroy the quantum effects and thus violate this presumption in general. In addition, the assumption of a stationary, regular, and laminar (irrotational) flow profile at the transition from subsonic to supersonic flow (which was used in the derivation) is questionable, see e.g. [5, 7]. Finally, we would like to emphasize that the sonic analogues incorporate the scalar (spin-zero) field $\phi$ and are not obviously generalizable to the electromagnetic (spin-one) field $A_{\mu}$. In the following we shall discuss an alternative scenario – where all these objections do not necessarily apply. Let us consider the following quantum system: Within the linear approximation the excitations of a medium are described by a bath of harmonic oscillators whose dynamics is governed by the Lagrangian density ${\cal L}_{\Phi}$. For reasons of simplicity we assume these degrees of freedom to be localized and not propagating. The linear excitations of the medium in its rest-frame are coupled to the microscopic electric field via ${\cal L}_{\mbox{\boldmath$\scriptstyle E$}\Phi}$.
We might also consider a coupling to the magnetic field whereby the medium would possess a non-trivial permeability in addition to its permittivity. The dynamics of the microscopic electromagnetic field itself is governed by ${\cal L}_{\mbox{\boldmath$\scriptstyle E$}\mbox{\boldmath$\scriptstyle B$}}$. Hence the complete Lagrangian density $\cal L$ of the quantum system under consideration is determined by $$\displaystyle{\cal L}$$ $$\displaystyle=$$ $$\displaystyle{\cal L}_{\mbox{\boldmath$\scriptstyle E$}\mbox{\boldmath$\scriptstyle B$}}+{\cal L}_{\mbox{\boldmath$\scriptstyle E$}\Phi}+{\cal L}_{\Phi}=\frac{1}{2}\left(\mbox{\boldmath$E$}^{2}-\mbox{\boldmath$B$}^{2}\right)$$ (2) $$\displaystyle+\sum\limits_{\alpha}\mbox{\boldmath$E$}\cdot\mbox{\boldmath$\chi$}_{\alpha}\,\Phi_{\alpha}+\frac{1}{2}\sum\limits_{\alpha}\left(\dot{\Phi}^{2}_{\alpha}-\Omega^{2}_{\alpha}\Phi^{2}_{\alpha}\right)\,.$$ $\mbox{\boldmath$E$}(t,\mbox{\boldmath$r$})$ and $\mbox{\boldmath$B$}(t,\mbox{\boldmath$r$})$ denote the microscopic electric and magnetic fields, respectively. The excitations of the medium are described by $\Phi_{\alpha}(t,\mbox{\boldmath$r$})$ where $\alpha$ labels the different vibration modes. The fundamental frequencies of the medium are indicated by $\Omega_{\alpha}$. $\mbox{\boldmath$\chi$}_{\alpha}$ denote the vector-valued and possibly space-time dependent coupling parameters (e.g. dipole moments). Now we turn to a macroscopic description via averaging over the microscopic degrees of freedom associated with the medium, i.e. the fields $\mbox{\boldmath$\Phi$}=\{\Phi_{\alpha}\}$. Assuming that the microscopic quantum state of the medium is properly represented by the path-integral with the usual regular measure ${\mathfrak{D}}\mbox{\boldmath$\Phi$}$ we may integrate out the microscopic degrees of freedom $\Phi_{\alpha}$ within this formalism. As a result we obtain an effective action ${\cal A}_{\rm eff}$ (see e.g.
[8]) accounting for the remaining macroscopic degrees of freedom $$\displaystyle\exp\left(i{\cal A}_{\rm eff}\right)=\frac{1}{Z_{\mbox{\boldmath$\scriptstyle\Phi$}}}\int{\mathfrak{D}}\mbox{\boldmath$\Phi$}\,\exp\left(i{\cal A}\right)\,.$$ (3) ${\cal A}$ denotes the original action described in Eq. (2) and $Z_{\mbox{\boldmath$\scriptstyle\Phi$}}$ is a constant normalization factor. After standard manipulations (linear substitution of $\Phi_{\alpha}$ and quadratic completion, see e.g. [8]) we may accomplish the path-integration (averaging over all field configurations) with respect to ${\mathfrak{D}}\mbox{\boldmath$\Phi$}$ explicitly, and finally we arrive at $$\displaystyle{\cal A}_{\rm eff}$$ $$\displaystyle=$$ $$\displaystyle{\cal A}_{\mbox{\boldmath$\scriptstyle E$}\mbox{\boldmath$\scriptstyle B$}}$$ (4) $$\displaystyle+\frac{1}{2}\sum\limits_{\alpha}\int d^{4}x\;\mbox{\boldmath$E$}\cdot\mbox{\boldmath$\chi$}_{\alpha}\left(\frac{\partial^{2}}{\partial t^{2}}+\Omega^{2}_{\alpha}\right)^{-1}\mbox{\boldmath$\chi$}_{\alpha}\cdot\mbox{\boldmath$E$}\,.$$ As expected from other examples of effective field theories [8], the effective action ${\cal A}_{\rm eff}$ is non-local owing to the occurrence of the inverse differential operator $(1+\partial^{2})^{-1}$. However, in analogy to the operator product expansion [8] we may expand this non-local quantity into a sum of local operators $(1+\partial^{2})^{-1}=\sum_{n}(-\partial^{2})^{n}$. For the propagation of photons with energies much smaller than the fundamental frequencies $\Omega_{\alpha}$ of the medium only the lowest term ($n=0$) of this asymptotic expansion yields significant contributions.
Neglecting the higher order terms we obtain the local and non-dispersive low-energy effective theory of the macroscopic electromagnetic field in analogy to a heavy-mass effective theory [8] $$\displaystyle{\cal L}_{\rm eff}={\cal L}_{\mbox{\boldmath$\scriptstyle E$}\mbox{\boldmath$\scriptstyle B$}}+\frac{1}{2}\mbox{\boldmath$E$}\cdot(\mbox{\boldmath$\varepsilon$}-\mbox{\boldmath$1$})\cdot\mbox{\boldmath$E$}\,.$$ (5) The second term contains the permittivity tensor $\mbox{\boldmath$\varepsilon$}$ of the medium which is introduced via $\mbox{\boldmath$\varepsilon$}=\mbox{\boldmath$1$}+\sum_{\alpha}\mbox{\boldmath$\chi$}_{\alpha}\otimes\mbox{\boldmath$\chi$}_{\alpha}/\Omega^{2}_{\alpha}$. In contrast to the microscopic fields in Eq. (2) here $\mbox{\boldmath$E$}(t,\mbox{\boldmath$r$})$ and $\mbox{\boldmath$B$}(t,\mbox{\boldmath$r$})$ are understood as the macroscopic electric and magnetic fields, respectively. For reasons of simplicity we assume the coupling parameters $\mbox{\boldmath$\chi$}_{\alpha}$ of the medium to be homogeneously and isotropically distributed resulting in a constant and scalar permittivity $\varepsilon$. Consequently the Lagrangian density assumes the simple form $$\displaystyle{\cal L}_{\rm eff}=\frac{1}{2}\left(\varepsilon\,\mbox{\boldmath$E$}^{2}-\mbox{\boldmath$B$}^{2}\right)\,.$$ (6) As expected, this is exactly the Lagrangian of the macroscopic electromagnetic field within a dielectric medium at rest possessing a permittivity $\varepsilon$. Its dynamics is governed by the macroscopic source-free Maxwell equations $\mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$B$}=0$, $\mbox{\boldmath$\nabla$}\cdot\mbox{\boldmath$D$}=0$, $\mbox{\boldmath$\nabla$}\times\mbox{\boldmath$H$}=\dot{\mbox{\boldmath$D$}}$, and $\mbox{\boldmath$\nabla$}\times\mbox{\boldmath$E$}=-\dot{\mbox{\boldmath$B$}}$ with $\mbox{\boldmath$D$}=\varepsilon\,\mbox{\boldmath$E$}$ and $\mbox{\boldmath$H$}=\mbox{\boldmath$B$}$ (no permeability). Obviously the Lagrangian in Eq.
(6) is not manifestly covariant. However, by virtue of Lorentz transformations it can be cast into a Poincaré invariant form via $$\displaystyle{\cal L}_{\rm eff}=-\frac{1}{4}\,F_{\mu\nu}\,F^{\mu\nu}-\frac{\varepsilon-1}{2}\,F_{\mu\nu}\,u^{\nu}\,F^{\mu\lambda}\,u_{\lambda}\,.$$ (7) $F_{\mu\nu}$ symbolizes the electromagnetic field strength tensor containing the macroscopic electric and magnetic fields $\mbox{\boldmath$E$}$ and $\mbox{\boldmath$B$}$. The signature of the Minkowski metric $g_{\mu\nu}^{\rm M}$ is chosen according to $g^{\mu\nu}_{\rm M}={\rm diag}(+1,-1,-1,-1)$. $u^{\mu}$ denotes the four-velocity of the medium and is related to its three-velocity $\mbox{\boldmath$\beta$}$ via $u^{\mu}=(1,\mbox{\boldmath$\beta$})/\sqrt{1-\mbox{\boldmath$\beta$}^{2}}$. If we now assume that the dielectric properties of the medium (strictly speaking, the coupling parameters $\mbox{\boldmath$\chi$}_{\alpha}$) do not change owing to a non-inertial motion (generating terms such as $\partial_{\mu}\,u_{\nu}$) we may generalize the above expression to arbitrarily space-time dependent four-velocities of the medium $u^{\mu}$, see e.g. [9]. As we shall demonstrate now, the propagation of the electromagnetic field in a flowing dielectric medium resembles many features of a curved space-time, see also [10, 11, 12]. Introducing the effective Gordon metric $$\displaystyle g^{\mu\nu}_{\rm eff}=g^{\mu\nu}_{\rm M}+(\varepsilon-1)\,u^{\mu}\,u^{\nu}$$ (8) we may rewrite the Lagrangian in Eq. (7) into a form associated with a curved space-time $$\displaystyle{\cal L}_{\rm eff}=-\frac{1}{4}\,F_{\mu\nu}\,g^{\mu\rho}_{\rm eff}\,g^{\nu\sigma}_{\rm eff}F_{\rho\sigma}\,.$$ (9) Inserting the expression (8) into the above equation (9) and exploiting the normalization of the four-velocity $u_{\mu}\,u^{\mu}=1$ together with the anti-symmetry of the tensor $F_{\mu\nu}=-F_{\nu\mu}$ we exactly recover the original formula (7).
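The equivalence of Eqs. (7) and (9) is easy to confirm numerically. The snippet below is our own check (the values of $\varepsilon$, $\mbox{\boldmath$\beta$}$ and the field strength tensor are arbitrary sample inputs, not taken from the text): it builds the Gordon metric of Eq. (8), evaluates both Lagrangians for a random antisymmetric $F_{\mu\nu}$, and verifies that they agree.

```python
import numpy as np

rng = np.random.default_rng(1)
eta = np.diag([1.0, -1.0, -1.0, -1.0])        # Minkowski metric, signature (+,-,-,-)
eps = 2.25                                     # sample permittivity

beta = np.array([0.2, -0.1, 0.3])              # sample three-velocity of the medium
gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
u = gamma * np.concatenate(([1.0], beta))      # four-velocity u^mu, u.u = 1
u_dn = eta @ u                                 # u_mu

A = rng.normal(size=(4, 4))
F_dn = A - A.T                                 # random antisymmetric F_{mu nu}
F_up = eta @ F_dn @ eta                        # F^{mu nu}

g_up = eta + (eps - 1.0) * np.outer(u, u)      # Gordon metric, Eq. (8)

# Eq. (9): -1/4 F_{mu nu} g^{mu rho} g^{nu sigma} F_{rho sigma}
L9 = -0.25 * np.einsum('mn,mr,ns,rs->', F_dn, g_up, g_up, F_dn)

# Eq. (7): -1/4 F_{mu nu} F^{mu nu} - (eps-1)/2 F_{mu nu} u^nu F^{mu lam} u_lam
L7 = (-0.25 * np.einsum('mn,mn->', F_dn, F_up)
      - 0.5 * (eps - 1.0) * np.einsum('mn,n,ml,l->', F_dn, u, F_up, u_dn))
```

The cross term quadratic in $(\varepsilon-1)$ vanishes by the antisymmetry of $F_{\mu\nu}$, which is why only the single correction term of Eq. (7) survives.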
Note that the representation of the field strength tensor via the lower indices is related to the four-vector potential $A_{\mu}$ by $F_{\mu\nu}=\partial_{\mu}\,A_{\nu}-\partial_{\nu}\,A_{\mu}$. The equation of motion assumes the compact form $\partial_{\mu}(\,g^{\mu\rho}_{\rm eff}\,g^{\nu\sigma}_{\rm eff}\,F_{\rho\sigma})=0$. For a rigorous identification we have to consider the associated Jacobi determinant $\sqrt{-g_{\rm eff}}$ accounting for the four-volume element. Fortunately the determinant of the metric in Eq. (8) is simply given by $|\det(g^{\mu\nu}_{\rm eff})|=\varepsilon>1$. This positive and constant factor can be eliminated by a scale transformation of the distances $dx^{\mu}$ or – alternatively – by re-scaling the four-vector potential $A_{\mu}$. In order to discuss the general properties of the Gordon metric it is convenient to calculate its inverse $$\displaystyle g_{\mu\nu}^{\rm eff}=g_{\mu\nu}^{\rm M}-\frac{\varepsilon-1}{\varepsilon}\,u_{\mu}\,u_{\nu}\,.$$ (10) The condition of an ergo-sphere $g_{00}^{\rm eff}=0$ (see e.g. [13]) is satisfied for $\mbox{\boldmath$\beta$}^{2}=1/\varepsilon$, i.e. if the velocity of the medium equals the speed of light in that medium. For a radially inward/outward flowing medium, i.e. $\mbox{\boldmath$\beta$}=f(r)\,\mbox{\boldmath$r$}$, an analysis of the geodesics (e.g. light rays) reveals that the ergo-sphere represents an apparent horizon, cf. [13]. An inward flowing medium corresponds to a black hole (nothing can escape) while an outward flowing medium represents a white hole (nothing can invade), see e.g. [3, 14]. Assuming an eternally stationary flow the apparent horizon coincides with the event horizon, cf. [13]. For a stationary flow the effective metric in Eq. (10) describes a stationary space-time.
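Both the inverse relation between Eqs. (8) and (10) and the determinant $|\det(g^{\mu\nu}_{\rm eff})|=\varepsilon$ can be verified numerically. The sketch below is our own consistency check, with arbitrary sample values of $\varepsilon$ and $\mbox{\boldmath$\beta$}$ standing in for a concrete flow profile:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])     # Minkowski metric, signature (+,-,-,-)
eps = 1.8                                   # sample permittivity
beta = np.array([0.3, 0.1, -0.2])           # sample three-velocity of the medium

gamma = 1.0 / np.sqrt(1.0 - beta @ beta)
u_up = gamma * np.concatenate(([1.0], beta))   # u^mu with u.u = 1
u_dn = eta @ u_up                              # u_mu

g_up = eta + (eps - 1.0) * np.outer(u_up, u_up)           # Eq. (8)
g_dn = eta - (eps - 1.0) / eps * np.outer(u_dn, u_dn)     # Eq. (10)

identity_check = g_up @ g_dn            # should be the 4x4 identity
det_check = abs(np.linalg.det(g_up))    # should equal eps
```

The determinant follows from the matrix determinant lemma, $\det(g^{\mu\nu}_{\rm eff})=\det(g^{\mu\nu}_{\rm M})\,(1+(\varepsilon-1)\,u_{\mu}u^{\mu})=-\varepsilon$, consistent with the constant factor eliminated by rescaling in the text.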
However, this metric can be cast into a static form $ds^{2}_{\rm eff}=g_{00}d\tilde{t\,}^{2}-d\tilde{r\,}^{2}/g_{00}-r^{2}d\Omega^{2}$ by virtue of an appropriate coordinate transformation $dt\rightarrow d\tilde{t}=dt+g_{01}dr/g_{00}$ as well as $dr\rightarrow d\tilde{r}=\sqrt{g_{01}^{2}-g_{00}g_{11}}\,dr$. Whereas the former expression in Eq. (10) corresponds to the Painlevé-Gullstrand-Lemaître metric (see e.g. [3, 14]), the latter form is equivalent to the well-known Schwarzschild representation. Since the Schwarzschild metric is singular at the horizon ($g_{00}=0$), the transformation $t\rightarrow\tilde{t}$ inherits this property. Thus the coordinate $\tilde{t}$ should be used with special care. Having derived the Schwarzschild representation of the dielectric black/white hole, its surface gravity [13] can be calculated simply via $2\kappa=(\partial g_{00}/\partial\tilde{r})_{(g_{00}=0)}$ and yields $$\displaystyle\kappa=\frac{1}{1-{\beta}^{2}}\,\left(\frac{\partial\beta}{\partial r}\right)_{\rm Horizon}\,.$$ (11) Independently of $\varepsilon$ it coincides – up to the relativistic factor $(1-\mbox{\boldmath$\beta$}^{2})^{-1}$ – with the surface gravity of the non-relativistic sonic analogues. The associated Hawking temperature [1] is (for $\beta\ll 1$) of the order of magnitude $$\displaystyle T_{\rm Hawking}=\frac{\kappa\,\hbar\,c}{4\pi\,k_{\rm B}}={\cal O}\left(\frac{\hbar\,c}{k_{\rm B}\,n\,R}\right)\,,$$ (12) where $\hbar$ denotes the reduced Planck constant, $c$ the speed of light in vacuum, and $k_{\rm B}$ Boltzmann's constant. $n=\sqrt{\varepsilon}$ is the index of refraction of the medium and $R$ symbolizes the Schwarzschild radius of the dielectric black/white hole. Hence the Hawking temperature is proportional to the speed of light in the medium $c/n$ divided by the Schwarzschild radius $R$, i.e. the inverse characteristic time scale of a light ray propagating within the dielectric black/white hole. 
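To put numbers to the order-of-magnitude estimate in Eq. (12), consider a toy configuration (the parameter values here are assumptions for illustration, not taken from the paper): a slow-light medium with refractive index $n=100$ and a millimetre-sized analogue horizon.

```python
# Toy estimate of the Hawking temperature in Eq. (12) for assumed parameters.
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light in vacuum, m/s
k_B = 1.380649e-23       # Boltzmann constant, J/K

n = 100.0                # index of refraction of the medium (assumed)
R = 1e-3                 # "Schwarzschild radius" of the analogue hole, m (assumed)

T_hawking = hbar * c / (k_B * n * R)   # order-of-magnitude scale of Eq. (12)
print(f"{T_hawking * 1e3:.1f} mK")     # 22.9 mK
```

Such a temperature in the tens-of-millikelvin range illustrates why ultra-slow-light media are attractive for this analogy.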
The dielectric analogues of the Schwarzschild geometry provide a conceptually clear scenario for studying the effects of quantum fields (${\cal L}_{\rm eff}$) under the influence of external conditions ($g_{\mu\nu}^{\rm eff}$) together with their relation to the underlying theory ${\cal L}$. The effective low-energy description ${\cal L}_{\rm eff}$ exhibits a horizon (a one-way membrane [13]) and the related thermodynamical implications (Hawking radiation [1]). Moreover, it allows for investigating the effects of a finite cut-off (the trans-Planckian problem) induced by the microscopic theory ${\cal L}$. By means of the analogy outlined below, these considerations might shed some light on the structure of quantum gravity and its semi-classical limit, since for the dielectric black holes the underlying physics is understood. Apart from the considerations above, there have already been a few discussions in the literature concerning the simulation of black holes by means of dielectric properties: The solid-state analogues proposed in Ref. [12] identify the permittivity $\varepsilon^{ij}$ and the permeability $\mu^{ij}$ tensors directly with the (singular) Schwarzschild metric via $\varepsilon^{ij}=\mu^{ij}=\sqrt{-g}\,g^{ij}/g_{00}$. As a result, the solid-state analogues require divergences of the material characteristics of the medium at rest in order to simulate a horizon ($g_{00}=0$) – in contrast to the model described in the present article. Even though such singular behavior might be realized in the case of a phase transition, the validity of the effective theory in the presence of these divergences is questionable. In addition, the genuine difference between the black hole and the white hole horizon (see e.g. [14]) cannot be reproduced by the scenario proposed in [12]. Apart from this model, a flowing dielectric medium with a finite permittivity was suggested in Ref. [11] in order to represent a so-called optical black hole. 
However, the Aharonov-Bohm scenario under consideration in Ref. [11] – where a pure swirling of the medium is assumed – does not exhibit a horizon and therefore cannot be regarded as a model of a black/white hole. This objection has already been raised in [15], see also [16]. Consequently it is not possible to apply the concepts of surface gravity and Hawking temperature. In contrast to the simulation of a curved space-time by a dielectric medium, it is also possible to consider the inverse identification: e.g., in Ref. [10] the propagation in a gravitational background was mapped to electromagnetism within a medium in flat space-time. However, the identification in [10] requires $g_{00}>0$ and thus does not incorporate space-times with a horizon. The experimental realization of the dielectric analogues might indeed become feasible in view of the recent experimental progress in generating ultra-slow light rays, see e.g. [17, 18, 19, 20, 21, 22]. These experiments exploit the phenomenon of electromagnetically induced transparency in order to slow down the velocity of light in atomic vapor. Although the microscopic theory of this phenomenon is not properly described by the Lagrangian in Eq. (2), it generates similar macroscopic effects. If an appropriate control field (a laser) acts on the medium, the group velocity of the signal field perpendicular to the control field reaches an order of magnitude of some meters per second – i.e. even below the speed of sound. In view of the non-destructive nature [22] of the propagation, i.e. the absence of loss, dissipation, and the resulting decoherence, one might imagine the observability of quantum field theoretical effects – e.g. the Hawking effect – in an advanced experiment. Very roughly, an experimental set-up for a dielectric black hole as depicted in the figure might be conceivable. 
Apart from the experimental challenge of simulating a black hole within a laboratory the scenario discussed in this article may help to understand the structure of the underlying theory including quantum gravity. References [1] S. W. Hawking, Nature 248, 30 (1974); Commun. Math. Phys. 43, 199 (1975). [2] W. G. Unruh, Phys. Rev. Lett. 46, 1351 (1981). [3] M. Visser, Class. Quant. Grav. 15, 1767 (1998); [4] G. E. Volovik, Superfluid analogies of cosmical phenomena, e-preprint: gr-qc/0005091, to appear in Phys. Rept. [5] S. Liberati, S. Sonego and M. Visser, Class. Quant. Grav. 17, 2903 (2000). [6] L. J. Garay, J. R. Anglin, J. I. Cirac, and P. Zoller, Phys. Rev. Lett. 85, 4643 (2000). [7] R. Schützhold, Phys. Rev. D 63, 024014 (2001). [8] K. G. Wilson, Phys. Rev. 179, 1499 (1969); T. Appelquist and J. Carazzone, Phys. Rev. D 11, 2856 (1975); T. Mannel, W. Roberts and Z. Ryzak, Nucl. Phys. B 368, 204 (1992); H. Georgi, Ann. Rev. Nucl. Part. Sci. 43, 209 (1993). [9] R. Schützhold, G. Plunien and G. Soff, Phys. Rev. A 58, 1783 (1998). [10] L. D. Landau and E. M. Lifschitz, Classical Field Theory, Course of Theoretical Physics Vol. 2 (Pergamon, London, 1959). [11] U. Leonhardt and P. Piwnicki, Phys. Rev. Lett. 84, 822 (2000). [12] B. Reznik, Phys. Rev. D 62, 044044 (2000). [13] S. W. Hawking and G. F. R. Ellis, The Large Scale Structure of Space-time, (Cambridge University Press, Cambridge, England, 1973); M. Visser, Lorentzian Wormholes: From Einstein to Hawking, (AIP Press, Woodbury, NY, 1995). [14] R. Schützhold, On the Hawking effect, e-preprint: gr-qc/0011047, to appear in Phys. Rev. D. [15] M. Visser, Phys. Rev. Lett. 85, 5252 (2000); [16] U. Leonhardt and P. Piwnicki, Phys. Rev. Lett. 85, 5253 (2000). [17] L. V. Hau, S. E. Harris, Z. Dutton, and C. H. Behroozi, Nature (London) 397, 594 (1999). [18] M. M. Kash, V. A. Sautenkov, A. S. Zibrov, L. Hollberg, G. E. Welch, M. D. Lukin, Y. Rostovtsev, E. S. Fry, M. O. Scully, Phys. Rev. Lett. 82, 5229 (1999). [19] D. 
Budker, D. F. Kimball, S. M. Rochester, and V. V. Yashchuk, Phys. Rev. Lett. 83, 1767 (1999). [20] M. Fleischhauer and M. D. Lukin, Phys. Rev. Lett. 84, 5094 (2000). [21] O. Kocharovskaya, Y. Rostovtsev, and M. O. Scully, Phys. Rev. Lett. 86, 628 (2001). [22] D. F. Phillips, A. Fleischhauer, A. Mair, R. L. Walsworth, and M. D. Lukin, Phys. Rev. Lett. 86, 783 (2001).
Simplified bond-hyperpolarizability model of second-harmonic-generation in Si(111): theory and experiment Hendradi Hardhienata (on leave from Theoretical Physics Division, Bogor Agricultural University, Jl. Meranti S, Darmaga, Indonesia) Center for surface and nanoanalytics (ZONA), Johannes Kepler University, Altenbergerstr. 69, 4040 Linz, Austria    Andrii Prylepa Christian Doppler laboratory for microscopic and spectroscopic material characterization, Johannes Kepler University, Altenbergerstr. 69, 4040 Linz, Austria Center for surface and nanoanalytics (ZONA), Johannes Kepler University, Altenbergerstr. 69, 4040 Linz, Austria    David Stifter Christian Doppler laboratory for microscopic and spectroscopic material characterization, Johannes Kepler University, Altenbergerstr. 69, 4040 Linz, Austria Center for surface and nanoanalytics (ZONA), Johannes Kepler University, Altenbergerstr. 69, 4040 Linz, Austria    Kurt Hingerl Center for surface and nanoanalytics (ZONA), Johannes Kepler University, Altenbergerstr. 69, 4040 Linz, Austria [email protected] Abstract Second-harmonic-generation (SHG) in centrosymmetric materials such as Si(111) is usually understood either from phenomenological theory or, more recently, using the simplified bond-hyperpolarizability model (SBHM) [G. D. Powell, J. F. Wang, and D. E. Aspnes, Phys. Rev. B 65, 205320 (2002)]. Although the SBHM is derived from a classical point of view, it has the advantage over the former that it gives, especially for lower-symmetry systems, a clear physical picture and a more efficient explanation of how the nonlinearity is generated. In this paper we provide a step-by-step description of the SBHM in Si(111) for the linear and second harmonic case. We present an SHG experiment on Si(111) and show how it can be modelled by summing up the contributions of the fields produced by the anharmonic motion of the charges along the bonds. 
1 Introduction Nonlinear optics is one of the most versatile tools to investigate surface properties, especially for the material class which possesses inversion symmetry. Serious theoretical efforts to explain nonlinearity in such systems have progressed significantly since the pioneering work of Bloembergen et al. [1]. A nonlinear optical probe has several advantages over other surface probes because the material damage and contamination associated with charged particle beams is eliminated, all pressure ranges are accessible, insulators can be studied without the problem of charging effects, and buried interfaces are accessible owing to the large penetration depth of the optical radiation [2]. In centrosymmetric materials such as silicon (Si) the bulk and surface nonlinear contributions are comparable [3], and changes in the surface properties such as surface deposition will significantly alter the nonlinear intensity profile [4]. Because centrosymmetric materials are important to present day technology (e.g. silicon semiconductors, silicon thin film sensors), understanding the surface properties and the physical mechanism behind them is critical, with much effort, both experimental and theoretical, having been invested. Perhaps the most notable development in understanding the creation of an outgoing $2\omega$ wave, or second-harmonic-generation (SHG), from centrosymmetric material surfaces and interfaces from the viewpoint of the phenomenological theory was performed by the group of Sipe et al. [5] and further by Luepke et al. [6]. The latter group calculated the nonlinear susceptibility for vicinal Si(111) by fitting a Fourier series to reproduce the four-polarization SHG intensities ($p$-incoming $p$-outgoing, $ps$, $sp$, and $ss$) obtained from their experiments. 
Unfortunately, already for a single atom with a relatively low symmetry such as silicon, the phenomenological theory requires many – seemingly non-independent – variables to be fitted, thus blurring the physical insights into the mechanism of SHG inside the material. As a response to this inefficiency, Aspnes et al. [7] developed an ingenious method termed the simplified bond-hyperpolarizability model (SBHM), based on the somewhat forgotten classical Ewald-Oseen extinction theory [8,9]. The theorem states that the electric field transmission and reflection can be obtained by direct superposition of dipoles inside the material rather than from a macroscopic boundary calculation [10]. The SBHM extends this classical view to cope with nonlinear optics by assuming that the SHG signal originates from the anharmonic motion of the charges along the bonds. Surprisingly, if the nonlinear field is far smaller than the linear field, the calculation turns out to be much simpler because it does not lead to an integral equation requiring a self-consistency check [11]. In this case it is possible to differentiate between the incoming driving fundamental field ($\omega$) and the outgoing SHG field ($2\omega$). As a result, the steps to obtain the far field contribution from the anharmonic bond radiation can be performed independently. Moreover, experimental data, e.g. Luepke's Si(111) experiment, can be very well fitted using the SBHM with only a handful of parameters, rather than the cumbersome fitting of various Fresnel parameters using the phenomenological theory. 2 Basic Theory An excellent discussion of how to calculate the far field intensity for centrosymmetric materials such as Si(111) and Si(001), by assuming charge motion along their bonds, already exists for vicinal Si systems [6,12], and the interested reader is therefore referred to these papers for further information. 
However, for the sake of clarity, we will – although only briefly – show how SHG can be understood in the framework of the SBHM. In classical electrodynamics one can obtain the far field expression as a result of dipole (or oscillating charge) radiation. The vector potential $\vec{A}(\vec{r},t)$, which satisfies the Lorenz gauge [13]: $$\nabla\cdot\vec{A}=-\frac{1}{c}\frac{\partial\phi}{\partial t}$$ (1) can be expressed in the retarded form: $$\vec{A}(\vec{r},t)=\frac{\mu_{0}}{4\pi}\int d^{3}\vec{r}^{\prime}\,\frac{\vec{j}\left(\vec{r}^{\prime},t-\frac{\left|\vec{r}-\vec{r}^{\prime}\right|}{c}\right)}{\left|\vec{r}-\vec{r}^{\prime}\right|}$$ (2) Assuming that the localized current $\vec{j}(\vec{r})$ and charge density $\rho(\vec{r})$ as well as the vector potential $\vec{A}(\vec{r},t)$ can be written in harmonic form, we can write Eq. (2) in the far field approximation as: $$\vec{A}(\vec{r},t)=\frac{\mu_{0}}{4\pi}\frac{e^{ikr}}{r}\int d^{3}\vec{r}^{\prime}\,\vec{j}(\vec{r}^{\prime})e^{-i\omega t}=-\frac{\mu_{0}}{4\pi}\frac{e^{ikr}}{r}\int d^{3}\vec{r}^{\prime}\,\vec{r}^{\prime}\left(\nabla^{\prime}\cdot\vec{j}(\vec{r}^{\prime})\right)e^{-i\omega t}$$ (3) Using the continuity equation: $$\nabla\cdot\vec{j}(\vec{r}^{\prime})=i\omega\rho(\vec{r}^{\prime})$$ (4) we have for the spatially dependent part of the vector potential the expression: $$\vec{A}(\vec{r})=-\frac{i\omega\mu_{0}}{4\pi}\frac{e^{ikr}}{r}\vec{p}$$ (5) where $\vec{p}$ is the dipole moment: $$\vec{p}=\int d^{3}\vec{r}^{\prime}\,\vec{r}^{\prime}\rho(\vec{r}^{\prime})$$ (6) The expressions for the fields in terms of the vector potential are: $$\vec{E}(\vec{r})=\sqrt{\frac{\mu_{0}}{\epsilon_{0}}}\frac{i}{k}\nabla\times\vec{H}(\vec{r})$$ (7) $$\vec{H}(\vec{r})=\frac{1}{\mu_{0}}\nabla\times\vec{A}(\vec{r})$$ (8) The far field can thus be calculated by inserting Eq. (5) in Eq. (8) and then inserting Eq. (8) in Eq. (7). 
With the help of the identity $(\hat{n}\times\vec{p})\times\hat{n}=\vec{p}-\hat{n}(\hat{n}\cdot\vec{p})$ we have: $$\vec{E}(\vec{r})=\frac{ke^{ikr}}{4\pi\epsilon_{0}r}\left[(\overline{I}-\hat{n}\hat{n})\cdot\vec{p}\right]$$ (9) where $\overline{I}$ is the identity tensor. The next step is to find an expression for the classical microscopic dipole moment per unit volume (the polarization) $\vec{p}$. In this work we will use Eq. (9) to calculate the reflected intensity of the linear and second-harmonic-generation fields. 3 SBHM for Si(111) In the classical Maxwell approach all the microscopic details in the inhomogeneous nonlinear wave equation are included and hidden in the electric susceptibility. In the classical optics approach this susceptibility can be estimated by constructing a Lorentz oscillator model with damping to obtain the nonlinear susceptibility terms [14]. It is therefore not surprising – but ingenious – to use a similar approach microscopically, by starting from a force equation to find expressions for the polarization. Furthermore, it has been shown that even with the simplification that the nonlinear polarization source only occurs along the bond direction, the second-harmonic far field from the anharmonic charge radiation is in very good agreement with experimental results for vicinal Si(111). Here, we briefly follow the approach given in Ref. [7]. For a single Si(111) atom there are four bonds, which we denote by $\vec{b}_{1},\vec{b}_{2},\vec{b}_{3}$ and $\vec{b}_{4}$. 
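A quick numerical sketch (not from the paper) makes the geometry of Eq. (9) explicit: the projector $(\overline{I}-\hat{n}\hat{n})$ removes the radial component of $\vec{p}$, so the far field is always transverse to the observation direction $\hat{n}$.

```python
import numpy as np

# Transversality of the dipole far field, Eq. (9): E has no component along n_hat.
rng = np.random.default_rng(1)
n_hat = rng.normal(size=3)
n_hat /= np.linalg.norm(n_hat)     # unit observation direction
p = rng.normal(size=3)             # arbitrary dipole moment

# bracket of Eq. (9); the scalar prefactor k e^{ikr}/(4 pi eps0 r) is dropped
E = (np.eye(3) - np.outer(n_hat, n_hat)) @ p

print(abs(E @ n_hat) < 1e-12)  # True
```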
In the bulk, the orientation of the bonds can be expressed in Cartesian coordinates as: $$b_{1}=\left(\begin{array}[]{c}0\\ 0\\ 1\end{array}\right)\;\quad b_{2}=\left(\begin{array}[]{c}\cos\left(\beta-\frac{\pi}{2}\right)\\ 0\\ -\sin\left(\beta-\frac{\pi}{2}\right)\end{array}\right)$$ $$b_{3}=\left(\begin{array}[]{c}-\cos\left(\beta-\frac{\pi}{2}\right)\sin\left(\frac{\pi}{6}\right)\\ \cos\left(\beta-\frac{\pi}{2}\right)\cos\left(\frac{\pi}{6}\right)\\ -\sin\left(\beta-\frac{\pi}{2}\right)\end{array}\right)\;b_{4}=\left(\begin{array}[]{c}-\cos\left(\beta-\frac{\pi}{2}\right)\sin\left(\frac{\pi}{6}\right)\\ -\cos\left(\beta-\frac{\pi}{2}\right)\cos\left(\frac{\pi}{6}\right)\\ -\sin\left(\beta-\frac{\pi}{2}\right)\end{array}\right)$$ (10) where $\beta=109.47^{\circ}$ is the angle between two bonds. Fig. 1 depicts a graphical image of the bond directions and the definition of the coordinates; the four Si(111) bonds are drawn as red lines. Although this orientation is valid for the bulk, the even nonlinear harmonic dipole contribution there is generally considered small because it is forbidden by inversion symmetry (a parity argument). However, we believe that decay of the incoming field due to absorption may break this symmetry and produce dipolar effects even inside the bulk (spatial dispersion). At the interface a dipole contribution is allowed because of symmetry breaking, but the orientation of the bonds may not be arranged as homogeneously as in the bulk. Nonetheless, we follow the assumption in Ref. [7] that, based on statistical averaging, the majority of the bond vectors at the interface can be treated in the same way as in the bulk, so the bond model of the bulk also holds for the interface. 
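The bond geometry of Eq. (10) can be checked numerically (an illustrative sketch, not part of the paper): all four bond vectors are unit vectors and every pair subtends the tetrahedral angle $\beta$, i.e. all pairwise dot products equal $\cos\beta\approx-1/3$.

```python
import numpy as np

# Consistency check of the Si(111) bond vectors in Eq. (10).
beta = np.deg2rad(109.47)
cb, sb = np.cos(beta - np.pi / 2), np.sin(beta - np.pi / 2)

b = np.array([
    [0.0, 0.0, 1.0],                                        # b1, the "up" bond
    [cb, 0.0, -sb],                                         # b2
    [-cb * np.sin(np.pi / 6),  cb * np.cos(np.pi / 6), -sb],  # b3
    [-cb * np.sin(np.pi / 6), -cb * np.cos(np.pi / 6), -sb],  # b4
])

print(np.allclose(np.linalg.norm(b, axis=1), 1.0))    # True: unit vectors
dots = [b[i] @ b[j] for i in range(4) for j in range(i + 1, 4)]
print(np.allclose(dots, np.cos(beta), atol=1e-4))     # True: cos(beta) ~ -1/3
```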
To model the reflected intensity versus rotation along the azimuthal direction (rotation in the $x$-$y$ plane) we let the bonds rotate via: $$b_{j}(\phi)=b_{j}\cdot R_{\phi}=b_{j}\cdot\left(\begin{array}[]{ccc}\cos\phi&-\sin\phi&0\\ \sin\phi&\cos\phi&0\\ 0&0&1\end{array}\right)$$ (11) where the subscript $j$ runs from 1 to 4. Having defined the bond vectors, we now analyze the motion of the charges. The force equation which describes this anharmonic motion along the bonds can be written in the form [7], taking the equilibrium position as zero: $$F=q_{j}\left(\vec{E}\cdot\hat{b}_{j}\right)e^{-i\omega t}-\kappa_{1}x-\kappa_{2}x^{2}-b\dot{x}=m\ddot{x}$$ (12) where $q,m,x$ are the electron charge, mass, and displacement from equilibrium, respectively, $\kappa_{1}$ and $\kappa_{2}$ are the harmonic and anharmonic spring constants, and the term $-b\dot{x}$ is the common frictional loss in an oscillation. Solving for $\triangle x_{1}$ and $\triangle x_{2}$ under the assumption that $x$ can be written as $x=x_{0}+\triangle x_{1}e^{-i\omega t}+\triangle x_{2}e^{-i2\omega t}$ gives to lowest order of approximation: $$\triangle x_{1}=\frac{\vec{E}\cdot\hat{b}_{j}}{\kappa_{1}-m\omega^{2}-ib\omega}$$ $$\triangle x_{2}=\frac{\kappa_{2}}{\kappa_{1}-4m\omega^{2}-ib2\omega}\left(\frac{\vec{E}\cdot\hat{b}_{j}}{\kappa_{1}-m\omega^{2}-ib\omega}\right)^{2}$$ (13) therefore the linear polarization produced by each bond $b_{j}$ is $$p_{1j}=q_{j}\triangle x_{1}=\alpha_{1j}\left(\hat{b}_{j}\cdot\vec{E}\right)$$ (14) whereas for the SHG we have the nonlinear polarization: $$p_{2j}=q_{j}\triangle x_{2}=\alpha_{2j}\left(\hat{b}_{j}\cdot\vec{E}\right)^{2}$$ (15) where $\alpha_{1}$ and $\alpha_{2}$ are the microscopic polarizability and second order hyperpolarizability given by: $$\alpha_{1j}=\frac{1}{\kappa_{1}-m\omega^{2}-ib\omega}$$ $$\alpha_{2j}=\frac{\kappa_{2}}{\kappa_{1}-4m\omega^{2}-ib2\omega}\left(\frac{1}{\kappa_{1}-m\omega^{2}-ib\omega}\right)^{2}$$ (16) In Si(111) 
surface, $\alpha_{j=1}$ is denoted as the "up" polarizability/hyperpolarizability $\alpha_{u}$, whereas the three remaining ones have the same value due to symmetry and are called the "down" polarizability/hyperpolarizability $\alpha_{d}$. The total linear and second harmonic polarizations produced by all four bonds in Si(111), taking the azimuthal rotation into account, are thus: $$\vec{P}_{1}=\frac{1}{V}\left(p_{11}+p_{12}+p_{13}+p_{14}\right)=\frac{1}{V}{\sum\limits_{j=1}^{4}}\alpha_{1j}\hat{b}_{j}(\phi)\left(\hat{b}_{j}(\phi)\cdot\vec{E}\right)$$ (17) and $$\vec{P}_{2}=\frac{1}{V}\left(p_{21}+p_{22}+p_{23}+p_{24}\right)=\frac{1}{V}{\sum\limits_{j=1}^{4}}\alpha_{2j}\hat{b}_{j}(\phi)\left(\hat{b}_{j}(\phi)\cdot\vec{E}\right)^{2}$$ (18) In the following sections we will show how the SBHM can reproduce experimental results of second-harmonic-generation in Si(111). 4 SHG Experiment of Si(111) The setup for the azimuthal SHG measurements is shown in Fig. 2. As a radiation source, a compact femtosecond fiber laser system was used, working at a wavelength of 1560 nm and producing pulses with a duration of 86 fs at a repetition rate of 80 MHz. After a variable attenuator for adjusting the power, a half-wave plate (HWP) determines the state of polarisation of the incident beam, which is then filtered by a band-pass filter (F) transmitting only the fundamental wavelength. The incident radiation is focused onto the sample by the objective lens L1 with a minimal focal waist down to 10 $\mu$m. The incidence angle was set to $45^{\circ}$. The sample itself is mounted on a motorized rotation stage. SH and fundamental radiation from the sample are collected by the lens L2 and directed to a rotatable Glan-Taylor calcite polarizer that selects the desired polarisation direction. A Pellin Broca prism and a slit are used for spatially separating the SH radiation, which is focused onto a cooled photomultiplier tube (Hamamatsu R911P). 
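The bond-summed linear polarization of Eq. (17) can be evaluated numerically (an illustrative sketch with all polarizabilities set to unity, an assumption not restricting the symmetry argument): because the three "down" bonds form a 3-fold symmetric set, the linear response is independent of the azimuthal angle $\phi$.

```python
import numpy as np

# Eq. (17) with alpha_1j = 1: the linear polarization does not depend on phi.
beta = np.deg2rad(109.47)
cb, sb = np.cos(beta - np.pi / 2), np.sin(beta - np.pi / 2)
b = np.array([[0, 0, 1.0],
              [cb, 0, -sb],
              [-cb / 2,  cb * np.sqrt(3) / 2, -sb],
              [-cb / 2, -cb * np.sqrt(3) / 2, -sb]])  # sin(pi/6)=1/2, cos(pi/6)=sqrt(3)/2

def P1(phi, E):
    """Bond sum of Eq. (17), volume factor and polarizabilities set to 1."""
    R = np.array([[np.cos(phi), -np.sin(phi), 0],
                  [np.sin(phi),  np.cos(phi), 0],
                  [0, 0, 1]])
    br = b @ R                                # rotated bonds b_j(phi), Eq. (11)
    return sum(bj * (bj @ E) for bj in br)

E = np.array([0.3, -0.7, 0.5])                # arbitrary driving field
print(np.allclose(P1(0.0, E), P1(0.7, E)))    # True: no phi dependence
```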
For the acquisition of the SHG signal, photon counting was used, with a dark count rate of only 30 photons per minute. As a sample, a Si(111) wafer with native oxide was used. In order to obtain a detectable SHG signal and not damage the surface of the sample, the beam was slightly defocused, with the fiber laser operating at full power (350 mW). 5 Si(111) SHG experimental results and SBHM simulations Second harmonic generation from a Si(111) face was obtained experimentally by plotting the intensity for various azimuthal angles (the sample rotated by $360^{\circ}$ in the $x$-$y$ plane). All four intensity profiles ($pp,ps,sp,ss$) were acquired, with the results depicted in Fig. 3(b). To obtain the total field using the SBHM we insert the expressions for the total polarization, Eq. (17) and Eq. (18), into the far field formula Eq. (9), with $\hat{n}=\hat{k}$, where $\hat{k}$ is the outgoing wave vector (the unit vector in the direction of the observer), given by $\hat{k}=-\cos\theta_{o}\hat{x}+\sin\theta_{o}\hat{z}$. The linear and second harmonic far fields can now be written as: $$\vec{E}_{\omega g}(r)=\frac{ke^{ikr}}{4\pi\epsilon_{0}rV}\left[(\overline{I}-\hat{k}\hat{k})\cdot{\sum\limits_{j=1}^{4}}\alpha_{1j}\hat{b}_{j}(\phi)\left(\hat{b}_{j}(\phi)\cdot\vec{E}_{g}\right)\right]$$ (19) $$\vec{E}_{2\omega g}(r)=\frac{ke^{ikr}}{4\pi\epsilon_{0}rV}\left[(\overline{I}-\hat{k}\hat{k})\cdot{\sum\limits_{j=1}^{4}}\alpha_{2j}\hat{b}_{j}(\phi)\left(\hat{b}_{j}(\phi)\cdot\vec{E}_{g}\right)^{2}\right]$$ (20) The term outside the brackets includes only constants, and because SHG experiments are usually performed in arbitrary units the important terms are those inside the brackets. Here $g$ is the symbol for the polarization of the incoming wave and can be either $p$ or $s$. We first consider Eq. (17) and evaluate the intensity for the four polarization directions: $pp,ps,sp$, and $ss$. The first letter in $pp$ denotes the incoming and the second the outgoing wave. 
The far field from all four bonds in Si(111) for each polarization combination can be obtained using Eq. (10). For the linear case we have: $$\displaystyle\vec{E}_{\omega,pp}=\frac{1}{2}\left(\cos\theta_{o}\,\hat{x}+\sin\theta_{o}\,\hat{z}\right)\left[-3\alpha_{d}\cos\theta_{i}\cos\theta_{o}\sin^{2}\beta+3\alpha_{d}\cos\left(2\beta\right)\sin\theta_{i}\sin\theta_{o}\right]$$ (21) $$\displaystyle\vec{E}_{\omega,ps}=0$$ $$\displaystyle\vec{E}_{\omega,sp}=0$$ $$\displaystyle\vec{E}_{\omega,ss}=\frac{3}{2}\alpha_{d}\sin^{2}\beta\,\hat{y}$$ An interesting result of this calculation is that the total linear field for every polarization combination is independent of the azimuthal rotation $\phi$, even though the field produced by each individual bond may depend on $\phi$. This can be seen in Fig. 3(a), where we take the $pp$-case as an example. Similar results are obtained for the $ss$-polarization. Therefore, the orientation of the bonds under rotation in the $x$-$y$ plane does not affect the reflected linear intensity. More interesting results are obtained if we consider the second harmonic case. The total nonlinear polarization for all polarization modes is – in contrast to the linear case – a function of the azimuthal rotation $\phi$ of the crystal. Fig. 3(b) shows our experimental result. The $pp$, $ps$, and $ss$-polarizations (red line) show a 6-fold dependence, with peaks at every $60^{\circ}$ interval. The $ps$ and $ss$-polarizations have the same azimuthal behaviour, with a peak shift of $30^{\circ}$ for the $pp$-polarization. The $ss$-polarization notably has a high (DC) offset value and broad peak full-width-half-maximum (FWHM), but peaks are still detected. Also, the heights of the $pp$ and $sp$-polarization peaks are quite similar, with the $sp$-polarization having a 3-fold symmetry. 
Remarkably, this seemingly complicated experimental result can be well explained using the SBHM, more explicitly by applying Eq. (20), with the result depicted in Fig. 3(c). The formulas for the 4-polarization SHG far fields from a Si(111) structure can be found in Ref. [7]: $$\displaystyle\vec{E}_{2\omega,sp}=\frac{3}{4}\alpha_{d}\sin^{2}\beta\left[\cos\theta_{o}\cos 3\phi\sin\beta-2\cos\beta\sin\theta_{o}\right]\left[\hat{x}\cos\theta_{o}+\hat{z}\sin\theta_{o}\right]$$ (22) $$\displaystyle\vec{E}_{2\omega,ps}=\frac{3}{4}\alpha_{d}\sin^{3}\beta\cos^{2}\theta_{i}\sin 3\phi\,\hat{y}$$ $$\displaystyle\vec{E}_{2\omega,ss}=-\frac{3}{4}\alpha_{d}\sin^{3}\beta\sin 3\phi\,\hat{y}$$ with a slightly different result obtained for the $pp$-polarization: $$\displaystyle\vec{E}_{2\omega,pp}=\left(\alpha_{u}+3\alpha_{d}\cos^{3}\beta\right)\sin^{2}\theta_{i}\sin\theta_{o}-3\alpha_{d}\cos\beta\cos\theta_{i}\cos\theta_{o}\sin^{2}\beta\sin\theta_{i}$$ $$\displaystyle+\frac{3}{4}\alpha_{d}\cos^{2}\theta_{i}\sin^{2}\beta\left(\cos\theta_{o}\cos 3\phi\sin\beta+2\cos\beta\sin\theta_{o}\right)$$ (23) By allowing arbitrary DC offsets, which might be related to experimental noise, the model shows good agreement with the experiment, as is evident from the reproducible symmetry pattern with the correct azimuthal positions. In the simulation we have – for the sake of simplicity – set the values of the hyperpolarizabilities equal to unity and used for the SiO-Si interface an incoming and outgoing angle of $29.5^{\circ}$ (via Snell's law). The 3-fold $sp$ symmetry (green line) perfectly matches the experimental result, as does the $ps$-polarization (blue line), with the correct relative peak difference between them ($30^{\circ}$). The intensity profiles can easily be explained as an effect of anharmonic charge radiation along the bond directions. One can check this by calculating the field produced by each individual bond contribution. 
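The $3\phi$ azimuthal dependence appearing in Eqs. (22)-(23) can be cross-checked directly from the bond sum of Eq. (18) (an illustrative sketch with hyperpolarizabilities set to unity, as in the simulations above): rotating the crystal by $120^{\circ}$ merely permutes the three down bonds, so the nonlinear polarization repeats with that period.

```python
import numpy as np

# Eq. (18) with alpha_2j = 1: the SHG bond sum is 120-degree periodic in phi.
beta = np.deg2rad(109.47)
cb, sb = np.cos(beta - np.pi / 2), np.sin(beta - np.pi / 2)
b = np.array([[0, 0, 1.0],
              [cb, 0, -sb],
              [-cb / 2,  cb * np.sqrt(3) / 2, -sb],
              [-cb / 2, -cb * np.sqrt(3) / 2, -sb]])

def P2(phi, E):
    """Bond sum of Eq. (18), volume factor and hyperpolarizabilities set to 1."""
    R = np.array([[np.cos(phi), -np.sin(phi), 0],
                  [np.sin(phi),  np.cos(phi), 0],
                  [0, 0, 1]])
    br = b @ R                                   # rotated bonds b_j(phi)
    return sum(bj * (bj @ E) ** 2 for bj in br)

E = np.array([0.0, 1.0, 0.0])                    # s-polarized driving field
phis = np.linspace(0, 2 * np.pi, 7)
ok = all(np.allclose(P2(p, E), P2(p + 2 * np.pi / 3, E)) for p in phis)
print(ok)  # True
```

The corresponding intensity $|\vec{E}_{2\omega}|^{2}\propto\sin^{2}3\phi$ then repeats every $60^{\circ}$, consistent with the 6-fold peaks observed in the $ps$ and $ss$ channels.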
For example, the $6$-fold symmetry of the $ss$-polarization can be explained by each down bond contributing twice to the $s$-fundamental driving field when rotated by $360^{\circ}$. However, it has to be stated that for the $pp$-polarization the SBHM gives a 3-fold symmetry, in contradiction with the 6-fold symmetry found in the experiment. We attribute this difference to a possible bulk dipole contribution, because inside the bulk the incident angle is closer to the normal, and there the SBHM also predicts a sixfold symmetry. This bulk dipole is still controversial and requires symmetry breaking in the form of a decaying field (absorption). Nevertheless, our simulation has shown that SHG measurements combined with the SBHM can be used to investigate bond orientation at the surface of centrosymmetric materials in a simpler fashion, without resorting to a complicated susceptibility tensor analysis (e.g. group theory). 6 Summary Using the simplified bond hyperpolarizability model we have shown that the reflected linear intensity for all 4 polarization modes from a Si(111) structure is independent of the azimuthal rotation in the $x$-$y$ plane. For the second harmonic intensity, the SBHM gives good agreement with experiment, successfully predicting the correct azimuthal profiles and symmetries of the surfaces, except for the $pp$-polarization, which we believe requires the SBHM to be extended to cope with bulk effects. This shows that the SBHM can be used as a simple model for predicting the bond orientation of a Si(111) wafer at the surface/interface and reconfirms second harmonic generation measurements as a sensitive non-destructive method to investigate surface structures in materials with inversion symmetry. Acknowledgements: The authors would like to thank the Austrian Federal Ministry of Economy, Family and Youth and the Austrian National Foundation for Research, Technology and Development for financial support. References [1] N. Bloembergen, R. K. Chang, S. S. Jha, and C. 
H. Lee, Phys. Rev. 174, 813 (1968). [2] J. F. McGilp, J. Phys. D: Appl. Phys. 29, 1812 (1996). [3] P. Guyot-Sionnest and Y. R. Shen, Phys. Rev. B 38, 7985 (1988). [4] B. Gokce, D. B. Dougherty, and K. Gundogdu, J. Vac. Sci. Technol. A 30 (2012). [5] V. Mizrahi and J. E. Sipe, J. Opt. Soc. Am. B 5, 660 (1988). [6] G. Luepke, D. J. Bottomley, and H. M. van Driel, J. Opt. Soc. Am. B 11, 33 (1994). [7] G. D. Powell, J.-F. Wang, and D. E. Aspnes, Phys. Rev. B 65, 205320 (2002). [8] P. P. Ewald, Ann. Phys. 49, 1 (1916). [9] C. W. Oseen, Ann. Phys. 48, 1 (1915). [10] H. Fearn, D. F. V. James, and P. W. Milonni, Am. J. Phys. 64, 986 (1996). [11] D. Aspnes, Phys. Status Solidi B 247, 1873 (2010). [12] J. Kwon, M. C. Downer, and B. S. Mendoza, Phys. Rev. B 73, 195330 (2006). [13] J. D. Jackson, Classical Electrodynamics, 3rd ed., p. 411 (1998). [14] R. W. Boyd, Nonlinear Optics (Academic Press, 2003), ISBN 0-12-121682-9.
Asymptotic Optimal Portfolio in Fast Mean-reverting Stochastic Environments Ruimeng Hu Ruimeng Hu is with the Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106-3110, USA [email protected]. The author was partially supported by the National Science Foundation under DMS-1409434. Abstract This paper studies the portfolio optimization problem when the investor's utility is general and the return and volatility of the risky asset are fast mean-reverting, which is important for capturing the fast-time scale in the modeling of stock price volatility. Motivated by the heuristic derivation in [J.-P. Fouque, R. Sircar and T. Zariphopoulou, Mathematical Finance, 2016], we propose a zeroth order strategy and show its asymptotic optimality within a specific (smaller) family of admissible strategies under proper assumptions. This optimality result is achieved by establishing a first order approximation of the problem value associated with this proposed strategy using the singular perturbation method, and by estimating the risk-tolerance functions. The results are natural extensions of our previous work on portfolio optimization in a slowly varying stochastic environment [J.-P. Fouque and R. Hu, SIAM Journal on Control and Optimization, 2017], and together they form a whole picture of portfolio optimization in both fast and slow environments. Stochastic optimal control, asset allocation, stochastic volatility, singular perturbation, asymptotic optimality. I Introduction The portfolio optimization problem in continuous time, also known as the Merton problem, was first studied in [16, 17]. In his original work, explicit solutions on how to allocate money between risky and risk-less assets and/or how to consume wealth are provided so that the investor's expected utility is maximized, when the risky asset follows the Black–Scholes (BS) model and the utility is of Constant Relative Risk Aversion (CRRA) type. 
Since these seminal works, a great deal of research has been done to relax the original model assumptions, for example, to allow transaction costs [15], [10], drawdown constraints [9], [4], [5], price impact [3], and stochastic volatility [18], [2], [8] and [14]. Our work extends Merton’s model by allowing a more general utility, and by modeling the return and volatility of the risky asset $S_{t}$ by a fast mean-reverting process $Y_{t}$: $$\displaystyle\,\mathrm{d}S_{t}=\mu(Y_{t})S_{t}\,\mathrm{d}t+\sigma(Y_{t})S_{t}\,\mathrm{d}W_{t},$$ (1) $$\displaystyle\,\mathrm{d}Y_{t}=\frac{1}{\epsilon}b(Y_{t})\,\mathrm{d}t+\frac{1}{\sqrt{\epsilon}}a(Y_{t})\,\mathrm{d}W_{t}^{Y}.$$ (2) The two standard Brownian motions (BMs) are imperfectly correlated: $\,\mathrm{d}\left\langle W,W^{Y}\right\rangle=\rho\,\mathrm{d}t,\;\rho\in(-1,1)$. We are interested in the terminal utility maximization problem $$V^{\epsilon}(t,x,y)\equiv\sup_{\pi\in\mathcal{A}^{\epsilon}}\mathbb{E}[U(X_{T}^{\pi})|X_{t}^{\pi}=x,Y_{t}=y],$$ (3) where $X_{t}^{\pi}$ is the wealth process associated to the self-financing strategy $\pi$: $$\,\mathrm{d}X_{t}^{\pi}=\pi(t,X_{t}^{\pi},Y_{t})\mu(Y_{t})\,\mathrm{d}t+\pi(t,X_{t}^{\pi},Y_{t})\sigma(Y_{t})\,\mathrm{d}W_{t},$$ (4) (we assume the risk-free interest rate vanishes, $r=0$) and $\mathcal{A}^{\epsilon}$ is the set of strategies such that $X_{t}^{\pi}$ stays nonnegative. Using the singular perturbation technique, our work provides an asymptotically optimal strategy ${\pi^{(0)}}$ within a specific class of admissible strategies $\mathcal{A}_{0}^{\epsilon}$ that satisfies certain assumptions: $$\mathcal{A}_{0}^{\epsilon}\left[\widetilde{\pi}^{0},\widetilde{\pi}^{1},\alpha\right]=\left\{\widetilde{\pi}^{0}+\epsilon^{\alpha}\widetilde{\pi}^{1}\right\}_{0\leq\epsilon\leq 1}.$$ (5) I-A Motivation and Related Literature The reason to study the proposed problem is threefold.
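For intuition, the model (1)–(2) is easy to simulate with an Euler–Maruyama scheme. The sketch below is a minimal illustration; the Ornstein–Uhlenbeck fast factor $b(y)=m-y$, $a(y)=\nu\sqrt{2}$, the particular choices of $\mu(y)$, $\sigma(y)$, and all parameter values are hypothetical, not taken from this paper.

```python
import numpy as np

# Euler-Maruyama sketch of (1)-(2) with correlated Brownian increments.
# Hypothetical choices: OU fast factor b(y) = m - y, a(y) = nu*sqrt(2),
# and illustrative coefficient functions mu(y), sigma(y).
rng = np.random.default_rng(0)
eps, T, n = 0.01, 1.0, 10_000
dt = T / n
m, nu, rho = 0.0, 0.5, -0.3

mu = lambda y: 0.05 + 0.02 * np.tanh(y)   # return coefficient mu(y)
sigma = lambda y: 0.2 * np.exp(0.1 * y)   # volatility coefficient sigma(y)

S, Y = 1.0, m
for _ in range(n):
    dW = rng.normal(0.0, np.sqrt(dt))
    dZ = rng.normal(0.0, np.sqrt(dt))
    dWY = rho * dW + np.sqrt(1.0 - rho**2) * dZ      # d<W, W^Y>_t = rho dt
    S += mu(Y) * S * dt + sigma(Y) * S * dW          # Euler step of (1)
    Y += (m - Y) / eps * dt + nu * np.sqrt(2.0 / eps) * dWY  # Euler step of (2)
print(S, Y)
```

Note that the mean-reversion time of $Y$ is of order $\epsilon$: over $[0,T]$ the factor decorrelates many times, which is precisely the regime the singular perturbation analysis exploits.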
Firstly, in the direction of asset modeling (1)-(2), the well-known implied volatility smile/smirk phenomenon leads us to employ a BS-like stochastic volatility model. Empirical studies have identified scales in stock price volatility: both a fast-time scale on the order of days and a slow scale on the order of months [7]. This results in putting a parameter $\epsilon$ in (2). The slow-scale case (corresponding to large $\epsilon$ in (2)), which is particularly important in long-term investments, has been studied in our previous work [6]. An asymptotically optimal strategy is proposed therein using regular perturbation techniques. This makes it natural to extend the study to the fast-varying regime, where one needs to use singular perturbation techniques. Secondly, in the direction of utility modeling, not every investor’s utility is of CRRA type [1]; it is therefore important to work under more general utility functions. Thirdly, although it is natural to consider multiscale factor models for risky assets, with a slow factor and a fast factor as in [8], more involved technical calculations and proofs are required to combine them, and thus we leave this to another paper in preparation [11]. Our proposed strategy ${\pi^{(0)}}$ is motivated by the heuristic derivation in [8], where a singular perturbation is performed on the PDE satisfied by $V^{\epsilon}$. This gives a formal approximation $V^{\epsilon}=v^{(0)}+\sqrt{\epsilon}{v^{(1)}}+\epsilon v^{(2)}+\cdots$. It is then conjectured that the zeroth order strategy $${\pi^{(0)}}(t,x,y)=-\frac{\lambda(y)}{\sigma(y)}\frac{v^{(0)}_{x}(t,x,y)}{v^{(0)}_{xx}(t,x,y)},$$ (6) reproduces the optimal value up to the first order $v^{(0)}+\sqrt{\epsilon}v^{(1)}$; see Section II-B for the formulation of $v^{(0)}$ and $v^{(1)}$. I-B Main Theorem and Organization of the Paper Let $V^{{\pi^{(0)}},\epsilon}$ (resp. $\widetilde{V}^{\epsilon}$) be the expected utility of terminal wealth associated to ${\pi^{(0)}}$ (resp.
$\pi\in\mathcal{A}_{0}^{\epsilon}$) $$V^{{\pi^{(0)}},\epsilon}:=\mathbb{E}[U(X_{T}^{\pi^{(0)}})|X_{t}^{\pi^{(0)}}=x,Y_{t}=y],$$ and $X_{t}^{\pi^{(0)}}$ be the wealth process given by (4) with $\pi={\pi^{(0)}}$ (resp. $\pi$ in $\mathcal{A}_{0}^{\epsilon}$). By comparing $V^{{\pi^{(0)}},\epsilon}$ and $\widetilde{V}^{\epsilon}$, we claim that ${\pi^{(0)}}$ performs asymptotically better, up to order $\sqrt{\epsilon}$, than the family $\left\{\widetilde{\pi}^{0}+\epsilon^{\alpha}\widetilde{\pi}^{1}\right\}$. Mathematically, this is formulated as: Theorem I.1 Under the assumptions of this paper, for any family of trading strategies $\mathcal{A}_{0}^{\epsilon}\left[\widetilde{\pi}^{0},\widetilde{\pi}^{1},\alpha\right]$, the following limit exists and satisfies $$\ell:=\lim_{\epsilon\to 0}\frac{\widetilde{V}^{\epsilon}(t,x,y)-V^{{\pi^{(0)}},\epsilon}(t,x,y)}{\sqrt{\epsilon}}\leq 0.$$ (7) The proof, as well as the interpretation of this inequality for different $\alpha$’s, will be given in Section IV. The rest of the paper is organized as follows. Section II introduces some preliminaries of the Merton problem and lists the assumptions needed in this paper. Section III rigorously establishes the first order approximation $v^{(0)}+\sqrt{\epsilon}{v^{(1)}}$ of $V^{{\pi^{(0)}},\epsilon}$. Section IV is dedicated to the proof of Theorem I.1, where the expansion of $\widetilde{V}^{\epsilon}$ is analyzed first. The precise derivations are provided, but for the detailed technical assumptions we refer to our recent work [6]. II Preliminaries and Assumptions In this section, we first review the classical Merton problem and the notion of the risk-tolerance function $R(t,x;\lambda)$. Then the heuristic expansion results for $V^{\epsilon}$ in [8] are summarized. The standing assumptions of this paper are listed, as well as some estimates regarding $R(t,x;\lambda)$ and $v^{(0)}$. We refer to our recent work [6, Section 2, 3] for proofs of all these results.
II-A Merton problem with constant coefficients We shall first consider the case of constant $\mu$ and $\sigma$ in (1). This is the classical Merton problem, which plays a crucial role in interpreting the leading order term $v^{(0)}$ and analyzing the singular perturbation. This problem has been studied extensively; see, for instance, [13]. Let $X_{t}$ be the wealth process in this case. Using the notation in [8], we denote by $M(t,x;\lambda)$ the problem value. In Merton’s original work, a closed-form $M(t,x;\lambda)$ was obtained when the utility $U(\cdot)$ is of power type. In general, one has the following results, with all proofs found in [6, Section 2.1] or the references therein. Proposition II.1 Assume that the utility function $U(x)$ is $C^{2}(0,\infty)$, strictly increasing, strictly concave, such that $U(0+)$ is finite, and satisfies the Inada and Asymptotic Elasticity conditions: $U^{\prime}(0+)=\infty$, $U^{\prime}(\infty)=0$, $\text{AE}[U]:=\lim_{x\rightarrow\infty}x\frac{U^{\prime}(x)}{U(x)}<1$. Then, the Merton value function $M(t,x;\lambda)$ is strictly increasing, strictly concave in the wealth variable $x$, and decreasing in the time variable $t$. It is $C^{1,2}([0,T]\times\mathbb{R}^{+})$ and is the unique solution to the HJB equation $$\displaystyle M_{t}+\sup_{\pi}\left\{\frac{\sigma^{2}}{2}\pi^{2}M_{xx}+\mu\pi M_{x}\right\}=0,$$ (8) $$\displaystyle M(T,x;\lambda)=U(x),$$ where $\lambda=\mu/\sigma$ is the constant Sharpe ratio.
It is $C^{1}$ with respect to $\lambda$, and the optimal strategy is given by $$\pi^{\star}(t,x;\lambda)=-\frac{\lambda}{\sigma}\frac{M_{x}(t,x;\lambda)}{M_{xx}(t,x;\lambda)}.$$ We next define the risk-tolerance function $$R(t,x;\lambda)=-M_{x}(t,x;\lambda)/M_{xx}(t,x;\lambda),$$ and two operators, following the notation in [8], $$\displaystyle D_{k}$$ $$\displaystyle=R(t,x;\lambda)^{k}\partial_{x}^{k},\qquad k=1,2,\cdots,$$ (9) $$\displaystyle\mathcal{L}_{t,x}(\lambda)$$ $$\displaystyle=\partial_{t}+\frac{1}{2}\lambda^{2}D_{2}+\lambda^{2}D_{1}.$$ (10) By the regularity and concavity of $M(t,x;\lambda)$, $R(t,x;\lambda)$ is continuous and strictly positive. Further properties are presented in Section II-D. Using the relation $D_{1}M=-D_{2}M$, the nonlinear Merton PDE (8) can be re-written in a “linear” way: $\mathcal{L}_{t,x}(\lambda)M(t,x;\lambda)=0$. We now mention a uniqueness result for this PDE, which will be used repeatedly in Section III. Proposition II.2 Let $\mathcal{L}_{t,x}(\lambda)$ be the operator defined in (10), and assume that the utility function $U(x)$ satisfies the conditions in Proposition II.1. Then $$\mathcal{L}_{t,x}(\lambda)u(t,x;\lambda)=0,\quad u(T,x;\lambda)=U(x),$$ (11) has a unique nonnegative solution. II-B Existing Results in [8] In this subsection, we review the formal expansion results for $V^{\epsilon}$ derived in [8]. To apply the singular perturbation technique, we assume that the process $Y^{(1)}_{t}\stackrel{{\scriptstyle\mathcal{D}}}{{=}}Y_{t\epsilon}$ is ergodic and equipped with a unique invariant distribution $\Phi$. We use the notation $\left\langle\cdot\right\rangle$ for averaging with respect to $\Phi$, namely, $\left\langle f\right\rangle=\int f\,\mathrm{d}\Phi$.
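For power utility $U(x)=x^{\gamma}/\gamma$ with $0<\gamma<1$, the Merton value is known in closed form, $M(t,x)=\frac{x^{\gamma}}{\gamma}\exp\left(\frac{\lambda^{2}\gamma(T-t)}{2(1-\gamma)}\right)$, and the risk tolerance is $R(t,x;\lambda)=x/(1-\gamma)$. The following sketch (hypothetical parameter values) checks numerically, by central finite differences, that this closed form satisfies the HJB equation after maximization over $\pi$:

```python
import numpy as np

# Power utility U(x) = x**g / g with 0 < g < 1: the Merton value is
#   M(t, x) = x**g / g * exp(lam**2 * g * (T - t) / (2 * (1 - g))),
# with risk tolerance R(t, x; lam) = x / (1 - g).  We check the maximized HJB
#   M_t - (lam**2 / 2) * M_x**2 / M_xx = 0
# by central finite differences; lam, g, T, t, x are hypothetical values.
lam, g, T = 0.4, 0.5, 1.0

def M(t, x):
    return x**g / g * np.exp(lam**2 * g * (T - t) / (2 * (1 - g)))

t, x, h = 0.3, 2.0, 1e-4
M_t = (M(t + h, x) - M(t - h, x)) / (2 * h)
M_x = (M(t, x + h) - M(t, x - h)) / (2 * h)
M_xx = (M(t, x + h) - 2 * M(t, x) + M(t, x - h)) / h**2
residual = M_t - 0.5 * lam**2 * M_x**2 / M_xx
print(residual)   # numerically ~ 0
```

The same numbers also confirm the “linear” form $\mathcal{L}_{t,x}(\lambda)M=0$, since $R^{2}M_{xx}=-RM_{x}$ by the definition of $R$.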
Let $\mathcal{L}_{0}$ be the infinitesimal generator of $Y^{(1)}$: $$\mathcal{L}_{0}=\frac{1}{2}a^{2}(y)\partial_{y}^{2}+b(y)\partial_{y}.$$ Then, by the dynamic programming principle, the value function $V^{\epsilon}$ formally solves the Hamilton-Jacobi-Bellman (HJB) equation: $$\displaystyle V^{\epsilon}_{t}+\frac{1}{\epsilon}\mathcal{L}_{0}V^{\epsilon}+\max_{\pi\in\mathcal{A}^{\epsilon}}\Bigl{(}\sigma(y)^{2}\pi^{2}V^{\epsilon}_{xx}/2\\ \displaystyle+\pi\left(\mu(y)V^{\epsilon}_{x}+\rho a(y)\sigma(y)V^{\epsilon}_{xy}/\sqrt{\epsilon}\right)\Bigr{)}=0.$$ In general, the regularity of the value function is not clear, and it solves this HJB equation only in the viscosity sense. In [8], a unique classical solution is assumed, so that heuristic derivations can be performed. However, in this paper, such an assumption is not needed, as we focus on the quantity $V^{{\pi^{(0)}},\epsilon}$ defined in (18). It corresponds to a linear PDE, for which classical solutions exist. Under the assumptions in [8], the optimizer of this nonlinear PDE is of feedback form: $$\pi^{\ast}=-\frac{\lambda(y)V^{\epsilon}_{x}}{\sigma(y)V^{\epsilon}_{xx}}-\frac{\rho a(y)V^{\epsilon}_{xy}}{\sqrt{\epsilon}\sigma(y)V^{\epsilon}_{xx}},$$ and the simplified HJB equation reads: $$V^{\epsilon}_{t}+\frac{1}{\epsilon}\mathcal{L}_{0}V^{\epsilon}-\left(\lambda(y)V^{\epsilon}_{x}+\frac{1}{\sqrt{\epsilon}}\rho a(y)V^{\epsilon}_{xy}\right)^{2}/(2V^{\epsilon}_{xx})=0,$$ for $(t,x,y)\in[0,T]\times\mathbb{R}^{+}\times\mathbb{R}$. The equation is fully nonlinear and is explicitly solvable only in some cases; see [2] for instance. The heuristic expansion results overcome this by providing approximations to $V^{\epsilon}$. This is done by the so-called singular perturbation method, as often seen in homogenization theory.
To be specific, one substitutes the expansion $V^{\epsilon}=v^{(0)}+\sqrt{\epsilon}v^{(1)}+\epsilon v^{(2)}+\cdots$ into the above equation, and establishes equations for the $v^{(k)}$ by collecting terms of different orders. In [8, Section 2], this is performed for $k=0,1$ and we list their results as follows: (i) The leading order term $v^{(0)}(t,x)$ is defined as the solution to the Merton PDE associated with the averaged Sharpe ratio $\overline{\lambda}=\sqrt{\left\langle\lambda^{2}\right\rangle}$: $$v^{(0)}_{t}-\frac{1}{2}\overline{\lambda}^{2}\frac{\left(v^{(0)}_{x}\right)^{2}}{v^{(0)}_{xx}}=0,\quad v^{(0)}(T,x)=U(x),$$ (12) and by the uniqueness discussed in Proposition II.1, $v^{(0)}$ is identified as: $$v^{(0)}(t,x)=M\bigl{(}t,x;\overline{\lambda}\bigr{)}.$$ (13) (ii) The first order correction $v^{(1)}$ is identified as the solution to the linear PDE: $$v^{(1)}_{t}+\frac{1}{2}\overline{\lambda}^{2}\left(\frac{v^{(0)}_{x}}{v^{(0)}_{xx}}\right)^{2}v^{(1)}_{xx}-\overline{\lambda}^{2}\frac{v^{(0)}_{x}}{v^{(0)}_{xx}}v^{(1)}_{x}=\frac{1}{2}\rho BD_{1}^{2}v^{(0)},$$ (14) with $v^{(1)}(T,x)=0$. The constant $B$ is $B=\left\langle\lambda a\theta^{\prime}\right\rangle$, and $\theta(y)$ solves $\mathcal{L}_{0}\theta(y)=\lambda^{2}(y)-\overline{\lambda}^{2}.$ Rewriting equation (14) in terms of the operators in (9)-(10), $v^{(1)}$ solves the following PDE, which admits a unique solution: $$\mathcal{L}_{t,x}(\overline{\lambda})v^{(1)}=\frac{1}{2}\rho BD_{1}^{2}v^{(0)},\quad v^{(1)}(T,x)=0.$$ (15) (iii) $v^{(1)}$ is explicitly given in terms of $v^{(0)}$ by $$v^{(1)}(t,x)=-\frac{1}{2}(T-t)\rho BD_{1}^{2}v^{(0)}(t,x).$$ (16) II-C Assumptions Basically, we work under the same set of assumptions on the utility $U(\cdot)$ and on the state processes $(S_{t},X_{t}^{\pi^{(0)}},Y_{t})$ as in [6]. We restate them here for the readers’ convenience.
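To make (15)–(16) concrete, here is a quick numerical check for power utility. For $U(x)=x^{\gamma}/\gamma$ one has $R(t,x;\overline{\lambda})=x/(1-\gamma)$, hence $D_{1}v^{(0)}=\frac{\gamma}{1-\gamma}v^{(0)}$ and $D_{1}^{2}v^{(0)}=\bigl(\frac{\gamma}{1-\gamma}\bigr)^{2}v^{(0)}$; all parameter values below are hypothetical.

```python
import numpy as np

# Check that the explicit formula (16),
#   v1(t, x) = -(1/2) * (T - t) * rho * B * D1^2 v0,
# solves the linear PDE (15),  L_{t,x}(lam) v1 = (1/2) * rho * B * D1^2 v0,
# for power utility, where D1 v0 = k * v0 with k = g / (1 - g).
lam, g, T, rho, B = 0.4, 0.5, 1.0, -0.3, 0.1
k = g / (1 - g)

def v0(t, x):          # v0 = M(t, x; lam) for U(x) = x**g / g
    return x**g / g * np.exp(lam**2 * g * (T - t) / (2 * (1 - g)))

def v1(t, x):          # formula (16), using D1^2 v0 = k**2 * v0
    return -0.5 * (T - t) * rho * B * k**2 * v0(t, x)

t, x, h = 0.3, 2.0, 1e-4
R = x / (1 - g)        # risk tolerance at the test point
v1_t = (v1(t + h, x) - v1(t - h, x)) / (2 * h)
v1_x = (v1(t, x + h) - v1(t, x - h)) / (2 * h)
v1_xx = (v1(t, x + h) - 2 * v1(t, x) + v1(t, x - h)) / h**2
lhs = v1_t + 0.5 * lam**2 * R**2 * v1_xx + lam**2 * R * v1_x  # L_{t,x}(lam) v1
rhs = 0.5 * rho * B * k**2 * v0(t, x)                         # (1/2) rho B D1^2 v0
print(lhs - rhs)   # numerically ~ 0
```

The agreement follows because $v^{(1)}$ is a multiple of $(T-t)v^{(0)}$ and $\mathcal{L}_{t,x}(\overline{\lambda})v^{(0)}=0$, so the $\partial_{t}$ acting on the factor $(T-t)$ produces exactly the source term in (15).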
For further discussion and remarks, we refer to [6, Section 2]. Assumption II.3 Throughout the paper, we make the following assumptions on the utility $U(x)$: (i) $U(x)$ is $C^{7}(0,\infty)$, strictly increasing, strictly concave and satisfying the following conditions (Inada and Asymptotic Elasticity): $U^{\prime}(0+)=\infty,\quad U^{\prime}(\infty)=0,\quad\text{AE}[U]:=\lim_{x\rightarrow\infty}x\frac{U^{\prime}(x)}{U(x)}<1.$ (ii) $U(0+)$ is finite. Without loss of generality, we assume $U(0+)=0$. (iii) Denote by $R(x)$ the risk tolerance, $R(x):=-\frac{U^{\prime}(x)}{U^{\prime\prime}(x)}.$ Assume that $R(0)=0$, $R(x)$ is strictly increasing and $R^{\prime}(x)<\infty$ on $[0,\infty)$, and there exists $K\in\mathbb{R}^{+}$, such that for $x\geq 0$, and $2\leq i\leq 5$, $$\left|\partial_{x}^{i}R^{i}(x)\right|\leq K.$$ (17) (iv) Define the inverse function of the marginal utility $U^{\prime}(x)$ as $I:\mathbb{R}^{+}\to\mathbb{R}^{+}$, $I(y)=U^{\prime(-1)}(y)$, and assume that, for some positive constants $\alpha$ and $\kappa$, $I(y)$ satisfies the polynomial growth condition: $I(y)\leq\alpha+\kappa y^{-\alpha}.$ Note that Assumption II.3(ii) is a sufficient condition, and rules out the cases $U(x)=\frac{x^{\gamma}}{\gamma}$ for $\gamma<0$, and $U(x)=\log(x)$. However, all theorems in this paper still hold under minor modifications to the proofs. Next are the model assumptions. Assumption II.4 We make the following assumptions on the state processes $(S_{t},X_{t}^{\pi^{(0)}},Y_{t})$: (i) For any starting points $(s,y)$ and fixed $\epsilon$, the system of SDEs (1)–(2) has a unique strong solution $(S_{t},Y_{t})$. Moreover, the functions $\lambda(y)$ and $a(y)$ are at most polynomially growing. (ii) The process $Y^{(1)}$ with infinitesimal generator $\mathcal{L}_{0}$ is ergodic with a unique invariant distribution, and admits moments of any order uniformly in $t\leq T$: $\sup_{t\leq T}\left\{\mathbb{E}\left|Y_{t}^{(1)}\right|^{k}\right\}\leq C(T,k)$.
The solution $\phi(y)$ of the Poisson equation $\mathcal{L}_{0}\phi=g$ is assumed to be at most polynomially growing for polynomially growing functions $g$. (iii) The wealth process $X_{\cdot}^{\pi^{(0)}}$ is in $L^{2}([0,T]\times\Omega)$ uniformly in $\epsilon$, i.e., $\mathbb{E}_{(0,x,y)}\left[\int_{0}^{T}\left(X_{s}^{\pi^{(0)}}\right)^{2}\,\mathrm{d}s\right]\leq C_{2}(T,x,y)$, where $C_{2}(T,x,y)$ is independent of $\epsilon$ and $\mathbb{E}_{(0,x,y)}[\cdot]=\mathbb{E}[\cdot|X_{0}=x,Y_{0}=y]$. II-D Existing estimates on $R(t,x;\lambda)$ and $v^{(0)}$ In this subsection, we state several estimates of the risk tolerance function $R(t,x;\lambda)$ and the zeroth order value function $v^{(0)}$, which are crucial in the proof of Theorem III.1. By Proposition II.1 and the relation (13), $v^{(0)}$ is concave in the wealth variable $x$ and decreasing in the time variable $t$, and therefore has a linear upper bound, for $(t,x)\in[0,T]\times\mathbb{R}^{+}$: $v^{(0)}(t,x)\leq v^{(0)}(0,x)\leq c+x$, for some constant $c$. Combining this with Assumption II.4(iii), we deduce: Lemma II.5 Under Assumptions II.3 and II.4, the process $v^{(0)}(\cdot,X_{\cdot}^{\pi^{(0)}})$ is in $L^{2}([0,T]\times\Omega)$ uniformly in $\epsilon$, i.e. $\forall(t,x)\in[0,T]\times\mathbb{R}^{+}$: $\mathbb{E}_{(t,x)}\left[\int_{t}^{T}\left(v^{(0)}(s,X_{s}^{\pi^{(0)}})\right)^{2}\,\mathrm{d}s\right]\leq C_{3}(T,x),$ where $v^{(0)}(t,x)$ is defined in Section II-B and satisfies equation (12). Proposition II.6 Suppose the risk tolerance $R(x)=-\frac{U^{\prime}(x)}{U^{\prime\prime}(x)}$ is strictly increasing for all $x$ in $[0,\infty)$ (this is part of Assumption II.3 (iii)). Then, for each $t\in[0,T)$, $R(t,x;\lambda)$ is strictly increasing in the wealth variable $x$.
Proposition II.7 Under Assumption II.3, the risk tolerance function $R(t,x;\lambda)$ satisfies: $\forall 0\leq j\leq 4$, $\exists K_{j}>0$, such that $\forall(t,x)\in[0,T)\times\mathbb{R}^{+}$, $\left|R^{j}(t,x;\lambda)\left(\partial_{x}^{j+1}R(t,x;\lambda)\right)\right|\leq K_{j}.$ Or equivalently, $\forall 1\leq j\leq 5$, there exists $\widetilde{K}_{j}>0$, such that $\left|\partial_{x}^{j}R^{j}(t,x;\lambda)\right|\leq\widetilde{K}_{j}.$ Moreover, one has $R(t,x;\lambda)\leq K_{0}x.$ III Portfolio performance of a given strategy Recall the strategy ${\pi^{(0)}}$ defined in (6), and assume ${\pi^{(0)}}$ is admissible. In this section, we are interested in studying its performance. That is, we give approximation results for the value function associated to ${\pi^{(0)}}$, which we denote by $V^{{\pi^{(0)}},\epsilon}$: $$V^{{\pi^{(0)}},\epsilon}(t,x,y)=\mathbb{E}\left\{U(X_{T}^{\pi^{(0)}})|X_{t}^{\pi^{(0)}}=x,Y_{t}=y\right\},$$ (18) where $U(\cdot)$ is a general utility function satisfying Assumption II.3, $X_{t}^{\pi^{(0)}}$ is the wealth process associated to the strategy ${\pi^{(0)}}$ and $Y_{t}$ is the fast factor. Our main result of this section is the following, with the proof deferred to Section III-B. Theorem III.1 Under Assumptions II.3 and II.4, the residual function $E(t,x,y)$ defined by $E(t,x,y):=V^{{\pi^{(0)}},\epsilon}(t,x,y)-v^{(0)}(t,x)-\sqrt{\epsilon}v^{(1)}(t,x),$ is of order $\epsilon$. In other words, $\forall(t,x,y)\in[0,T]\times\mathbb{R}^{+}\times\mathbb{R}$, there exists a constant $C$, such that $\left|E(t,x,y)\right|\leq C\epsilon$, where $C$ may depend on $(t,x,y)$ but not on $\epsilon$.
We recall the usual “big O” notation: $f^{\epsilon}(t,x,y)\sim\mathcal{O}(\epsilon^{k})$ means that the function $f^{\epsilon}(t,x,y)$ is of order $\epsilon^{k}$, that is, for all $(t,x,y)\in[0,T]\times\mathbb{R}^{+}\times\mathbb{R}$, there exists $C$ such that $\left|f^{\epsilon}(t,x,y)\right|\leq C\epsilon^{k}$, where $C$ may depend on $(t,x,y)$, but not on $\epsilon$. Similarly, we denote $f^{\epsilon}(t,x,y)\sim o(\epsilon^{k})$, if $\limsup_{\epsilon\to 0}|f^{\epsilon}(t,x,y)|/\epsilon^{k}=0$. Corollary III.2 In the case of power utility $U(x)=\frac{x^{\gamma}}{\gamma}$, ${\pi^{(0)}}$ is asymptotically optimal in $\mathcal{A}^{\epsilon}(t,x,y)$ up to order $\sqrt{\epsilon}$. Proof: This is obtained by directly comparing the expansion of $V^{\epsilon}$ given in [8, Corollary 6.8] with that of $V^{{\pi^{(0)}},\epsilon}$ from the above theorem. Since both quantities have the approximation $v^{(0)}+\sqrt{\epsilon}v^{(1)}$ at order $\sqrt{\epsilon}$, we have the desired result. ∎ III-A Formal expansion of $V^{{\pi^{(0)}},\epsilon}$ In the following derivation, to condense the notation, we systematically use $R$ for the risk tolerance function $R(t,x;\overline{\lambda})$, and ${\pi^{(0)}}$ for the zeroth order strategy ${\pi^{(0)}}(t,x,y)$ given in (6). By the martingale property, $V^{{\pi^{(0)}},\epsilon}$ solves the following linear PDE: $V^{{\pi^{(0)}},\epsilon}_{t}+\frac{1}{\epsilon}\mathcal{L}_{0}V^{{\pi^{(0)}},\epsilon}+\frac{1}{2}\sigma^{2}(y)\left({\pi^{(0)}}\right)^{2}V^{{\pi^{(0)}},\epsilon}_{xx}+{\pi^{(0)}}\left(\mu(y)V^{{\pi^{(0)}},\epsilon}_{x}+\frac{1}{\sqrt{\epsilon}}\rho a(y)\sigma(y)V^{{\pi^{(0)}},\epsilon}_{xy}\right)=0$, with terminal condition $V^{{\pi^{(0)}},\epsilon}(T,x,y)=U(x)$.
Define two operators $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$ by $\mathcal{L}_{1}=\rho a(y)\sigma(y){\pi^{(0)}}\partial_{xy}=\rho a(y)\lambda(y)R(t,x;\overline{\lambda})\partial_{xy}$, and $\mathcal{L}_{2}=\partial_{t}+\frac{1}{2}\sigma^{2}(y)\left({\pi^{(0)}}\right)^{2}\partial_{x}^{2}+\mu(y){\pi^{(0)}}\partial_{x}=\partial_{t}+\frac{1}{2}\lambda^{2}(y)D_{2}+\lambda^{2}(y)D_{1}$, respectively. Then this linear PDE can be rewritten as: $$\left(\mathcal{L}_{2}+\frac{1}{\sqrt{\epsilon}}\mathcal{L}_{1}+\frac{1}{\epsilon}\mathcal{L}_{0}\right)V^{{\pi^{(0)}},\epsilon}=0.$$ (19) We look for an expansion of $V^{{\pi^{(0)}},\epsilon}$ of the form $$V^{{\pi^{(0)}},\epsilon}=v^{{\pi^{(0)}},(0)}+\sqrt{\epsilon}v^{{\pi^{(0)}},(1)}+\epsilon v^{{\pi^{(0)}},(2)}+\cdots,$$ while the terminal condition for each term is: $v^{{\pi^{(0)}},(0)}(T,x,y)=U(x),\quad v^{{\pi^{(0)}},(k)}(T,x,y)=0,\text{ for }k\geq 1.$ Inserting the above expansion of $V^{{\pi^{(0)}},\epsilon}$ into (19), and collecting terms of $\mathcal{O}(\frac{1}{\epsilon})$ and $\mathcal{O}(\frac{1}{\sqrt{\epsilon}})$ gives: $\mathcal{L}_{0}v^{{\pi^{(0)}},(0)}=0,\quad\mathcal{L}_{0}v^{{\pi^{(0)}},(1)}+\mathcal{L}_{1}v^{{\pi^{(0)}},(0)}=0.$ Since $\mathcal{L}_{0}$ and $\mathcal{L}_{1}$ are operators taking derivatives in $y$, we make the choice that $v^{{\pi^{(0)}},(0)}$ and $v^{{\pi^{(0)}},(1)}$ are free of $y$. Next, collecting terms of $\mathcal{O}(1)$ yields $\mathcal{L}_{0}v^{{\pi^{(0)}},(2)}+\mathcal{L}_{2}v^{{\pi^{(0)}},(0)}=0,$ whose solvability condition (Fredholm Alternative) requires that $\left\langle\mathcal{L}_{2}v^{{\pi^{(0)}},(0)}\right\rangle=0$. This leads to a PDE satisfied by $v^{{\pi^{(0)}},(0)}$: $v^{{\pi^{(0)}},(0)}_{t}+\frac{1}{2}\overline{\lambda}^{2}\left(R\right)^{2}v^{{\pi^{(0)}},(0)}_{xx}+\overline{\lambda}^{2}Rv^{{\pi^{(0)}},(0)}_{x}=0,\quad v^{{\pi^{(0)}},(0)}(T,x)=U(x),$ which has a unique solution (see Proposition II.2).
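The order-collection step above is pure bookkeeping and can be mimicked symbolically. The sketch below treats $\mathcal{L}_{0},\mathcal{L}_{1},\mathcal{L}_{2}$ and the expansion terms as commuting symbols, which suffices to reproduce the cascade of equations (the actual objects are of course differential operators):

```python
import sympy as sp

# d stands for sqrt(epsilon); L0, L1, L2 and v0..v3 are commuting placeholders.
d = sp.symbols('d', positive=True)
L0, L1, L2 = sp.symbols('L0 L1 L2')
v0, v1, v2, v3 = sp.symbols('v0 v1 v2 v3')

# (L2 + L1/d + L0/d**2)(v0 + d*v1 + d**2*v2 + d**3*v3), cleared of negative
# powers by multiplying through by d**2.
expr = sp.expand(d**2 * (L2 + L1/d + L0/d**2)
                 * (v0 + d*v1 + d**2*v2 + d**3*v3))
orders = [sp.expand(expr.coeff(d, j)) for j in range(4)]
# The coefficients reproduce the cascade
#   L0 v0,  L0 v1 + L1 v0,  L0 v2 + L1 v1 + L2 v0,  L0 v3 + L1 v2 + L2 v1,
# each of which is set to zero (or averaged) in the derivation above.
print(orders)
```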
Since $v^{(0)}$ also solves this equation, we deduce that $$v^{{\pi^{(0)}},(0)}(t,x)\equiv v^{(0)}(t,x)=M(t,x;\overline{\lambda})$$ and $v^{{\pi^{(0)}},(2)}$ admits a solution: $$v^{{\pi^{(0)}},(2)}(t,x,y)=-\frac{1}{2}\theta(y)D_{1}v^{(0)}+C_{1}(t,x),$$ where $\theta(y)$ is defined in Section II-B and $D_{k}$ is in (9). Then, collecting terms of order $\sqrt{\epsilon}$ yields $\mathcal{L}_{2}v^{{\pi^{(0)}},(1)}+\mathcal{L}_{1}v^{{\pi^{(0)}},(2)}+\mathcal{L}_{0}v^{{\pi^{(0)}},(3)}=0,$ and the solvability condition reads $\left\langle\mathcal{L}_{2}v^{{\pi^{(0)}},(1)}+\mathcal{L}_{1}v^{{\pi^{(0)}},(2)}\right\rangle=0$. This gives an equation satisfied by $v^{{\pi^{(0)}},(1)}$: $v^{{\pi^{(0)}},(1)}_{t}+\frac{1}{2}\overline{\lambda}^{2}\left(R\right)^{2}v^{{\pi^{(0)}},(1)}_{xx}+\overline{\lambda}^{2}Rv^{{\pi^{(0)}},(1)}_{x}-\frac{1}{2}\rho BD_{1}^{2}v^{(0)}=0,\quad v^{{\pi^{(0)}},(1)}(T,x)=0$, which is exactly equation (14). This equation is uniquely solved by $v^{(1)}$ (see (15)). Thus, we obtain $$v^{{\pi^{(0)}},(1)}\equiv v^{(1)}=-\frac{1}{2}(T-t)\rho BD_{1}^{2}v^{(0)}.$$ (20) Using the solutions of $v^{{\pi^{(0)}},(1)}$ and $v^{{\pi^{(0)}},(2)}$ we just identified, one deduces an expression for $v^{{\pi^{(0)}},(3)}$: $v^{{\pi^{(0)}},(3)}=\frac{1}{2}(T-t)\theta(y)\rho B\left(\frac{1}{2}D_{2}+D_{1}\right)D_{1}^{2}v^{(0)}+\frac{1}{2}\rho\theta_{1}(y)D_{1}^{2}v^{(0)}+C_{2}(t,x),$ where $\theta_{1}(y)$ is the solution to the Poisson equation: $\mathcal{L}_{0}\theta_{1}(y)=a(y)\lambda(y)\theta^{\prime}(y)-\left\langle a\lambda\theta^{\prime}\right\rangle.$ III-B First order accuracy: proof of Theorem III.1 This section is dedicated to the proof of Theorem III.1, which amounts to showing that the residual function $E(t,x,y)$ is of order $\epsilon$.
To this end, we define the auxiliary residual function $\widetilde{E}(t,x,y)$ by $$\widetilde{E}=V^{{\pi^{(0)}},\epsilon}-(v^{(0)}+\epsilon^{1/2}v^{(1)}+\epsilon v^{{\pi^{(0)}},(2)}+\epsilon^{3/2}v^{{\pi^{(0)}},(3)}),$$ where we choose $C_{1}(t,x)=C_{2}(t,x)\equiv 0$ in the expressions of $v^{{\pi^{(0)}},(2)}$ and $v^{{\pi^{(0)}},(3)}$. Since, by definition, $\left|E(t,x,y)-\widetilde{E}(t,x,y)\right|\leq C\epsilon$, it suffices to show that $\widetilde{E}$ is of order $\epsilon$. According to the derivation in Section III-A, the auxiliary residual function $\widetilde{E}$ solves $\left(\frac{1}{\epsilon}\mathcal{L}_{0}+\frac{1}{\sqrt{\epsilon}}\mathcal{L}_{1}+\mathcal{L}_{2}\right)\widetilde{E}+\epsilon(\mathcal{L}_{1}v^{{\pi^{(0)}},(3)}+\mathcal{L}_{2}v^{{\pi^{(0)}},(2)})+\epsilon^{3/2}\mathcal{L}_{2}v^{{\pi^{(0)}},(3)}=0,$ with a terminal condition $\widetilde{E}(T,x,y)=-\epsilon v^{{\pi^{(0)}},(2)}(T,x,y)-\epsilon^{3/2}v^{{\pi^{(0)}},(3)}(T,x,y).$ Noting that $\frac{1}{\epsilon}\mathcal{L}_{0}+\frac{1}{\sqrt{\epsilon}}\mathcal{L}_{1}+\mathcal{L}_{2}$ is the infinitesimal generator of the processes $\left(X_{t}^{\pi^{(0)}},Y_{t}\right)$, one applies the Feynman-Kac formula and deduces: $$\displaystyle\widetilde{E}(t,x,y)$$ $$\displaystyle=\epsilon\mathbb{E}_{(t,x,y)}\left[\int_{t}^{T}\mathcal{L}_{1}v^{{\pi^{(0)}},(3)}(s,X_{s}^{\pi^{(0)}},Y_{s})\,\mathrm{d}s\right]$$ $$\displaystyle+\epsilon\mathbb{E}_{(t,x,y)}\left[\int_{t}^{T}\mathcal{L}_{2}v^{{\pi^{(0)}},(2)}(s,X_{s}^{\pi^{(0)}},Y_{s})\,\mathrm{d}s\right]$$ $$\displaystyle+\epsilon^{3/2}\mathbb{E}_{(t,x,y)}\left[\int_{t}^{T}\mathcal{L}_{2}v^{{\pi^{(0)}},(3)}(s,X_{s}^{\pi^{(0)}},Y_{s})\,\mathrm{d}s\right]$$ $$\displaystyle-\epsilon\mathbb{E}_{(t,x,y)}\left[v^{{\pi^{(0)}},(2)}(T,X_{T}^{\pi^{(0)}},Y_{T})\right]$$ $$\displaystyle-\epsilon^{3/2}\mathbb{E}_{(t,x,y)}\left[v^{{\pi^{(0)}},(3)}(T,X_{T}^{\pi^{(0)}},Y_{T})\right].$$ (21) The first three expectations come from the source terms while the last
two come from the terminal condition. We shall prove that each expectation above is uniformly bounded in $\epsilon$. The idea is to relate them to the leading order term $v^{(0)}$ and the risk-tolerance function $R(t,x;\lambda)$, for which some nice properties and estimates are already established (see Section II-D). For the source terms, straightforward but tedious computations give: $$\displaystyle\mathcal{L}_{2}v^{{\pi^{(0)}},(2)}=-\frac{1}{4}\theta(y)\left(\lambda^{2}(y)-\overline{\lambda}^{2}\right)D_{1}^{2}v^{(0)},$$ (22) $$\displaystyle\mathcal{L}_{1}v^{{\pi^{(0)}},(3)}=\frac{1}{2}\rho^{2}a(y)\lambda(y)\theta_{1}^{\prime}(y)D_{1}^{3}v^{(0)}+$$ (23) $$\displaystyle\quad\frac{1}{2}(T-t)\rho^{2}Ba(y)\lambda(y)\theta^{\prime}(y)D_{1}\left[\frac{1}{2}D_{2}+D_{1}\right]D_{1}^{2}v^{(0)},$$ (24) $$\displaystyle\mathcal{L}_{2}v^{{\pi^{(0)}},(3)}=\frac{1}{4}\rho\theta_{1}(y)\left(\lambda^{2}(y)-\overline{\lambda}^{2}\right)D_{1}^{3}v^{(0)}$$ (25) $$\displaystyle\quad+\frac{1}{2}\theta(y)\rho B\left\{-\left[\frac{1}{2}D_{2}+D_{1}\right]D_{1}^{2}v^{(0)}+\frac{1}{2}(T-t)\left(\lambda^{2}(y)-\overline{\lambda}^{2}\right)D_{1}^{4}v^{(0)}\right\}$$ (26) $$\displaystyle\quad+\frac{1}{4}\theta(y)\rho B(T-t)\left[\frac{1}{2}\left(\lambda^{2}(y)-\overline{\lambda}^{2}\right)D_{2}D_{1}^{3}v^{(0)}-\lambda^{2}(y)RR_{xx}(D_{2}+D_{1})D_{1}^{2}v^{(0)}\right],$$ (27) where in the computation of $\mathcal{L}_{2}v^{{\pi^{(0)}},(3)}$, we use the commutator between the operators $D_{2}$ and $\mathcal{L}_{2}$: $[\mathcal{L}_{2},D_{2}]w=\mathcal{L}_{2}D_{2}w-D_{2}\mathcal{L}_{2}w=-\lambda^{2}(y)R^{2}R_{xx}(Rw_{xx}+w_{x}).$ At terminal time $t=T$, they become $v^{{\pi^{(0)}},(2)}(T,x,y)=-\frac{1}{2}\theta(y)D_{1}v^{(0)}(T,x)$ and $v^{{\pi^{(0)}},(3)}(T,x,y)=\frac{1}{2}\rho\theta_{1}(y)BD_{1}^{2}v^{(0)}(T,x)$. Note that the quantity $RR_{xx}(t,x;\overline{\lambda})$ is bounded by a constant $K$.
This is proved for $(t,x;\overline{\lambda})\in[0,T)\times\mathbb{R}^{+}\times\mathbb{R}$ in Proposition II.7, and guaranteed by Assumption II.3(iii) for $t=T$, since by definition $R(T,x;\overline{\lambda})=R(x)$. Therefore, the expectations related to the source terms in (21) are sums of terms of the following form: $$\mathbb{E}_{(t,x,y)}\left[\int_{t}^{T}h(Y_{s})\mathcal{D}v^{(0)}(s,X_{s}^{\pi^{(0)}})\,\mathrm{d}s\right],$$ (28) where $h(y)$ is at most polynomially growing, and $\mathcal{D}v^{(0)}$ is one of the following: $D_{1}^{2}v^{(0)}$, $D_{1}^{3}v^{(0)}$, $D_{1}^{4}v^{(0)}$, $D_{1}D_{2}D_{1}^{2}v^{(0)}$, $D_{2}D_{1}^{2}v^{(0)}$, $D_{2}D_{1}^{3}v^{(0)}$. Applying the Cauchy-Schwarz inequality, (28) is bounded by $$\mathbb{E}_{(t,y)}^{1/2}\left[\int_{t}^{T}h^{2}(Y_{s})\,\mathrm{d}s\right]\mathbb{E}_{(t,x,y)}^{1/2}\left[\int_{t}^{T}\left(\mathcal{D}v^{(0)}(s,X_{s}^{\pi^{(0)}})\right)^{2}\,\mathrm{d}s\right].$$ (29) The first factor is uniformly bounded in $\epsilon$ since $Y_{t}$ admits bounded moments of any order (cf. Assumption II.4(ii)). It remains to show that the second factor is also uniformly bounded in $\epsilon$. The proof consists of repeated use of the concavity of $v^{(0)}$ and the results in Proposition II.7 and Lemma II.5. For the sake of simplicity, we shall only detail the proof when $\mathcal{D}v^{(0)}=D_{1}^{2}v^{(0)}$ and omit the rest. Since $\left|D_{1}^{2}v^{(0)}\right|=\left|RR_{x}v^{(0)}_{x}-Rv^{(0)}_{x}\right|\leq(K_{0}+1)Rv^{(0)}_{x}\leq(K_{0}+1)K_{0}xv^{(0)}_{x}\leq K_{0}(K_{0}+1)v^{(0)}$, we conclude that $$\displaystyle\mathbb{E}_{(t,x,y)}\left[\int_{t}^{T}\left(D_{1}^{2}v^{(0)}(s,X_{s}^{\pi^{(0)}},Y_{s})\right)^{2}\,\mathrm{d}s\right]\\ \displaystyle\leq K_{0}^{2}(K_{0}+1)^{2}\mathbb{E}_{(t,x,y)}\left[\int_{t}^{T}\left(v^{(0)}(s,X_{s}^{\pi^{(0)}})\right)^{2}\,\mathrm{d}s\right]$$ is uniformly bounded in $\epsilon$ by Lemma II.5.
Straightforward but tedious computations show that the remaining terms in (28) are also bounded by multiples of $Rv^{(0)}$; the boundedness is then again ensured by the relation $R(t,x;\overline{\lambda})\leq K_{0}x$, the concavity of $v^{(0)}$, and Lemma II.5. The last two expectations in (21) are treated similarly by using Assumption II.3 (17) and the concavity of $U(x)$. Therefore, we have shown that $\left|\widetilde{E}(t,x,y)\right|\leq\widetilde{C}\epsilon$. By the inequality $\left|E(t,x,y)\right|\leq\widetilde{C}\epsilon+\epsilon\left|v^{{\pi^{(0)}},(2)}(t,x,y)\right|+\epsilon^{3/2}\left|v^{{\pi^{(0)}},(3)}(t,x,y)\right|\leq C\epsilon,$ we obtain the desired result. IV The Asymptotic Optimality of ${\pi^{(0)}}$ The goal of this section is to show that the strategy ${\pi^{(0)}}$ defined in (6) asymptotically outperforms every family $\mathcal{A}_{0}^{\epsilon}\left[\widetilde{\pi}^{0},\widetilde{\pi}^{1},\alpha\right]$, as precisely stated in our main Theorem I.1 in Section I. For a fixed choice of $(\widetilde{\pi}^{0}$, $\widetilde{\pi}^{1})$ and positive $\alpha$, recall the definition of $\mathcal{A}_{0}^{\epsilon}\left[\widetilde{\pi}^{0},\widetilde{\pi}^{1},\alpha\right]$ in (5). The reason to consider the perturbation around $\widetilde{\pi}^{0}$ at any positive order ($\alpha>0$) is that $\mathcal{A}_{0}^{\epsilon}\left[\widetilde{\pi}^{0},\widetilde{\pi}^{1},0\right]=\mathcal{A}_{0}^{\epsilon}\left[\widetilde{\pi}^{0}+\widetilde{\pi}^{1},0,\alpha\right]$.
Assumption IV.1 For the triplet $(\widetilde{\pi}^{0}$, $\widetilde{\pi}^{1}$, $\alpha)$, we require: (i) The whole family (in $\epsilon$) of strategies $\{\widetilde{\pi}^{0}+\epsilon^{\alpha}\widetilde{\pi}^{1}\}$ is admissible; (ii) Let $(\widetilde{X}_{s}^{t,x})_{t\leq s\leq T}$ be the solution to: $$\,\mathrm{d}\widetilde{X}_{s}=\left\langle\mu(\cdot)\widetilde{\pi}^{0}(s,\widetilde{X}_{s},\cdot)\right\rangle\,\mathrm{d}s+\sqrt{\left\langle\sigma^{2}(\cdot)\widetilde{\pi}^{0}(s,\widetilde{X}_{s},\cdot)^{2}\right\rangle}\,\mathrm{d}W_{s},$$ (30) starting at $x$ at time $t$. By (i), $\widetilde{X}_{s}^{t,x}$ is nonnegative, and we further assume that it has full support $\mathbb{R}^{+}$ for any $t<s\leq T$. Remark IV.2 Part (ii) is motivated by the following observation. Let $\widehat{X}_{s}^{t,x}$ be the solution to $$\,\mathrm{d}\widehat{X}_{s}=\left\langle\mu(\cdot){\pi^{(0)}}(s,\widehat{X}_{s},\cdot)\right\rangle\,\mathrm{d}s+\sqrt{\left\langle\sigma^{2}(\cdot){\pi^{(0)}}(s,\widehat{X}_{s},\cdot)^{2}\right\rangle}\,\mathrm{d}W_{s}.$$ Noticing that $\left\langle\mu(\cdot){\pi^{(0)}}(t,x,\cdot)\right\rangle=\overline{\lambda}^{2}R(t,x;\overline{\lambda})$ and $\sqrt{\left\langle\sigma^{2}(\cdot){\pi^{(0)}}(t,x,\cdot)^{2}\right\rangle}=\overline{\lambda}R(t,x;\overline{\lambda})$, $\widehat{X}_{s}$ can be interpreted as the optimal wealth process of the classical Merton problem with averaged Sharpe ratio $\overline{\lambda}$. From [12, Proposition 7], one has $\widehat{X}_{s}^{t,x}=H\left(H^{-1}(x,t,\overline{\lambda})+\overline{\lambda}^{2}(s-t)+\overline{\lambda}(W_{s}-W_{t}),s,\overline{\lambda}\right),$ where $H:\mathbb{R}\times[0,T]\times\mathbb{R}\to\mathbb{R}^{+}$ solves the heat equation $H_{t}+\frac{1}{2}\overline{\lambda}^{2}H_{xx}=0$, and is of full range in $x$.
Consequently, $\widehat{X}_{s}^{t,x}$ has full support $\mathbb{R}^{+}$, and thus it is natural to require that $\widetilde{X}_{s}^{t,x}$ has full support $\mathbb{R}^{+}$ as well. Denote by $\widetilde{V}^{\epsilon}$ the value function associated with the trading strategy $\pi:=\widetilde{\pi}^{0}+\epsilon^{\alpha}\widetilde{\pi}^{1}\in\mathcal{A}_{0}^{\epsilon}\left[\widetilde{\pi}^{0},\widetilde{\pi}^{1},\alpha\right]$: $$\widetilde{V}^{\epsilon}(t,x,y)=\mathbb{E}\left[U(X_{T}^{\pi})|X_{t}^{\pi}=x,Y_{t}=y\right],$$ (31) where $X_{t}^{\pi}$ is the wealth process following the strategy $\pi\in\mathcal{A}_{0}^{\epsilon}$, and $Y_{t}$ is fast mean-reverting with the same $\epsilon$. The idea is to compare $\widetilde{V}^{\epsilon}$ with $V^{{\pi^{(0)}},\epsilon}$ defined in (18), for which a rigorous first order approximation $v^{(0)}+\sqrt{\epsilon}v^{(1)}$ has been established in Theorem III.1. After finding the expansion of $\widetilde{V}^{\epsilon}$, the comparison is done asymptotically in $\epsilon$ up to order $\sqrt{\epsilon}$. Section IV-A focuses on the approximation of $\widetilde{V}^{\epsilon}$, while Section IV-B gives the proof of Theorem I.1.
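As a quick numerical illustration of Remark IV.2, the averaged dynamics of $\widehat{X}$ can be simulated directly. The sketch below assumes power utility, for which the risk tolerance is $R(t,x;\overline{\lambda})=x/\gamma$; the parameter values $\overline{\lambda}=0.5$ and $\gamma=2$ are illustrative choices, not values taken from the paper.

```python
import numpy as np

# Euler-Maruyama simulation of  dX = lam^2 R dt + lam R dW  with R(t,x) = x/gamma
# (power utility; lam and gamma are illustrative assumptions)
rng = np.random.default_rng(0)
lam, gamma = 0.5, 2.0
T, n_steps, n_paths = 1.0, 400, 2000
dt = T / n_steps
X = np.full(n_paths, 1.0)          # all paths start at wealth x = 1
for _ in range(n_steps):
    R = X / gamma                  # risk tolerance of the averaged Merton problem
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    X += lam**2 * R * dt + lam * R * dW
# With R(t,x) = x/gamma the process is a geometric Brownian motion,
# consistent with the full support on R^+ claimed above.
```

With this choice of $R$ the simulated paths stay strictly positive, matching the full-support property of $\widehat{X}_{s}^{t,x}$.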
IV-A Approximations of the Value Function $\widetilde{V}^{\epsilon}$ Denote by $\mathcal{L}$ the infinitesimal generator of the state processes $(X_{t}^{\pi},Y_{t})$: $\mathcal{L}:=\frac{1}{\epsilon}\mathcal{L}_{0}+\frac{1}{2}\sigma^{2}(y)\left(\widetilde{\pi}^{0}+\epsilon^{\alpha}\widetilde{\pi}^{1}\right)^{2}\partial_{xx}+\left(\widetilde{\pi}^{0}+\epsilon^{\alpha}\widetilde{\pi}^{1}\right)\mu(y)\partial_{x}+\frac{1}{\sqrt{\epsilon}}\rho a(y)\sigma(y)\left(\widetilde{\pi}^{0}+\epsilon^{\alpha}\widetilde{\pi}^{1}\right)\partial_{xy};$ then, by the martingale property, the value function $\widetilde{V}^{\epsilon}$ defined in (31) satisfies $$\partial_{t}\widetilde{V}^{\epsilon}+\mathcal{L}\widetilde{V}^{\epsilon}=0,\qquad\widetilde{V}^{\epsilon}(T,x,y)=U(x).$$ (32) Motivated by the fact that the lowest positive power of $\epsilon$ appearing in the operator $\mathcal{L}$ is $\epsilon^{\alpha}$, we propose the following expansion form for $\widetilde{V}^{\epsilon}$ $$\widetilde{V}^{\epsilon}=\widetilde{v}^{(0)}+\epsilon^{\alpha}\widetilde{v}^{1\alpha}+\epsilon^{2\alpha}\widetilde{v}^{2\alpha}+\cdots+\epsilon^{n\alpha}\widetilde{v}^{n\alpha}+\sqrt{\epsilon}\,\widetilde{v}^{(1)}+\cdots,$$ (33) where $n$ is the largest integer such that $n\alpha<1/2$; for the case $\alpha>1/2$, $n$ is simply zero. In the derivation, we aim at identifying the zeroth order term $\widetilde{v}^{(0)}$ and the first non-zero term up to order $\sqrt{\epsilon}$. Clearly, the term following $\widetilde{v}^{(0)}$ will depend on the value of $\alpha$.
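The integer $n$ in (33) is determined by $\alpha$ alone. A one-line helper (purely illustrative, not part of the paper) makes the rule concrete:

```python
import math

def n_of_alpha(alpha: float) -> int:
    """Largest integer n with n*alpha < 1/2 (strict inequality),
    so n = 0 whenever alpha >= 1/2."""
    if alpha <= 0:
        raise ValueError("alpha must be positive")
    # largest n < 1/(2*alpha), whether or not 1/(2*alpha) is an integer
    return math.ceil(1 / (2 * alpha)) - 1
```

For instance, $\alpha=1/4$ gives $n=1$ (since $2\alpha=1/2$ is excluded by the strict inequality), while any $\alpha\geq 1/2$ gives $n=0$.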
To further simplify the notation, we decompose $\partial_{t}+\mathcal{L}$ according to the different powers of $\epsilon$ as follows: $$\partial_{t}+\mathcal{L}=\frac{1}{\epsilon}\mathcal{L}_{0}+\frac{1}{\sqrt{\epsilon}}\widetilde{\mathcal{L}}_{1}+\widetilde{\mathcal{L}}_{2}+\epsilon^{\alpha}\widetilde{\mathcal{L}}_{3}+\epsilon^{2\alpha}\widetilde{\mathcal{L}}_{4}+\epsilon^{\alpha-1/2}\widetilde{\mathcal{L}}_{5},$$ (34) where the operators $\widetilde{\mathcal{L}}_{i}$ are defined by: $\widetilde{\mathcal{L}}_{1}=\widetilde{\pi}^{0}\rho_{1}a(y)\sigma(y)\partial_{xy}$, $\widetilde{\mathcal{L}}_{2}=\partial_{t}+\frac{1}{2}\sigma^{2}(y)\left(\widetilde{\pi}^{0}\right)^{2}\partial_{xx}+\widetilde{\pi}^{0}\mu(y)\partial_{x}$, $\widetilde{\mathcal{L}}_{3}=\sigma^{2}(y)\widetilde{\pi}^{0}\widetilde{\pi}^{1}\partial_{xx}+\widetilde{\pi}^{1}\mu(y)\partial_{x}$, $\widetilde{\mathcal{L}}_{4}=\frac{1}{2}\sigma^{2}(y)\left(\widetilde{\pi}^{1}\right)^{2}\partial_{xx}$ and $\widetilde{\mathcal{L}}_{5}=\widetilde{\pi}^{1}\rho_{1}a(y)\sigma(y)\partial_{xy}$. In all cases, we first collect terms of order $\epsilon^{\beta}$ in (32) with $\beta\in[-1,0)$. Noticing that $\mathcal{L}_{0}$ and $\widetilde{\mathcal{L}}_{1}$ (also $\widetilde{\mathcal{L}}_{5}$ when $\alpha<1/2$) take derivatives in $y$, we are able to make the choice that the approximation of $\widetilde{V}^{\epsilon}$ up to order $\epsilon^{\beta^{\prime}}$, $\beta^{\prime}<1$, is free of $y$. In the following derivation this choice is made for every case; consequently, we will not repeat it, and will start the argument by collecting terms of order 1. Different orders of approximation are obtained depending on whether $\widetilde{\pi}^{0}$ is identical to ${\pi^{(0)}}$ or not.
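Since the case analysis below hinges on which operators of (34) share a power of $\epsilon$, exact arithmetic over the exponents is convenient. The following bookkeeping sketch (an illustrative helper, not part of the paper) groups the operators by their power of $\epsilon$ for a given $\alpha$:

```python
from collections import defaultdict
from fractions import Fraction as F

def order_table(alpha: F) -> dict:
    """Map each power of epsilon appearing in (34) to the operators carrying it."""
    terms = [(F(-1), "L_0"), (F(-1, 2), "tilde L_1"), (F(0), "tilde L_2"),
             (alpha, "tilde L_3"), (2 * alpha, "tilde L_4"),
             (alpha - F(1, 2), "tilde L_5")]
    table = defaultdict(list)
    for power, name in terms:
        table[power].append(name)
    return dict(sorted(table.items()))

tab = order_table(F(1, 4))
# for alpha = 1/4 the powers are -1, -1/2, -1/4, 0, 1/4, 1/2,
# so tilde L_5 acts at the negative order -1/4, as used in the text
```

For $\alpha=1/2$ the table shows $\widetilde{\mathcal{L}}_{5}$ entering at order 1 alongside $\widetilde{\mathcal{L}}_{2}$, which is exactly why its contribution appears in the order-1 equation only in that case.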
IV-A1 Case $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$ We first analyze the case $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$, in which $\widetilde{\mathcal{L}}_{1}$ and $\widetilde{\mathcal{L}}_{2}$ coincide with $\mathcal{L}_{1}$ and $\mathcal{L}_{2}$, and $\widetilde{\mathcal{L}}_{3}v^{(0)}=0$. The terms of order 1 form a Poisson equation for $\widetilde{v}^{(2)}$ $$\displaystyle\mathcal{L}_{0}\widetilde{v}^{(2)}+\mathcal{L}_{2}\widetilde{v}^{(0)}=0,\quad\widetilde{v}^{(0)}(T,x)=U(x).$$ (35) For different values of $\alpha$, there might be extra terms which are eventually zero and thus are not included in the above equation: $\mathcal{L}_{1}\widetilde{v}^{(1)}$ (all cases), $\widetilde{\mathcal{L}}_{5}\widetilde{v}^{(0)}$ when $\alpha=1/2$, and $\widetilde{\mathcal{L}}_{5}\widetilde{v}^{k\alpha}$ when $(k+1)\alpha=1/2$. By the solvability condition, $\widetilde{v}^{(0)}$ solves (11), which possesses a unique solution $v^{(0)}$. Therefore, we deduce $$\widetilde{v}^{(0)}\equiv v^{(0)},\text{ and }\quad\widetilde{v}^{(2)}\equiv v^{{\pi^{(0)}},(2)}.$$ (36) (i) $\alpha=1/2$. We then collect terms of order $\epsilon^{1/2}$: $$\displaystyle\mathcal{L}_{0}\widetilde{v}^{(3)}+\mathcal{L}_{2}\widetilde{v}^{(1)}+\mathcal{L}_{1}\widetilde{v}^{(2)}+\widetilde{\mathcal{L}}_{3}\widetilde{v}^{(0)}+\widetilde{\mathcal{L}}_{5}\widetilde{v}^{(1)}=0.$$ (37) This is a Poisson equation for $\widetilde{v}^{(3)}$, for which the solvability condition gives that $\widetilde{v}^{(1)}$ solves equation (15). Here we have used $\widetilde{\mathcal{L}}_{3}\widetilde{v}^{(0)}=\widetilde{\mathcal{L}}_{3}v^{(0)}=0$, $\widetilde{v}^{(2)}=v^{{\pi^{(0)}},(2)}$ and $\widetilde{\mathcal{L}}_{5}\widetilde{v}^{(1)}=0$. Since this equation has a unique solution, one deduces: $$\widetilde{v}^{(1)}=v^{(1)},\text{ and }\quad\widetilde{v}^{(3)}\equiv v^{{\pi^{(0)}},(3)}.$$ (38) (ii) $\alpha>1/2$.
Collecting terms of order $\sqrt{\epsilon}$ yields a Poisson equation for $\widetilde{v}^{(3)}$: $$\mathcal{L}_{0}\widetilde{v}^{(3)}+\mathcal{L}_{2}\widetilde{v}^{(1)}+\mathcal{L}_{1}\widetilde{v}^{(2)}+\widetilde{\mathcal{L}}_{5}\widetilde{v}^{(0)}=0,$$ (39) where the term $\widetilde{\mathcal{L}}_{5}\widetilde{v}^{(0)}$ only exists when $\alpha=1$ (in any case, both $\mathcal{L}_{1}\widetilde{v}^{(1)}$ and $\widetilde{\mathcal{L}}_{5}\widetilde{v}^{(0)}$ disappear, since their arguments are independent of $y$). Arguments similar to the case $\alpha=1/2$ give that $$\widetilde{v}^{(1)}=v^{(1)},\text{ and }\quad\widetilde{v}^{(3)}=v^{{\pi^{(0)}},(3)}.$$ (40) (iii) $\alpha<1/2$. The next order is $\epsilon^{\alpha}$, $$\mathcal{L}_{0}\widetilde{v}^{\alpha+1}+\mathcal{L}_{2}\widetilde{v}^{1\alpha}+\widetilde{\mathcal{L}}_{3}\widetilde{v}^{(0)}+\mathcal{L}_{1}\widetilde{v}^{\alpha+1/2}+\widetilde{\mathcal{L}}_{5}\widetilde{v}^{(1)}=0.$$ (41) Again the last three terms disappear, owing to the fact that $\widetilde{\mathcal{L}}_{3}\widetilde{v}^{(0)}=\widetilde{\mathcal{L}}_{3}v^{(0)}=0$ and to the independence of $y$ of $\widetilde{v}^{\alpha+1/2}$ and $\widetilde{v}^{(1)}$. Then, by the solvability condition, $\widetilde{v}^{1\alpha}$ solves $$\mathcal{L}_{t,x}(\overline{\lambda})\widetilde{v}^{1\alpha}(t,x;\overline{\lambda})=0,\quad\widetilde{v}^{1\alpha}(T,x)=0,$$ which only has the trivial solution $\widetilde{v}^{1\alpha}\equiv 0$. Consequently, we need to identify the next non-zero term. • $1/4<\alpha<1/2$. The next order is $\sqrt{\epsilon}$, which gives the following PDE $$\mathcal{L}_{0}\widetilde{v}^{(3)}+\mathcal{L}_{2}\widetilde{v}^{(1)}+\mathcal{L}_{1}\widetilde{v}^{(2)}=0.$$ (42) It coincides with (15) after using the solvability condition, and we deduce $\widetilde{v}^{(1)}\equiv v^{(1)}$ and $\widetilde{v}^{(3)}\equiv v^{{\pi^{(0)}},(3)}$. • $\alpha=1/4$.
The next order is $\sqrt{\epsilon}$, and the Poisson equation for $\widetilde{v}^{(3)}$ becomes $$\mathcal{L}_{0}\widetilde{v}^{(3)}+\mathcal{L}_{2}\widetilde{v}^{(1)}+\mathcal{L}_{1}\widetilde{v}^{(2)}+\widetilde{\mathcal{L}}_{4}\widetilde{v}^{(0)}=0.$$ (43) The solvability condition reads $\mathcal{L}_{t,x}(\overline{\lambda})\widetilde{v}^{(1)}-\frac{1}{2}\rho BD_{1}^{2}v^{(0)}-\frac{1}{2}\overline{\lambda}^{2}D_{1}v^{(0)}=0$. Comparing this equation with (15) and using the concavity of $v^{(0)}$, one deduces $\widetilde{v}^{(1)}\leq v^{(1)}$. • $\alpha<1/4$. The next order is $\epsilon^{2\alpha}$ since $2\alpha<1/2$, and $\mathcal{L}_{0}\widetilde{v}^{2\alpha+1}+\mathcal{L}_{2}\widetilde{v}^{2\alpha}+\mathcal{L}_{1}\widetilde{v}^{2\alpha+1/2}+\widetilde{\mathcal{L}}_{3}\widetilde{v}^{1\alpha}+\widetilde{\mathcal{L}}_{4}\widetilde{v}^{(0)}+\widetilde{\mathcal{L}}_{5}\widetilde{v}^{\alpha+1/2}=0,\quad\widetilde{v}^{2\alpha}(T,x)=0.$ The third, fourth and sixth terms vanish, since $\widetilde{v}^{1\alpha}\equiv 0$, and $\widetilde{v}^{2\alpha+1/2}$ and $\widetilde{v}^{\alpha+1/2}$ are free of $y$. Using the solvability condition, one has $$\widetilde{v}^{2\alpha}_{t}+\frac{1}{2}\overline{\lambda}^{2}R^{2}\widetilde{v}^{2\alpha}_{xx}+\overline{\lambda}R\widetilde{v}^{2\alpha}_{x}+\frac{1}{2}\left\langle\sigma^{2}(\cdot)\left(\widetilde{\pi}^{1}(t,x,\cdot)\right)^{2}\right\rangle v^{(0)}_{xx}=0.$$ (44) Assuming that $\widetilde{\pi}^{1}$ is not identically zero, we claim $\widetilde{v}^{2\alpha}<0$ by the strict concavity of $v^{(0)}$. This completes the discussion of the case $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$, and we continue with the case $\widetilde{\pi}^{0}\not\equiv{\pi^{(0)}}$.
IV-A2 Case $\widetilde{\pi}^{0}\not\equiv{\pi^{(0)}}$ In this case, after collecting terms of order 1 and using the solvability condition, one has the following PDE for $\widetilde{v}^{(0)}$ $$\widetilde{v}^{(0)}_{t}+\frac{1}{2}\left\langle\sigma^{2}(\cdot)\widetilde{\pi}^{0}(t,x,\cdot)^{2}\right\rangle\widetilde{v}^{(0)}_{xx}+\left\langle\widetilde{\pi}^{0}(t,x,\cdot)\mu(\cdot)\right\rangle\widetilde{v}^{(0)}_{x}=0.$$ (45) To compare $\widetilde{v}^{(0)}$ with $v^{(0)}$, we rewrite (12) in the same pattern: $v^{(0)}_{t}+\frac{1}{2}\left\langle\sigma^{2}(\cdot)\widetilde{\pi}^{0}(t,x,\cdot)^{2}\right\rangle v^{(0)}_{xx}+\left\langle\widetilde{\pi}^{0}(t,x,\cdot)\mu(\cdot)\right\rangle v^{(0)}_{x}-\frac{1}{2}\left\langle\sigma^{2}(\cdot)\left(\widetilde{\pi}^{0}-{\pi^{(0)}}\right)^{2}(t,x,\cdot)\right\rangle v^{(0)}_{xx}=0$, via the relation $-\left\langle\sigma^{2}(y)(\widetilde{\pi}^{0}-{\pi^{(0)}}){\pi^{(0)}}\right\rangle v^{(0)}_{xx}=\left\langle(\widetilde{\pi}^{0}-{\pi^{(0)}})\mu(y)\right\rangle v^{(0)}_{x}.$ Again, by the strict concavity of $v^{(0)}$ and the Feynman–Kac formula, we obtain $\widetilde{v}^{(0)}<v^{(0)}$. To fully justify the above expansions, additional assumptions similar to [6, Appendix C] are needed. They are technical uniform (in $\epsilon$) integrability conditions on the strategies $\mathcal{A}_{0}^{\epsilon}[\widetilde{\pi}^{0},\widetilde{\pi}^{1},\alpha]$. For the sake of simplicity, we omit these conditions here and refer to Appendix C for further details. Now we summarize the above derivation as follows. Proposition IV.3 Under the standing assumptions, we obtain the accuracy results summarized in Tab. I, where the accuracy column gives the order of the difference between $\widetilde{V}^{\epsilon}$ and its approximation.
Moreover, in the case $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$, we have the relation $\widetilde{v}^{(1)}\leq v^{(1)}$ when $\alpha=1/4$, and $\widetilde{v}^{2\alpha}<0$ when $\alpha<1/4$; otherwise, if $\widetilde{\pi}^{0}\not\equiv{\pi^{(0)}}$, then $\widetilde{v}^{(0)}<v^{(0)}$. IV-B Asymptotic Optimality: Proof of Theorem I.1 This section contains the proof of Theorem I.1. Specifically, we shall compare the first order approximation $v^{(0)}+\sqrt{\epsilon}{v^{(1)}}$ of $V^{{\pi^{(0)}},\epsilon}$ obtained in Theorem III.1 with the approximation of $\widetilde{V}^{\epsilon}$ summarized in Tab. I. In the case that the approximation of $\widetilde{V}^{\epsilon}$ is $v^{(0)}+\sqrt{\epsilon}v^{(1)}$, the limit is easily verified to be zero. When the approximation of $\widetilde{V}^{\epsilon}$ is $v^{(0)}+\sqrt{\epsilon}\widetilde{v}^{(1)}$, the limit $\ell$ is non-positive but stays finite, by the fact that $\widetilde{v}^{(1)}\leq v^{(1)}$. If $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$ and $\alpha<1/4$, the limit $\ell$ is computed as $$\ell=\lim_{\epsilon\to 0}\frac{\epsilon^{2\alpha}\widetilde{v}^{2\alpha}-\sqrt{\epsilon}v^{(1)}+\mathcal{O}(\epsilon^{3\alpha\wedge 1/2})}{\sqrt{\epsilon}}=-\infty,$$ since $\widetilde{v}^{2\alpha}<0$. A similar argument also applies to the case $\widetilde{\pi}^{0}\not\equiv{\pi^{(0)}}$, and leads to $\ell=-\infty$. Thus we complete the proof.
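The divergence of $\ell$ for $\alpha<1/4$ is simply the statement that $\epsilon^{2\alpha}$ dominates $\sqrt{\epsilon}$ as $\epsilon\to 0$. Numerically, with the hypothetical choice $\alpha=0.2$:

```python
# epsilon^{2a}/sqrt(epsilon) = epsilon^{2a - 1/2}; for a = 0.2 the exponent is
# -0.1 < 0, so the ratio blows up as epsilon -> 0, driving l to -infinity
# (since tilde v^{2a} < 0 multiplies it).
alpha = 0.2
ratios = [eps ** (2 * alpha - 0.5) for eps in (1e-2, 1e-4, 1e-6)]
```

The three ratios grow like $10^{0.2}$, $10^{0.4}$, $10^{0.6}$, confirming the unbounded growth of the prefactor.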
In fact, this limit can be understood according to the following four cases: (i) $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$ and $\ell=0$: $\widetilde{V}^{\epsilon}=V^{{\pi^{(0)}},\epsilon}+o(\sqrt{\epsilon})$; (ii) $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$ and $-\infty<\ell<0$: $\widetilde{V}^{\epsilon}=V^{{\pi^{(0)}},\epsilon}+\mathcal{O}(\sqrt{\epsilon})$ with $\mathcal{O}(\sqrt{\epsilon})<0$; (iii) $\widetilde{\pi}^{0}\equiv{\pi^{(0)}}$ and $\ell=-\infty$: $\widetilde{V}^{\epsilon}=V^{{\pi^{(0)}},\epsilon}+\mathcal{O}(\epsilon^{2\alpha})$ with $\mathcal{O}(\epsilon^{2\alpha})<0$ and $2\alpha<1/2$; (iv) $\widetilde{\pi}^{0}\not\equiv{\pi^{(0)}}$: $\displaystyle\lim_{\epsilon\to 0}\widetilde{V}^{\epsilon}(t,x,z)<\lim_{\epsilon\to 0}V^{{\pi^{(0)}},\epsilon}(t,x,z).$ References [1] M. K. Brunnermeier and S. Nagel. Do wealth fluctuations generate time-varying risk aversion? Micro-evidence on individuals’ asset allocation (digest summary). American Economic Review, 98(3):713–736, 2008. [2] G. Chacko and L. M. Viceira. Dynamic consumption and portfolio choice with stochastic volatility in incomplete markets. Review of Financial Studies, 18(4):1369–1402, 2005. [3] D. Cuoco and J. Cvitanić. Optimal consumption choices for a ‘large’ investor. Journal of Economic Dynamics and Control, 22(3):401–436, 1998. [4] J. Cvitanić and I. Karatzas. On portfolio optimization under “drawdown” constraints. IMA Volumes in Mathematics and its Applications, 65:35–35, 1995. [5] R. Elie and N. Touzi. Optimal lifetime consumption and investment under a drawdown constraint. Finance and Stochastics, 12:299–330, 2008. [6] J.-P. Fouque and R. Hu. Asymptotic optimal strategy for portfolio optimization in a slowly varying stochastic environment. SIAM Journal on Control and Optimization, 5(3), 2017. [7] J.-P. Fouque, G. Papanicolaou, R. Sircar, and K. Sølna. Multiscale Stochastic Volatility for Equity, Interest-Rate and Credit Derivatives. Cambridge University Press, 2011. [8] J.-P. Fouque, R.
Sircar, and T. Zariphopoulou. Portfolio optimization & stochastic volatility asymptotics. Mathematical Finance, 2015. [9] S. J. Grossman and Z. Zhou. Optimal investment strategies for controlling drawdowns. Mathematical Finance, 3:241–276, 1993. [10] P. Guasoni and J. Muhle-Karbe. Portfolio choice with transaction costs: a user’s guide. In Paris-Princeton Lectures on Mathematical Finance 2013, pages 169–201. Springer, 2013. [11] R. Hu. Asymptotic methods for portfolio optimization problem in multiscale stochastic environments, 2017. In preparation. [12] S. Källblad and T. Zariphopoulou. Qualitative analysis of optimal investment strategies in log-normal markets. Available at SSRN 2373587, 2014. [13] I. Karatzas and S. E. Shreve. Methods of Mathematical Finance. Springer Science & Business Media, 1998. [14] M. Lorig and R. Sircar. Portfolio optimization under local-stochastic volatility: Coefficient Taylor series approximations and implied Sharpe ratio. SIAM Journal on Financial Mathematics, 7(1):418–447, 2016. [15] M. J. Magill and G. M. Constantinides. Portfolio selection with transactions costs. Journal of Economic Theory, 13:245–263, 1976. [16] R. C. Merton. Lifetime portfolio selection under uncertainty: The continuous-time case. Review of Economics and Statistics, 51:247–257, 1969. [17] R. C. Merton. Optimum consumption and portfolio rules in a continuous-time model. Journal of Economic Theory, 3(4):373–413, 1971. [18] T. Zariphopoulou. Optimal investment and consumption models with non-linear stock dynamics. Mathematical Methods of Operations Research, 50(2):271–296, 1999.
A high order projection method for low Mach number flows A. Bermúdez, S. Busto, J.L. Ferrín, M.E. Vázquez-Cendón Abstract The high order projection hybrid finite volume/finite element method presented in [BFSVC14] and [BFTVC18] is extended to solve compressible low Mach number flows. The main differences with the incompressible method are highlighted. A detailed analysis of the flux term is presented, aiming at correcting the spurious oscillations originated by changes in density. As a result, a new upwind term is added to the numerical flux function. Moreover, the LADER methodology is applied, so that a scheme of second order in space and time is obtained. The mass conservation equation is used to compute the extrapolated and half-step in time evolved values of the density. The proposed modifications to the methodology are supported by the study of the scalar advection-diffusion-reaction equation with time- and space-dependent coefficients. The formal order of accuracy of the schemes is assessed for various test problems. Keywords: compressible low Mach number flows, projection method, finite volume methods, finite element methods, high order schemes. A high order projection method for low Mach number flows A. Bermúdez${}^{1,2,3}$, S. Busto${}^{1}$, J.L. Ferrín${}^{1,2,3}$, M.E. Vázquez-Cendón${}^{1,2,3}$††E-mail addresses: [email protected] (A. Bermúdez), [email protected] (S. Busto), [email protected] (J.L. Ferrín), [email protected] (M.E. Vázquez-Cendón)†† February 2018 ${}^{1}$Departamento de Matemática Aplicada, Universidade de Santiago de Compostela, Facultad de Matemáticas, 15782 Santiago de Compostela, Spain. ${}^{2}$Instituto de Matemáticas, Universidade de Santiago de Compostela, Facultad de Matemáticas, 15782 Santiago de Compostela, Spain. ${}^{3}$ITMATI, Campus Vida, 15782 Santiago de Compostela, Spain. 1 Introduction Low Mach number flows are involved in a wide variety of natural phenomena and industrial processes.
Therefore, an increasing number of researchers have focused, during the last decades, on better understanding their behaviour. Classical approaches to solving this kind of flows comprise mixed and discontinuous Galerkin finite element methods (see [BF91], [HFB86], [CKS00], [CS01] and [TD17]) and finite volume methods (see [Tor09] and references therein). It has been shown that, among low Mach number flows, the distinction between compressible and incompressible regimes deeply changes the physics and also the numerics. While for compressible flows the pressure is directly related to other flow variables, such as density and energy, via a state equation, for incompressible flows the pressure acts as a Lagrange multiplier that adapts itself to ensure that the velocity satisfies the incompressibility condition. In order to handle the latter situation, the typical explicit stage of finite volume methods has to be complemented with the so-called projection stage, where a pressure correction is computed in order to get a divergence-free velocity. Many papers in the literature are devoted to introducing and analysing projection finite volume methods for the incompressible Navier-Stokes equations (see, for instance, [BFSVC14], [BFTVC18], [BDA${}^{+}$02] or [Col90]). These projection methods, initially introduced for incompressible flows, can be easily adapted to low Mach number flows. In fact, the main difference is that the divergence-free condition for the velocity is replaced by an equation prescribing the divergence of the linear momentum density, which is a conservative variable. In order to get stability, staggered grids have been used to discretize velocity and pressure. While this can be done straightforwardly in the context of structured meshes, the adaptation to unstructured meshes is more challenging (see [BDDVC98], [GGHL08], [GLL12], [PBH04], [TA07]).
The scope of this paper is to extend the high order projection hybrid FV/FE method introduced in [BFSVC14] and [BFTVC18] to simulate incompressible flows, to the case of compressible low Mach number flows. To this end we will assume that the temperature and the composition of the mixture are known (maybe by solving energy and species balance equations), so that the system of equations to be solved reduces to the mass and momentum conservation equations and the equation of state. Starting from a 3D tetrahedral finite element mesh of the computational domain, the equation of the transport-diffusion stage is discretized by a finite volume method associated with a dual finite volume mesh whose nodes are the barycentres of the faces of the initial tetrahedra. These volumes, which allow for an easy implementation of flux boundary conditions, have already been used, among others, for the 2D shallow water equations (see [BDDVC98]), for solving conservative and non-conservative systems in 2D and 3D (see [THD09] and [DHC${}^{+}$10]), and for DG schemes employed to solve the compressible Navier-Stokes equations (see [TD17]). Further, for the time discretization we use the explicit Euler scheme. One of the main challenges regarding the compressible Navier-Stokes equations is the time and space dependency of the density. The use of classical upwind numerical flux functions, like the ones involved in the $Q$-scheme of van Leer or the Rusanov scheme, may produce spurious oscillations in the solution. In order to gain insight into their causes, we simplify the problem by considering the scalar advection equation with a variable advection coefficient. Analysing the flux term, we observe that this bad behaviour may be caused by a lack of upwinding related to the spatial derivative of the normal flux with respect to the density (see [BLVC17] for a detailed analysis of the multi-component Euler equations).
Therefore, we propose a modification of the flux function aiming to ensure the stability of the resulting scheme. Seeking to attain a scheme of second order in time and space, we apply the LADER methodology, a modification of ADER methods (see [BFTVC18], [TMN01] and [Tor09]). This fully discrete approach relies on non-linear reconstructions and the resolution of generalized Riemann problems to produce schemes arbitrarily accurate in both space and time. To develop a high order numerical scheme for the compressible Navier-Stokes equations, we will perform the non-linear reconstructions not only in the conservative unknowns but also in the density. Moreover, we construct, analyse and assess the LADER scheme for the scalar advection-diffusion-reaction equation with variable coefficients to confirm the necessity of reconstructing the advection coefficient. Concerning the projection stage, the pressure correction is computed by continuous piecewise linear finite elements associated with the initial tetrahedral mesh. The derivative of the density with respect to time is needed to define the source term involved in the projection stage. Therefore, a pre-projection stage, in which the state equation is used to compute the density from the provided mixture temperature and composition, has been introduced. The use of the staggered meshes, together with a simple specific way of passing the information from the transport-diffusion stage to the projection one and vice versa, leads to a stable scheme. The former is done by redefining the conservative variable (i.e. the momentum density) as constant per tetrahedron. Conversely, the finite element pressure correction is redefined to constant values on the faces of the finite volumes and then used in the transport-diffusion stage. The outline of the paper is as follows. In Section 2 we recall the system of equations modelling low Mach number flows.
Section 3 is devoted to the discretization of the equations and the description of the algorithm stages. In Section 4, the transport-diffusion stage is analysed, highlighting the main differences with the incompressible case. A modification of the flux function is proposed to avoid the spurious oscillations due to the space dependency of the density. Moreover, the LADER methodology is applied, providing a second order accurate scheme. The computation of the source term for the projection stage is detailed in Section 5. The projection and post-projection stages are depicted in Section 6. Finally, the treatment of the boundary conditions is described in Section 7 and some of the numerical results obtained are shown in Section 8. The appendix includes the resolution of the scalar advection-diffusion-reaction equation with a time and space dependent advection coefficient. 2 Governing equations In this section, the system of equations to be solved is introduced. The model for compressible low Mach number flows is recalled following [Ber05]. The underlying assumption is that the Mach number, $M$, is sufficiently small so that the pressure $p$ (N/m${}^{2}$) can be decomposed into a constant-in-space function $\overline{\pi}$ and a small perturbation $\pi$, $$p\left(x,y,z,t\right)=\overline{\pi}\left(t\right)+\pi\left(x,y,z,t\right),\quad\frac{\pi}{\overline{\pi}}=O\left(M^{2}\right),$$ (1) where $\overline{\pi}\left(t\right)$ is given data. The perturbation will be neglected in the state equation but it has to be retained in the momentum equation.
Assuming that the mass fractions of the species and the temperature of the mixture are given, the system of equations to be solved becomes $$\displaystyle\frac{\partial\rho}{\partial t}+\mathrm{div}\mathbf{w}_{\mathbf{u}}=0,$$ (2) $$\displaystyle\frac{\partial\mathbf{w}_{\mathbf{u}}}{\partial t}+\mathrm{div}\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{w}_{\mathbf{u}},\rho\right)+\mathrm{grad}\pi-\mathrm{div}\tau=0,$$ (3) $$\displaystyle\tau=\mu\left(\mathrm{grad}\mathbf{u}+\mathrm{grad}\mathbf{u}^{T}\right)-\frac{2}{3}\mu\mathrm{div}\mathbf{u}I,$$ (4) $$\displaystyle\rho=\frac{\overline{\pi}}{R\theta},$$ (5) where standard notations are used: • $\rho$ is the density (kg/m${}^{3}$), • $\mathbf{u}=(u_{1},u_{2},u_{3})^{t}$ is the velocity (m/s), • $\mathbf{w}_{\mathbf{u}}:=\rho\mathbf{u}$ is the vector of conservative variables related to velocity (kg/m${}^{2}\>$s), • $\mathbf{{\cal F}}^{\mathbf{w_{u}}}$ is the flux tensor: $$\mathbf{\cal F}_{i}^{\mathbf{w}_{\mathbf{u}}}(\mathbf{w}_{\mathbf{u}},\rho)=\frac{1}{\rho}\mathbf{w}_{\mathbf{u},\,i}\mathbf{w}_{\mathbf{u}}=u_{i}\mathbf{w}_{\mathbf{u}},\quad i=1,2,3,$$ • $\tau$ is the viscous part of the Cauchy stress tensor (Pa), • $\mu$ is the dynamic viscosity (kg/ms), • $\theta$ is the temperature (K), • $R$ is the gas constant of the mixture (J/kgK), $$R=\mathcal{R}\sum_{l=1}^{N_{e}}\frac{\mathrm{y}_{l}}{\mathcal{M}_{l}}$$ with $\mathcal{R}$ the universal gas constant (8.314 J/molK), $\mathcal{M}_{l}$ (kg/mol) the molar mass of the $l$-th species, $\mathrm{y}_{l}$ its (dimensionless) mass fraction and $N_{e}$ the number of species of the mixture. • $\mathbf{f}_{\mathbf{u}}$ is a generic source term used, for instance, for manufactured test problems (N/m${}^{3}$).
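As a concrete illustration of the state equation (5), with $R$ assembled from the mixture composition, consider the following sketch; the two-species composition and molar masses are illustrative values only, not data from the paper.

```python
R_UNIVERSAL = 8.314  # universal gas constant, J/(mol K)

def mixture_density(pi_bar, theta, mass_fractions, molar_masses_kg):
    """rho = pi_bar / (R * theta), with R = R_univ * sum_l y_l / M_l
    (mass fractions y_l dimensionless, molar masses M_l in kg/mol)."""
    R = R_UNIVERSAL * sum(y / M for y, M in zip(mass_fractions, molar_masses_kg))
    return pi_bar / (R * theta)

# illustrative N2/O2 mixture at pi_bar = 101325 Pa and theta = 300 K
rho = mixture_density(101325.0, 300.0, [0.767, 0.233], [0.028, 0.032])
```

For this composition $R\approx 288$ J/(kg K), giving a density close to that of air at 300 K.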
In case the data provided is not the temperature but the specific enthalpy, $h$ (J/kg), we can recover the temperature by taking into account the standard enthalpy of formation, $h_{0}$, and the specific heat at constant pressure, $c_{\pi}$ (J/kgK): $$h(\theta)=h_{0}+\int_{\theta_{0}}^{\theta}c_{\pi}(r)dr,$$ (6) where $\theta_{0}$ denotes the standard temperature, usually 273.15 K. 3 Numerical discretization The numerical discretization of the complete system is performed by extending the projection method first put forward in [BFSVC14]. The proposed methodology decouples the computation of the linear momentum density (from now on, the conservative velocity) and the pressure perturbation. At each time step, equation (3) is solved with a finite volume method (FVM), and so an approximation of $\mathbf{w}_{\mathbf{u}}$ is obtained. Next, the projection step is applied to system (2)-(3). The pressure correction is provided by a piecewise linear finite element method (FEM). In the post-projection step, an approximation of $\mathbf{w}_{\mathbf{u}}$ satisfying the divergence condition, (2), is obtained.
We start by considering a two-stage in time discretization scheme: in order to get the solution at time $t^{n+1}$, we use the previously obtained approximations $\mathbf{W}_{\mathbf{u}}^{n}$ of the conservative velocities $\mathbf{w}_{\mathbf{u}}(x,y,z,t^{n})$, $\mathbf{U}^{n}$ of the velocity $\mathbf{u}(x,y,z,t^{n})$, $\rho^{n}$ of the density $\rho\left(x,y,z,t^{n}\right)$ and $\pi^{n}$ of the pressure perturbation $\pi(x,y,z,t^{n})$, and compute $\mathbf{W}_{\mathbf{u}}^{n+1}$, $\rho^{n+1}$ and $\pi^{n+1}$ from the following system of equations: $$\displaystyle\frac{1}{\Delta t}\left(\widetilde{\mathbf{W}}_{\mathbf{u}}^{n+1}-\mathbf{W}_{\mathbf{u}}^{n}\right)+\mathrm{div}\,\mathcal{F}^{\mathbf{W}_{\mathbf{u}}}\left(\mathbf{W}_{\mathbf{u}}^{n},\rho^{n}\right)+\mathrm{grad}\,\pi^{n}-\mathrm{div}\,\tau^{n}=0,$$ (7) $$\displaystyle\rho^{n+1}=\frac{\overline{\pi}}{\mathcal{R}\theta^{n+1}\displaystyle{\sum_{i=1}^{N_{e}}\frac{Y^{n+1}_{i}}{\mathcal{M}_{i}}}},$$ (8) $$\displaystyle\frac{1}{\Delta t}\left(\mathbf{W}_{\mathbf{u}}^{n+1}-\widetilde{\mathbf{W}}_{\mathbf{u}}^{n+1}\right)+\mathrm{grad}\left(\pi^{n+1}-\pi^{n}\right)=0,$$ (9) $$\displaystyle\mathrm{div}\,\mathbf{W}_{\mathbf{u}}^{n+1}=Q^{n+1},$$ (10) where $\mathbf{Y}^{n+1}$ and $\theta^{n+1}$ are the evaluations of the given functions $\mathbf{y}\left(x,y,z,t^{n+1}\right)$ and $\theta\left(x,y,z,t^{n+1}\right)$, respectively, and $Q^{n+1}$ is an approximation of $q\left(x,y,z,t^{n+1}\right)\!=\!-\partial_{t}\rho\left(x,y,z,t^{n+1}\right)$. The field $\widetilde{\mathbf{W}}_{\mathbf{u}}^{n+1}$ computed at the first stage does not necessarily satisfy the divergence constraint at $t^{n+1}$. Let us notice that $q\left(x,y,z,t^{n+1}\right)$ could also be provided, so that $Q^{n+1}$ would be its evaluation. The procedure to determine the solution of the above system is similar to the one already presented in [BFSVC14] and [BFTVC18] for the incompressible case.
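Taking the divergence of (9) and using (10) shows that the pressure correction solves a Poisson problem, $\Delta\left(\pi^{n+1}-\pi^{n}\right)=\frac{1}{\Delta t}\left(\mathrm{div}\,\widetilde{\mathbf{W}}_{\mathbf{u}}^{n+1}-Q^{n+1}\right)$. The 2D periodic spectral sketch below is a toy illustration of this correction step (with $\Delta t=1$; it is not the FE/FV discretization used in the paper) and verifies that the corrected field satisfies the prescribed divergence:

```python
import numpy as np

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
# provisional momentum field (arbitrary smooth test data) and a
# prescribed zero-mean divergence Q, standing in for -d(rho)/dt
wx = np.cos(X) * np.sin(Y) + 0.3 * np.sin(2 * Y)
wy = np.sin(X) * np.cos(Y)
Q = 0.1 * np.sin(X + Y)

k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                               # guard the zero mode

def div(fx, fy):
    """Spectral divergence of a periodic 2D vector field."""
    return np.real(np.fft.ifft2(1j * KX * np.fft.fft2(fx)
                                + 1j * KY * np.fft.fft2(fy)))

rhs_hat = np.fft.fft2(div(wx, wy) - Q)
phi_hat = -rhs_hat / K2                      # solve  Laplacian(phi) = div(w) - Q
phi_hat[0, 0] = 0.0
wx_new = wx - np.real(np.fft.ifft2(1j * KX * phi_hat))
wy_new = wy - np.real(np.fft.ifft2(1j * KY * phi_hat))
err = np.max(np.abs(div(wx_new, wy_new) - Q))
```

Here $\varphi$ plays the role of $\Delta t\left(\pi^{n+1}-\pi^{n}\right)$; after the correction, the divergence of the momentum field equals $Q$ up to round-off.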
Firstly, we solve equation (7) to obtain an intermediate approximation of the conservative variables, $\widetilde{\mathbf{W}}_{\mathbf{u}}^{n+1}$. Let us notice that, in order to get $Q^{n+1}$, needed to accomplish the projection stage, we have to compute the density. Herein, we set a new stage, the pre-projection stage, where equation (8) is used to obtain $\rho^{n+1}$. Once we get the approximation of the density, we calculate its time derivative to approximate $Q^{n+1}$ and apply a finite element method to estimate the pressure correction. Summarizing, the global algorithm involves four stages: • Transport-diffusion stage: equation (7) is solved through a FVM. • Pre-projection stage: the density is computed from the mass fractions of the species and the temperature using (8). Next, $Q^{n+1}$ is obtained. If we are given the enthalpy, the temperature is recovered by solving (6). • Projection stage: a FEM is applied to (9)-(10) in order to determine the pressure correction, $\delta^{n+1}:=\pi^{n+1}-\pi^{n}$. • Post-projection stage: the conservative velocities are updated by using the pressure correction and relation (9). The following sections are devoted to the description of the previous stages, highlighting the main differences with the incompressible method. 3.1 A dual finite volume mesh For the space discretization we consider a 3D unstructured tetrahedral finite element mesh $\{T_{k},\,k=1,\dots,nel\}$. From this mesh we build a dual finite volume mesh as introduced in [BFSVC14], [BFTVC18] and [BDDVC98]. The nodes, to be denoted by $\{N_{i},\,i=1,\dots,nvol\}$, are the barycentres of the faces of the initial tetrahedra. In Figure 1 node $N_{i}$ is the barycentre of the face defined by vertices $V_{1}$, $V_{2}$ and $V_{3}$. This is why we call these finite volumes of face-type.
The notation employed is as follows:
• Each interior node $N_{i}$ has as neighbouring nodes the set ${\cal K}_{i}$ consisting of the barycentres of the faces of the two tetrahedra of the initial mesh to which it belongs.
• Each finite volume is denoted by $C_{i}$. We denote by $\Gamma_{i}$ its boundary and by $\boldsymbol{\widetilde{\eta}}_{i}$ its outward unit normal.
• The face $\Gamma_{ij}$ is the interface between cells $C_{i}$ and $C_{j}$. $N_{ij}$ is the barycentre of the face.
• The boundary of $C_{i}$ decomposes as $\Gamma_{i}=\displaystyle\bigcup_{N_{j}\in{\cal K}_{i}}\Gamma_{ij}$.
• $\left|C_{i}\right|$ is the volume of $C_{i}$.
• Finally, $\widetilde{\boldsymbol{\eta}}_{ij}$ represents the outward unit normal vector to $\Gamma_{ij}$. We define $\boldsymbol{\eta}_{ij}:=\boldsymbol{\widetilde{\eta}}_{ij}\,||\boldsymbol{\eta}_{ij}||$, where $||\boldsymbol{\eta}_{ij}||:=\mbox{area}(\Gamma_{ij})$.
4 Transport-diffusion stage
Within the transport-diffusion stage a finite volume method is applied in order to provide a first approximation of the conservative variables related to the velocity. Their discrete approximations are taken to be constant per finite volume, as they represent integral averages. Integrating equation (7) on $C_{i}$ and applying Gauss’ theorem we get
$$\displaystyle\frac{\left|C_{i}\right|}{\Delta t}\left(\widetilde{\mathbf{W}}_{\bf{u},\,i}^{n+1}-\mathbf{W}_{\bf{u},\,i}^{n}\right)+\int_{\Gamma_{i}}\mathcal{F}^{\mathbf{W}_{\bf{u}}}\left(\mathbf{W}^{n}_{\bf{u}},\rho^{n}\right)\boldsymbol{\widetilde{\eta}}_{i}\,dS+\int_{C_{i}}\operatorname{\mathrm{grad}}\pi^{n}\,dV-\int_{\Gamma_{i}}\tau^{n}\,\boldsymbol{\widetilde{\eta}}_{i}\,dS=0.$$ (11)
The above integrals can be computed analogously to the ones related to the incompressible model (see [BFSVC14] and [BFTVC18]), although the time and space dependency of the density and the compressibility condition produce several changes in the advection and viscous terms.
More importantly, when seeking a second order scheme in space and time using the LADER methodology, special attention must be paid to the density approximation. The same concerns appear when solving a scalar advection equation with a time and space dependent advection coefficient. Therefore, this simplified model has been used to develop the numerical schemes presented in this paper (see Appendix A for further details). Subsequently, they have been extended to solve the compressible Navier-Stokes equations.
4.1 Numerical flux
We start by approximating the flux term in (11). To this end we define the global normal flux on ${\Gamma}_{i}$ as ${\mathcal{Z}}(\mathbf{W}^{n},\rho^{n},{\boldsymbol{\tilde{\eta}}}_{i}):={\mathbf{\mathcal{F}}}^{\mathbf{w}_{\mathbf{u}}}(\mathbf{W}^{n},\rho^{n}){\boldsymbol{\tilde{\eta}}}_{i}$. Next, we split $\Gamma_{i}$ into the cell interfaces $\Gamma_{ij}$, namely
$$\int_{\Gamma_{i}}\mathbf{\mathcal{F}}^{\mathbf{w}_{\mathbf{u}}}(\mathbf{W}^{n},\rho^{n})\boldsymbol{\widetilde{\eta}}_{i}\,\mathrm{dS}=\displaystyle\sum_{N_{j}\in{\cal K}_{i}}\displaystyle\int_{\Gamma_{ij}}{\mathcal{Z}}(\mathbf{W}^{n},\rho^{n},{\boldsymbol{\tilde{\eta}}}_{ij})\,\mathrm{dS}.$$ (12)
Then, in order to get a stable discretization, the integral on $\Gamma_{ij}$ is approximated by an upwind scheme using a numerical flux function $\boldsymbol{\phi}_{\mathbf{u}}$.
We use the Rusanov scheme (see [Rus62]),
$$\displaystyle\boldsymbol{\phi}_{\mathbf{u}}\left(\mathbf{W}_{\mathbf{u},\,i}^{n},\mathbf{W}_{\mathbf{u},\,j}^{n},\rho_{i}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij}\right)=$$ $$\displaystyle\frac{1}{2}({\cal Z}(\mathbf{W}_{\mathbf{u},\,i}^{n},\rho_{i}^{n},\boldsymbol{\eta}_{ij})+{\cal Z}(\mathbf{W}_{\mathbf{u},\,j}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij}))$$ $$\displaystyle-\frac{1}{2}\alpha^{\mathbf{W}_{\bf{u}},\,n}_{RS,\,ij}\left(\mathbf{W}_{\mathbf{u},\,j}^{n}-\mathbf{W}_{\mathbf{u},\,i}^{n}\right)$$ (13)
with
$$\alpha^{\mathbf{W}_{\bf{u}},\,n}_{RS,\,ij}=\alpha^{\mathbf{W}_{\bf{u}}}_{RS}(\mathbf{W}_{\mathbf{u},\,i}^{n},\mathbf{W}_{\mathbf{u},\,j}^{n},\rho_{i}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij}):=\max\left\{2\left|\bf{U}_{i}^{n}\cdot\boldsymbol{\eta}_{ij}\right|,2\left|\bf{U}_{j}^{n}\cdot\boldsymbol{\eta}_{ij}\right|\right\}$$ (14)
the so-called Rusanov coefficient. Therefore, equation (11) can be rewritten as
$$\displaystyle\frac{1}{\Delta t}\left(\widetilde{\mathbf{W}}_{\bf{u},\,i}^{n+1}-\mathbf{W}_{\bf{u},\,i}^{n}\right)+\frac{1}{\left|C_{i}\right|}\sum_{\mathcal{N}_{j}\in\mathcal{K}_{i}}\boldsymbol{\phi}_{\mathbf{u}}\left(\mathbf{W}_{\mathbf{u},\,i}^{n},\mathbf{W}_{\mathbf{u},\,j}^{n},\rho_{i}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle+\frac{1}{\left|C_{i}\right|}\int_{C_{i}}\operatorname{\mathrm{grad}}\pi^{n}\mathrm{dV}-\frac{1}{\left|C_{i}\right|}\sum_{\mathcal{N}_{j}\in\mathcal{K}_{i}}\boldsymbol{\varphi}_{\mathbf{u}}\left(\mathbf{U}_{i}^{n},\mathbf{U}_{j}^{n},\boldsymbol{\eta}_{ij}\right)=0$$ (15)
where $\boldsymbol{\varphi}_{\mathbf{u}}$ denotes a diffusion flux function to be detailed in Section 4.2. For constant density this scheme corresponds to the one proposed in [BFTVC18] to solve incompressible flows. However, it may produce spurious oscillations in the solutions for compressible low Mach number flows.
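As a concrete check of (13)-(14), the following Python sketch assumes the momentum flux $\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}=(\mathbf{w}_{\mathbf{u}}\otimes\mathbf{w}_{\mathbf{u}})/\rho$ (consistent with the derivative appearing later in (19)), so that $\mathcal{Z}(\mathbf{W},\rho,\boldsymbol{\eta})=(\mathbf{W}\cdot\boldsymbol{\eta}/\rho)\,\mathbf{W}$ and $\mathbf{U}=\mathbf{W}/\rho$:

```python
import numpy as np

def normal_flux(W, rho, eta):
    # Z(W, rho, eta) = (W . eta / rho) W, assuming F^{w_u} = (w ⊗ w)/rho
    return (W @ eta) / rho * W

def rusanov_flux(Wi, Wj, rho_i, rho_j, eta):
    # Centred flux plus numerical viscosity, eqs. (13)-(14); U = W/rho
    alpha = max(2.0 * abs(Wi @ eta) / rho_i, 2.0 * abs(Wj @ eta) / rho_j)
    centred = 0.5 * (normal_flux(Wi, rho_i, eta) + normal_flux(Wj, rho_j, eta))
    return centred - 0.5 * alpha * (Wj - Wi)
```

For identical left and right states the viscosity term vanishes and the numerical flux reduces to the exact normal flux, as any consistent scheme requires.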
We now propose a modification to avoid this undesirable behaviour.
4.1.1 Artificial viscosity related to density
As is well known, the numerical flux of the Rusanov scheme splits into a centred approximation of the whole flux and a numerical viscosity needed for the stability of the scheme. This upwind term is built with the Jacobian matrix of the flux, namely,
$$\frac{\partial\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{w}_{\mathbf{u}},\rho\right)}{\partial\mathbf{w}_{\mathbf{u}}}\mathrm{div}\,\mathbf{w}_{\mathbf{u}}.$$ (16)
For incompressible flows, this coincides with the spatial derivative of the flux. However, for compressible flows, the flux also depends on the spatial variable through the density:
$$\mathrm{div}\,\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{w}_{\mathbf{u}},\rho\right)=\frac{\partial\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{w}_{\mathbf{u}},\rho\right)}{\partial\mathbf{w}_{\mathbf{u}}}\mathrm{div}\,\mathbf{w}_{\mathbf{u}}+\frac{\partial\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{w}_{\mathbf{u}},\rho\right)}{\partial\rho}\operatorname{\mathrm{grad}}\rho.$$ (17)
Therefore, when applying (14) we are taking into account only the numerical viscosity related to the first term on the right-hand side of (17), whereas the second term is considered only in the centred part of the flux. This lack of upwinding may produce spurious oscillations in the solution of the compressible model. Consequently, the poor behaviour of (15) can be corrected by adding a new artificial viscosity term to get an upwind discretization of
$$\frac{\partial\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{w}_{\mathbf{u}},\rho\right)}{\partial\rho}\operatorname{\mathrm{grad}}\rho.$$ (18)
To this end, we present two different approaches extending the treatment first put forward in [BLVC17] for the one-dimensional case. The first one consists in subtracting the integral over the control volume of (18) from both sides of (15).
Next, a centred discretization is used to compute the integral introduced in the left-hand side, while the one inserted in the right-hand side is approximated in an upwind form. The second approach incorporates the upwind term into the Rusanov scheme by considering an approximation of the derivative of the normal flux with respect to the density. Remark 1. For an incompressible flow the density gradient is zero and so is (18). Approach 1 Denoting $$\mathbf{V}\left(\mathbf{W}_{\mathbf{u}},\rho\right):=\frac{\partial\mathcal{F}% ^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{W}_{\mathbf{u}},\rho\right)}{\partial% \rho}\operatorname{\mathrm{grad}}\rho=-\frac{1}{\rho^{2}}\mathbf{W}_{\mathbf{u% }}\otimes\mathbf{W}_{\mathbf{u}}\operatorname{\mathrm{grad}}\rho,$$ (19) $$\mathbf{G}\left(\mathbf{W}_{\mathbf{u}},\rho\right):=-\frac{\partial\mathcal{F% }^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{W}_{\mathbf{u}},\rho\right)}{\partial% \rho}\operatorname{\mathrm{grad}}\rho=\frac{1}{\rho^{2}}\mathbf{W}_{\mathbf{u}% }\otimes\mathbf{W}_{\mathbf{u}}\operatorname{\mathrm{grad}}\rho$$ (20) and incorporating both terms in the discrete equation (15) we get $$\displaystyle\frac{1}{\Delta t}\left(\widetilde{\mathbf{W}}_{\bf{u},\,i}^{n+1}% -\mathbf{W}_{\bf{u},\,i}^{n}\right)+\frac{1}{\left|C_{i}\right|}\sum_{\mathcal% {N}_{j}\in\mathcal{K}_{i}}\boldsymbol{\phi}_{\mathbf{u}}\left(\mathbf{W}_{% \mathbf{u},\,i}^{n},\mathbf{W}_{\mathbf{u},\,j}^{n},\rho_{i}^{n},\rho_{j}^{n},% \boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle+\frac{1}{\left|C_{i}\right|}\int_{C_{i}}\operatorname{\mathrm{% grad}}\pi^{n}\mathrm{dV}-\frac{1}{\left|C_{i}\right|}\sum_{\mathcal{N}_{j}\in% \mathcal{K}_{i}}\boldsymbol{\varphi}_{\mathbf{u}}\left(\mathbf{U}_{i}^{n},% \mathbf{U}_{j}^{n},\boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle-\frac{1}{\left|C_{i}\right|}\int_{C_{i}}\mathbf{V}\left(\mathbf{% W}_{\mathbf{u}}^{n},\rho^{n}\right)\mathrm{dV}=\frac{1}{\left|C_{i}\right|}% 
\int_{C_{i}}\mathbf{G}\left(\mathbf{W}_{\mathbf{u}}^{n},\rho^{n}\right)\mathrm{dV}.$$ (21)
To compute these new terms, we propose to divide each interior finite volume into six sub-tetrahedra denoted by $C_{ij}$. Each of them has as its base a face, $\Gamma_{ij}$, of the original finite volume, $C_{i}$, and as opposite vertex the barycenter of the finite volume, that is, the node $N_{i}$. Similarly, boundary volumes are split into three sub-tetrahedra. Let us denote by $d_{ij}$ the distance between $\Gamma_{ij}$ and $N_{i}$. We notice that $d_{ij}=d_{ji}$ due to the construction of the staggered mesh and that $\left|C_{ij}\right|=\frac{1}{3}d_{ij}\|\boldsymbol{\eta}_{ij}\|$. In Figure 2 the 2D sub-triangles corresponding to the 3D sub-tetrahedra are depicted. We observe that in 2D each interior cell is divided into four sub-triangles instead of the six sub-tetrahedra needed in 3D. Using this new structure, we can split the integral on the finite volume as the sum of the integrals on the sub-tetrahedra, $C_{ij}$:
$$\frac{1}{\left|C_{i}\right|}\int_{C_{i}}\mathbf{V}\left(\mathbf{W}_{\mathbf{u}}^{n},\rho^{n}\right)\mathrm{dV}=\frac{1}{\left|C_{i}\right|}\sum_{N_{j}\in\mathcal{K}_{i}}\int_{C_{ij}}\mathbf{V}\left(\mathbf{W}_{\mathbf{u}}^{n},\rho^{n}\right)\mathrm{dV}.$$ (22)
Next, we denote by $\mathbf{V}^{n}_{i}$ an approximation of (22),
$$\mathbf{V}^{n}_{i}:=\frac{1}{\left|C_{i}\right|}\sum_{N_{j}\in\mathcal{K}_{i}}\mathbf{V}_{i,\,ij}^{n},$$ (23)
where
$$\mathbf{V}_{i,\,ij}^{n}\approx\int_{C_{ij}}\mathbf{V}\left(\mathbf{W}_{\mathbf{u}}^{n},\rho^{n}\right)\mathrm{dV}.$$ (24)
To approximate these new integrals we consider the conservative variables to be constant by sub-tetrahedra and set their value as the average velocity at the two nodes related to the face.
On the other hand, the density gradient is approximated by its orthogonal part to be consistent with the approximation of the divergence of velocities on the viscous term of Rusanov’s flux (see [BVC94] and [VC94]). Therefore, we obtain $$\displaystyle\mathbf{V}_{i,\,ij}^{n}$$ $$\displaystyle=-\left|C_{ij}\right|\frac{1}{4}\left[\left(\mathbf{W}^{n}_{% \mathbf{u},\,{i}}+\mathbf{W}^{n}_{\mathbf{u},\,{j}}\right)\otimes\left(\mathbf% {W}^{n}_{\mathbf{u},\,{i}}+\mathbf{W}^{n}_{\mathbf{u},\,{j}}\right)\right]% \frac{1}{\frac{1}{4}\left(\rho_{i}^{n}+\rho_{j}^{n}\right)^{2}}\frac{\rho_{j}^% {n}-\rho_{i}^{n}}{d_{ij}}\boldsymbol{\widetilde{\eta}}_{ij}$$ $$\displaystyle=-\frac{1}{3}\left(\mathbf{W}^{n}_{\mathbf{u},\,{i}}+\mathbf{W}^{% n}_{\mathbf{u},\,{j}}\right)\left(\mathbf{W}^{n}_{\mathbf{u},\,{i}}+\mathbf{W}% ^{n}_{\mathbf{u},\,{j}}\right)\cdot\boldsymbol{\eta}_{ij}\frac{\rho^{n}_{j}-% \rho^{n}_{i}}{\left(\rho_{i}^{n}+\rho_{j}^{n}\right)^{2}}.$$ (25) Substituting (25) in (23) we get $$\displaystyle\mathbf{V}_{i}^{n}=-\frac{1}{3\left|C_{i}\right|}\sum_{N_{j}\in% \mathcal{K}_{i}}\left[\left(\mathbf{W}_{\mathbf{u},\,{i}}^{n}+\mathbf{W}_{% \mathbf{u},\,{j}}^{n}\right)\left(\mathbf{W}_{\mathbf{u},\,{i}}^{n}+\mathbf{W}% _{\mathbf{u},\,{j}}^{n}\right)\cdot\boldsymbol{\eta}_{ij}\frac{\rho_{j}^{n}-% \rho_{i}^{n}}{\left(\rho_{i}^{n}+\rho_{j}^{n}\right)^{2}}\right].$$ (26) Let us denote $\mathbf{G}^{n}_{i}$ the upwind approximation of $\frac{1}{\left|C_{i}\right|}\int_{C_{i}}\mathbf{G}\left(\mathbf{W}_{\mathbf{u}% }^{n},\rho^{n}\right)\mathrm{dV}$. 
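The discrete correction (25) is a one-liner to evaluate; the sketch below computes $\mathbf{V}_{i,\,ij}^{n}$ for given nodal states (NumPy vectors for the momenta and the scaled normal, scalars for the densities):

```python
import numpy as np

def V_ij(Wi, Wj, rho_i, rho_j, eta):
    # Eq. (25): V_{i,ij} = -(1/3) (Wi+Wj) [(Wi+Wj)·eta] (rho_j - rho_i) / (rho_i + rho_j)^2
    Ws = Wi + Wj
    return -(1.0 / 3.0) * Ws * (Ws @ eta) * (rho_j - rho_i) / (rho_i + rho_j) ** 2
```

In agreement with Remark 1, the term vanishes identically when the density is constant across the interface.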
Then, taking into account the procedure to upwind source terms when using Rusanov flux, we obtain $$\displaystyle\mathbf{G}_{i}^{n}=\frac{1}{\left|C_{i}\right|}\sum_{N_{j}\in{% \cal K}_{i}}\boldsymbol{\psi}_{i,\,j}\left(\mathbf{W}^{n}_{\mathbf{u},\,i},% \mathbf{W}^{n}_{\mathbf{u},\,j},\rho_{i}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{% ij}\right)=-\frac{1}{\left|C_{i}\right|}\sum_{N_{j}\in\mathcal{K}_{i}}\left[1-% \textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}\right)\right]% \mathbf{V}_{i,\,ij}^{n}$$ $$\displaystyle=\frac{1}{3\left|C_{i}\right|}\sum_{N_{j}\in\mathcal{K}_{i}}\left% [1-\textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}\right)% \right]\left(\mathbf{W}^{n}_{\mathbf{u},\,{i}}+\mathbf{W}^{n}_{\mathbf{u},\,{j% }}\right)\left(\mathbf{W}^{n}_{\mathbf{u},\,{i}}+\mathbf{W}^{n}_{\mathbf{u},\,% {j}}\right)\cdot\boldsymbol{\eta}_{ij}\frac{\rho_{j}^{n}-\rho_{i}^{n}}{\left(% \rho_{i}^{n}+\rho_{j}^{n}\right)^{2}},$$ (27) where we have denoted by $\textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}\right)$ the sign of the eigenvalue considered in Rusanov constant. Moreover, we have introduced the mappings $$\boldsymbol{\psi}_{i,\,j}=\left[1-\textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^% {\mathbf{w_{u}},\,n}\right)\right]\mathbf{V}_{i,\,ij}^{n}$$ (28) in order to extend the procedure to upwind source terms presented in [VC94] and [BLVC17] to the upwind of $\mathbf{G}$. 
Substituting (23) and (27) in (21) yields
$$\displaystyle\frac{1}{\Delta t}\left(\widetilde{\mathbf{W}}_{\bf{u},\,i}^{n+1}-\mathbf{W}_{\bf{u},\,i}^{n}\right)+\frac{1}{\left|C_{i}\right|}\sum_{\mathcal{N}_{j}\in\mathcal{K}_{i}}\boldsymbol{\phi}_{\mathbf{u}}\left(\mathbf{W}_{\mathbf{u},\,i}^{n},\mathbf{W}_{\mathbf{u},\,j}^{n},\rho_{i}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle+\frac{1}{\left|C_{i}\right|}\int_{C_{i}}\operatorname{\mathrm{grad}}\pi^{n}\mathrm{dV}-\frac{1}{\left|C_{i}\right|}\sum_{\mathcal{N}_{j}\in\mathcal{K}_{i}}\boldsymbol{\varphi}_{\mathbf{u}}\left(\mathbf{U}_{i}^{n},\mathbf{U}_{j}^{n},\boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle-\frac{1}{\left|C_{i}\right|}\sum_{N_{j}\in\mathcal{K}_{i}}\mathbf{V}_{i,\,ij}^{n}=-\frac{1}{\left|C_{i}\right|}\sum_{N_{j}\in\mathcal{K}_{i}}\left[1-\textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}\right)\right]\mathbf{V}_{i,\,ij}^{n}.$$ (29)
Hence, the discretized equation reads
$$\displaystyle\frac{1}{\Delta t}\left(\widetilde{\mathbf{W}}_{\bf{u},\,i}^{n+1}-\mathbf{W}_{\bf{u},\,i}^{n}\right)+\frac{1}{\left|C_{i}\right|}\sum_{\mathcal{N}_{j}\in\mathcal{K}_{i}}\boldsymbol{\phi}_{\mathbf{u}}\left(\mathbf{W}_{\mathbf{u},\,i}^{n},\mathbf{W}_{\mathbf{u},\,j}^{n},\rho_{i}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle+\frac{1}{\left|C_{i}\right|}\int_{C_{i}}\operatorname{\mathrm{grad}}\pi^{n}\mathrm{dV}-\frac{1}{\left|C_{i}\right|}\sum_{\mathcal{N}_{j}\in\mathcal{K}_{i}}\boldsymbol{\varphi}_{\mathbf{u}}\left(\mathbf{U}_{i}^{n},\mathbf{U}_{j}^{n},\boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle-\frac{1}{\left|C_{i}\right|}\sum_{N_{j}\in\mathcal{K}_{i}}\textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}\right)\mathbf{V}_{i,\,ij}^{n}=0.$$ (30)
Approach 2
Operating with the increment of the normal flux, we get
$$\Delta\mathcal{Z}\left(\mathbf{w}_{\mathbf{u}},\rho,\boldsymbol{\eta}\right)=
\mathcal{A}\Delta\mathbf{w}_{\mathbf{u}}+\mathcal{R}\Delta\rho$$ (31)
where $\mathcal{A}$ denotes the Jacobian matrix of the normal flux and
$$\mathcal{R}=\frac{\partial\mathcal{Z}\left(\mathbf{w}_{\mathbf{u}},\rho,\boldsymbol{\eta}\right)}{\partial\rho}.$$ (32)
Analysing expression (31), we notice that the first term is related to the upwind term of the Rusanov scheme, whereas the second term,
$$\frac{\partial\mathcal{F}^{\mathbf{w}_{\mathbf{u}}}\left(\mathbf{w}_{\mathbf{u}},\rho\right)\boldsymbol{\eta}}{\partial\rho}\Delta\rho=-\mathbf{w}_{\mathbf{u}}\left(\mathbf{w}_{\mathbf{u}}\cdot\boldsymbol{\eta}\right)\rho^{-2}\Delta\rho,$$ (33)
is not included. Thus, we propose a modification of the scheme consisting in incorporating the term
$$\displaystyle\left[-\mathbf{W}_{\mathbf{u}}^{n}\left(\mathbf{W}_{\mathbf{u}}^{n}\cdot\boldsymbol{\eta}\right)\left(\rho^{n}\right)^{-2}\Delta\rho^{n}\right]_{|\Gamma_{ij}}$$ $$\displaystyle\approx-\left[\frac{1}{2}\left(\mathbf{W}^{n}_{\bf{u},\,i}+\mathbf{W}^{n}_{\bf{u},\,j}\right)\right]\left[\frac{1}{2}\left(\mathbf{W}_{\bf{u},\,i}^{n}+\mathbf{W}_{\bf{u},\,j}^{n}\right)\cdot\boldsymbol{\eta}_{ij}\right]\left[\frac{1}{2}\left(\rho_{i}^{n}+\rho_{j}^{n}\right)\right]^{-2}\left(\rho_{j}^{n}-\rho_{i}^{n}\right)$$ $$\displaystyle=-\left(\mathbf{W}^{n}_{\bf{u},\,i}+\mathbf{W}^{n}_{\bf{u},\,j}\right)\left[\left(\mathbf{W}_{\bf{u},\,i}^{n}+\mathbf{W}_{\bf{u},\,j}^{n}\right)\cdot\boldsymbol{\eta}_{ij}\right]\left(\rho_{i}^{n}+\rho_{j}^{n}\right)^{-2}\left(\rho_{j}^{n}-\rho_{i}^{n}\right)$$ (34)
to the flux function. Since the approximation must be consistent, we multiply the above expression by the constant $\beta=\frac{1}{3}$. Furthermore, we use the sign of the eigenvalue considered in the Rusanov constant, $\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}$, to account for the sense of the flux.
Therefore, the term reads
$$-\beta\,\textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}\right)\left(\mathbf{W}^{n}_{\bf{u},\,i}+\mathbf{W}^{n}_{\bf{u},\,j}\right)\left(\mathbf{W}_{\bf{u},\,i}^{n}+\mathbf{W}_{\bf{u},\,j}^{n}\right)\cdot\boldsymbol{\eta}_{ij}\left(\rho_{i}^{n}+\rho_{j}^{n}\right)^{-2}\left(\rho_{j}^{n}-\rho_{i}^{n}\right)$$ (35)
and the new flux function, to be replaced in (15), becomes
$$\displaystyle\boldsymbol{\phi}_{\mathbf{u}}\left(\mathbf{W}_{\bf{u},\,i}^{n},\mathbf{W}_{\bf{u},\,j}^{n},\rho_{i}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij}\right)$$ $$\displaystyle=\frac{1}{2}\left[{\cal Z}(\mathbf{W}_{\bf{u},\,i}^{n},\rho_{i}^{n},\boldsymbol{\eta}_{ij})+{\cal Z}(\mathbf{W}_{\bf{u},\,j}^{n},\rho_{j}^{n},\boldsymbol{\eta}_{ij})\right]-\frac{1}{2}\alpha^{\mathbf{w_{u}},\,n}_{RS,\,ij}\left(\mathbf{W}_{\bf{u},\,j}^{n}-\mathbf{W}_{\bf{u},\,i}^{n}\right)$$ $$\displaystyle-\frac{1}{3}\textrm{sign}\left(\breve{\alpha}_{RS,\,ij}^{\mathbf{w_{u}},\,n}\right)\left(\mathbf{W}^{n}_{\bf{u},\,i}+\mathbf{W}^{n}_{\bf{u},\,j}\right)\left(\mathbf{W}_{\bf{u},\,i}^{n}+\mathbf{W}_{\bf{u},\,j}^{n}\right)\cdot\boldsymbol{\eta}_{ij}\left(\rho_{i}^{n}+\rho_{j}^{n}\right)^{-2}\left(\rho_{j}^{n}-\rho_{i}^{n}\right).$$ (36)
Remark 2. Both approaches produce the same numerical scheme.
4.1.2 LADER methodology
Using the flux function (36) we would obtain a scheme that is first order both in space and time. To achieve a second order scheme we extend LADER techniques to solve equation (7). This method was first proposed in [BFTVC18] as a modification of the ADER methodology (see [TMN01] and [Tor09]), which makes its extension to the three-dimensional problem easier. We follow the steps described in [BFTVC18] for solving the incompressible Navier-Stokes equations, taking into account the dependence of the density on time and space. Accordingly, the conservative variables are extrapolated and the mid-point rule is applied.
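Putting the pieces together, the modified flux (36) can be sketched as follows, again under the assumption $\mathcal{Z}(\mathbf{W},\rho,\boldsymbol{\eta})=(\mathbf{W}\cdot\boldsymbol{\eta}/\rho)\,\mathbf{W}$; the sign of the eigenvalue entering the Rusanov constant is passed in explicitly rather than computed:

```python
import numpy as np

def modified_rusanov_flux(Wi, Wj, rho_i, rho_j, eta, sign_alpha):
    # Eq. (36): centred flux + Rusanov viscosity + density upwind term (35)
    # with beta = 1/3; sign_alpha is the sign of the eigenvalue in the
    # Rusanov constant, supplied by the caller.
    Z_i = (Wi @ eta) / rho_i * Wi
    Z_j = (Wj @ eta) / rho_j * Wj
    alpha = max(2.0 * abs(Wi @ eta) / rho_i, 2.0 * abs(Wj @ eta) / rho_j)
    Ws = Wi + Wj
    correction = -(1.0 / 3.0) * sign_alpha * Ws * (Ws @ eta) \
        * (rho_j - rho_i) / (rho_i + rho_j) ** 2
    return 0.5 * (Z_i + Z_j) - 0.5 * alpha * (Wj - Wi) + correction
```

For constant density the correction drops out and the function reduces to the original Rusanov flux of (13), consistent with the incompressible limit.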
As a novelty, we propose to compute particular evolved values of the density in the neighbourhood of the faces, $\Gamma_{ij}$. This additional step is essential to obtain a second order scheme. Moreover, the variables involved in the approximation of the last term added to the flux, (35), must also be evolved. We now detail the new computations to be performed at each step of the extended method:
Step 1. ENO-based reconstruction. Reconstruction of the data in terms of first degree polynomials is considered. For each finite volume we define four polynomials, each one of them in the neighbourhood of one of the boundary faces. Focusing on a face $\Gamma_{ij}$ and on the discretization of a scalar component, $W$, of the conservative velocity, the two related reconstruction polynomials are
$$p^{i}_{ij}(N)=W_{i}+(N-N_{i})\left(\nabla W\right)^{i}_{ij},\quad p^{j}_{ij}(N)=W_{j}+(N-N_{j})\left(\nabla W\right)^{j}_{ij}.$$ (37)
To avoid spurious oscillations in the solution we add a non-linearity to the scheme by applying an ENO (Essentially Non-Oscillatory) interpolation method.
The slopes are adaptively chosen as follows:
$$\displaystyle\left(\nabla W\right)^{i}_{ij}=\left\{\begin{array}[]{lc}\left(\nabla W\right)_{T_{ijL}},&\textrm{if }\left|\left(\nabla W\right)_{T_{ijL}}\cdot\left(N_{ij}-N_{i}\right)\right|\leq\left|\left(\nabla W\right)_{T_{ij}}\cdot\left(N_{ij}-N_{i}\right)\right|,\\ \left(\nabla W\right)_{T_{ij}},&\mathrm{otherwise};\end{array}\right.$$ $$\displaystyle\left(\nabla W\right)^{j}_{ij}=\left\{\begin{array}[]{lc}\left(\nabla W\right)_{T_{ijR}},&\textrm{if }\left|\left(\nabla W\right)_{T_{ijR}}\cdot\left(N_{ij}-N_{j}\right)\right|\leq\left|\left(\nabla W\right)_{T_{ij}}\cdot\left(N_{ij}-N_{j}\right)\right|,\\ \left(\nabla W\right)_{T_{ij}},&\mathrm{otherwise};\end{array}\right.$$
where $\left(\nabla W\right)_{T_{ij}}$ is the gradient of $W$ at the auxiliary tetrahedron that intersects the face (see, in the 2D representation in Figure 3, the triangle with green contour).
Step 2. Computation of the boundary extrapolated values at the barycenter of the faces, $N_{ij}$:
$$\displaystyle W_{i\,N_{ij}}=p^{i}_{ij}(N_{ij})=W_{i}+(N_{ij}-N_{i})\left(\nabla W\right)^{i}_{ij},$$ (38)
$$\displaystyle W_{j\,N_{ij}}=p^{j}_{ij}(N_{ij})=W_{j}+(N_{ij}-N_{j})\left(\nabla W\right)^{j}_{ij}.$$ (39)
Step 3. Computation of the variables involved in the flux term with second order of accuracy using the mid-point rule. A Taylor series expansion in time and the Cauchy-Kovalevskaya procedure are applied to locally approximate the conservative variables at time $\frac{\Delta t}{2}$. This methodology accounts for the contribution of the advection and diffusion terms to the time evolution of the flux term in the momentum conservation equation.
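Steps 1 and 2 amount to a slope selection followed by a linear extrapolation. A minimal sketch for one scalar component, where the function and argument names are ours and `d` stands for the displacement $N_{ij}-N_{i}$:

```python
import numpy as np

def eno_slope(grad_side, grad_T, d):
    # ENO selection of Step 1: keep the slope producing the smaller
    # extrapolation increment along d = N_ij - N_i
    if abs(grad_side @ d) <= abs(grad_T @ d):
        return grad_side
    return grad_T

def extrapolate(W_i, grad, d):
    # Boundary extrapolated value at the face barycentre, eqs. (38)-(39)
    return W_i + d @ grad
```

The selection rule discards the steeper of the two candidate gradients, which is what suppresses the spurious oscillations near sharp variations.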
The resulting evolved variables read $$\displaystyle\overline{\mathbf{W}}_{i\,N_{ij}}=\mathbf{W}_{\mathbf{u},\,i\,N_{% ij}}-\frac{\Delta t}{2\mathcal{L}_{ij}}\left[\mathcal{Z}(\mathbf{W}_{\mathbf{u% },\,i\,N_{ij}},\boldsymbol{\eta}_{ij})+\mathcal{Z}(\mathbf{W}_{\mathbf{u},\,j% \,N_{ij}},\boldsymbol{\eta}_{ij})\right]$$ $$\displaystyle+\frac{\mu\Delta t}{2\mathcal{L}_{ij}^{2}}\left[\left(\mathrm{% grad}\,\mathbf{W}_{\mathbf{u}}+\mathrm{grad}\,\mathbf{W}_{\mathbf{u}}^{T}% \right)^{i}_{ij}\boldsymbol{\eta}_{ij}+\left(\mathrm{grad}\,\mathbf{W}_{% \mathbf{u}}+\mathrm{grad}\,\mathbf{W}_{\mathbf{u}}^{T}\right)^{j}_{ij}% \boldsymbol{\eta}_{ij}\phantom{\frac{2}{3}}\right.$$ $$\displaystyle\left.-\frac{2}{3}\left(\mathrm{div}\,\mathbf{W}_{\mathbf{u},\,i% \,N_{ij}}\boldsymbol{\eta}_{ij}+\mathrm{div}\,\mathbf{W}_{\mathbf{u},\,j\,N_{% ij}}\boldsymbol{\eta}_{ij}\right)\right],$$ (40) $$\displaystyle\overline{\mathbf{W}}_{j\,N_{ij}}=\mathbf{W}_{\mathbf{u},\,j\,N_{% ij}}-\frac{\Delta t}{2\mathcal{L}_{ij}}\left[\mathcal{Z}(\mathbf{W}_{\mathbf{u% },\,i\,N_{ij}},\boldsymbol{\eta}_{ij})+\mathcal{Z}(\mathbf{W}_{\mathbf{u},\,j% \,N_{ij}},\boldsymbol{\eta}_{ij})\right]$$ $$\displaystyle+\frac{\mu\Delta t}{2\mathcal{L}_{ij}^{2}}\left[\left(\mathrm{% grad}\,\mathbf{W}_{\mathbf{u}}+\mathrm{grad}\,\mathbf{W}_{\mathbf{u}}^{T}% \right)^{i}_{ij}\boldsymbol{\eta}_{ij}+\left(\mathrm{grad}\,\mathbf{W}_{% \mathbf{u}}+\mathrm{grad}\,\mathbf{W}_{\mathbf{u}}^{T}\right)^{j}_{ij}% \boldsymbol{\eta}_{ij}\phantom{\frac{2}{3}}\right.$$ $$\displaystyle\left.-\frac{2}{3}\left(\mathrm{div}\,\mathbf{W}_{\mathbf{u},\,i% \,N_{ij}}\boldsymbol{\eta}_{ij}+\mathrm{div}\,\mathbf{W}_{\mathbf{u},\,j\,N_{% ij}}\boldsymbol{\eta}_{ij}\right)\right].$$ (41) We have denoted $\mathcal{L}_{ij}=\min\left\{\frac{\left|C_{i}\right|}{\mathrm{S}(C_{i})},\frac% {\left|C_{j}\right|}{\mathrm{S}(C_{j})}\right\}$ with $\mathrm{S}(C_{i})$ the area of the surface of cell $C_{i}$. 
Two different options can be considered concerning the evolved variables related to conservative velocities. The first one corresponds to the previous definition of the evolved variables, (40)-(41). Meanwhile, the second one neglects the contribution of the diffusion term: $$\displaystyle\overline{\mathbf{W}}_{i\,N_{ij}}=\mathbf{W}_{\mathbf{u},\,i\,N_{% ij}}-\frac{\Delta t}{2\mathcal{L}_{ij}}\left[\mathcal{Z}(\mathbf{W}_{\mathbf{u% },\,i\,N_{ij}},\boldsymbol{\eta}_{ij})+\mathcal{Z}(\mathbf{W}_{\mathbf{u},\,j% \,N_{ij}},\boldsymbol{\eta}_{ij})\right],$$ (42) $$\displaystyle\overline{\mathbf{W}}_{j\,N_{ij}}=\mathbf{W}_{\mathbf{u},\,j\,N_{% ij}}-\frac{\Delta t}{2\mathcal{L}_{ij}}\left[\mathcal{Z}(\mathbf{W}_{\mathbf{u% },\,i\,N_{ij}},\boldsymbol{\eta}_{ij})+\mathcal{Z}(\mathbf{W}_{\mathbf{u},\,j% \,N_{ij}},\boldsymbol{\eta}_{ij})\right].$$ (43) Regarding the densities, we propose to consider the mass conservation equation in order to apply Cauchy-Kovalevskaya procedure. Therefore, we define $$\displaystyle\overline{\rho}_{i\,N_{ij}}:=\rho_{i\,N_{ij}}-\frac{\Delta t\|% \boldsymbol{\eta}_{ij}\|}{4\mathcal{L}_{ij}}\left(\operatorname{\mathrm{div}}% \mathbf{W}_{\mathbf{u},\,i\,N_{ij}}+\operatorname{\mathrm{div}}\mathbf{W}_{% \mathbf{u},\,j\,N_{ij}}\right),$$ (44) $$\displaystyle\overline{\rho}_{j\,N_{ij}}:=\rho_{j\,N_{ij}}-\frac{\Delta t\|% \boldsymbol{\eta}_{ij}\|}{4\mathcal{L}_{ij}}\left(\operatorname{\mathrm{div}}% \mathbf{W}_{\mathbf{u},\,i\,N_{ij}}+\operatorname{\mathrm{div}}\mathbf{W}_{% \mathbf{u},\,j\,N_{ij}}\right).$$ (45) Step 4. 
Computation of the numerical flux using (36): $$\displaystyle\boldsymbol{\phi}_{\mathbf{u}}\left(\overline{\mathbf{W}}_{% \mathbf{u},\,i\,N_{ij}}^{n},\overline{\mathbf{W}}_{\mathbf{u},\,j\,N_{ij}}^{n}% ,\overline{\rho}_{i\,N_{ij}}^{n},\overline{\rho}_{j\,N_{ij}}^{n},\boldsymbol{% \eta}_{ij}\right)=$$ $$\displaystyle+\displaystyle\frac{1}{2}\left[{\cal Z}(\overline{\mathbf{W}}_{% \mathbf{u},\,i\,N_{ij}}^{n},\overline{\rho}_{i\,N_{ij}}^{n},\boldsymbol{\eta}_% {ij})+{\cal Z}(\overline{\mathbf{W}}_{\mathbf{u},\,j\,N_{ij}}^{n},\overline{% \rho}_{j\,N_{ij}}^{n},\boldsymbol{\eta}_{ij})\right]$$ $$\displaystyle-\frac{1}{2}\overline{\alpha}^{\mathbf{w_{u}},\,n}_{RS,\,ij}\left% (\overline{\mathbf{W}}_{\mathbf{u},\,j\,N_{ij}}^{n}-\overline{\mathbf{W}}_{% \mathbf{u},\,i\,N_{ij}}^{n}\right)$$ $$\displaystyle-\frac{1}{3}\textrm{sign}\left(\breve{\overline{\alpha}}_{RS,\,ij% }^{\mathbf{w_{u},\,n}}\right)\left(\overline{\rho}_{i\,N_{ij}}^{n}+\overline{% \rho}_{j\,N_{ij}}^{n}\right)^{-2}\left(\overline{\mathbf{W}}^{n}_{\mathbf{u},% \,i\,N_{ij}}+\overline{\mathbf{W}}^{n}_{\mathbf{u},\,j\,N_{ij}}\right)$$ $$\displaystyle\left(\overline{\mathbf{W}}_{\mathbf{u},\,i\,N_{ij}}^{n}+% \overline{\mathbf{W}}_{\mathbf{u},\,j\,N_{ij}}^{n}\right)\cdot\boldsymbol{\eta% }_{ij}\left(\overline{\rho}_{j\,N_{ij}}^{n}-\overline{\rho}_{i\,N_{ij}}^{n}% \right).$$ (46) 4.2 Viscous term In this section we describe the computation of the viscous term. First, applying Gauss’ theorem we relate the volume integral of the diffusion term with a surface integral over the boundary, $\Gamma_{i}$. Next, this integral is split into the integrals over the cell interfaces $\Gamma_{ij}$. 
Thus, the viscous term of the momentum conservation equation reads
$$\displaystyle\int_{C_{i}}\operatorname{\mathrm{div}}\tau^{n}dV=\sum_{N_{j}\in\mathcal{K}_{i}}\int_{\Gamma_{ij}}\tau^{n}\boldsymbol{\widetilde{\eta}}_{ij}\,\mathrm{dS}$$ $$\displaystyle=\sum_{N_{j}\in\mathcal{K}_{i}}\int_{\Gamma_{ij}}\mu\left[\operatorname{\mathrm{grad}}\mathbf{U}^{n}+\left(\operatorname{\mathrm{grad}}\mathbf{U}^{n}\right)^{T}-\frac{2}{3}\operatorname{\mathrm{div}}\mathbf{U}^{n}I\right]\boldsymbol{\widetilde{\eta}}_{ij}\,\mathrm{dS},$$ (47)
where a new divergence term has appeared with respect to the incompressible model. We define the numerical diffusion function as an approximation of the interface integral,
$$\displaystyle\boldsymbol{\varphi}_{\mathbf{u}}\left(\mathbf{U}_{i}^{n},\mathbf{U}_{j}^{n},\boldsymbol{\eta}_{ij}\right)\approx\int_{\Gamma_{ij}}\mu\left[\mathrm{grad}\,\mathbf{U}^{n}+\left(\mathrm{grad}\,\mathbf{U}^{n}\right)^{T}-\frac{2}{3}\mathrm{div}\,\mathbf{U}^{n}I\right]\widetilde{\boldsymbol{\eta}}_{ij}\,\mathrm{dS}.$$ (48)
Accounting for the dual mesh structure, we compute the spatial derivatives on the auxiliary tetrahedra $T_{ij}$ through a Galerkin approach. Hence,
$$\boldsymbol{\varphi}_{\mathbf{u}}\left(\mathbf{U}_{i}^{n},\mathbf{U}_{j}^{n},\boldsymbol{\eta}_{ij}\right)=\mu\left(\operatorname{\mathrm{grad}}\mathbf{U}^{n}\right)_{T_{ij}}\boldsymbol{\eta}_{ij}+\mu\left(\operatorname{\mathrm{grad}}\mathbf{U}^{n}\right)_{T_{ij}}^{T}\boldsymbol{\eta}_{ij}-\frac{2}{3}\mu\left(\operatorname{\mathrm{div}}\mathbf{U}^{n}\right)_{T_{ij}}\boldsymbol{\eta}_{ij}.$$ (49)
In order to attain a second order scheme we apply the LADER methodology also to the diffusion term (see [BFTVC18]). That is, we construct evolved variables $\overline{\mathbf{U}}_{i}^{n}$ whose half-step evolution in time accounts only for the diffusion contribution:
1. The gradients of the original variables are computed at each auxiliary tetrahedron of the FE mesh, $T_{ij}$.
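The Galerkin diffusion flux (49) is straightforward to evaluate once the gradient on $T_{ij}$ is available. A sketch, with `grad_U` the $3\times 3$ velocity gradient on the auxiliary tetrahedron and `eta` the scaled normal $\boldsymbol{\eta}_{ij}$:

```python
import numpy as np

def diffusion_flux(grad_U, eta, mu):
    # Eq. (49): mu (grad U) eta + mu (grad U)^T eta - (2/3) mu (div U) eta,
    # all quantities evaluated on the auxiliary tetrahedron T_ij
    div_U = np.trace(grad_U)
    return mu * (grad_U @ eta) + mu * (grad_U.T @ eta) \
        - (2.0 / 3.0) * mu * div_U * eta
```

Note that for a pure dilation ($\operatorname{grad}\mathbf{U}=I$) the combination cancels exactly, as expected of a deviatoric stress.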
The value of the gradient at each node, $N_{i}$, is obtained as the average of the values on the two tetrahedra containing the node, $T_{ijL}$ (green filled triangle in Figure 3) and $T_{ij}$. Taking into account the viscosity coefficients and the turbulent kinetic energy term, we introduce the auxiliary variable:
$$\displaystyle\mathbf{D}^{n}_{i}:=$$ $$\displaystyle\frac{\mu}{2}\left[\left(\mathrm{grad}\,\mathbf{U}^{n}+\left(\mathrm{grad}\,\mathbf{U}^{n}\right)^{T}-\frac{2}{3}\mathrm{div}\left(\mathbf{U}^{n}\right)I\right)_{T_{ijL}}\right.$$ $$\displaystyle\left.+\left(\mathrm{grad}\,\mathbf{U}^{n}+\left(\mathrm{grad}\,\mathbf{U}^{n}\right)^{T}-\frac{2}{3}\mathrm{div}\left(\mathbf{U}^{n}\right)I\right)_{T_{ij}}\right].$$ (50)
2. The evolved variables are obtained from the average of the trace of the gradients of $\mathbf{D}^{n}$ computed on the auxiliary tetrahedra:
$$\overline{\mathbf{U}}_{i}^{n}=\mathbf{U}_{i}^{n}+\frac{\Delta t}{4}\mathrm{tr}\left(\left(\mathrm{grad}\,\mathbf{D}^{n}\right)_{T_{ijL}}+\left(\mathrm{grad}\,\mathbf{D}^{n}\right)_{T_{ij}}\right).$$ (51)
3. The diffusion function $\boldsymbol{\varphi}_{\mathbf{u}}$ is evaluated on the evolved variables:
$$\displaystyle\boldsymbol{\varphi}_{\mathbf{u}}\left(\overline{\mathbf{U}}_{i}^{n},\overline{\mathbf{U}}_{j}^{n},\boldsymbol{\eta}_{ij}\right)=\mu\left(\operatorname{\mathrm{grad}}\overline{\mathbf{U}}^{n}\right)_{T_{ij}}\boldsymbol{\eta}_{ij}+\mu\left(\operatorname{\mathrm{grad}}\overline{\mathbf{U}}^{n}\right)_{T_{ij}}^{T}\boldsymbol{\eta}_{ij}$$ $$\displaystyle-\frac{2}{3}\mu\left(\operatorname{\mathrm{div}}\overline{\mathbf{U}}^{n}\right)_{T_{ij}}\boldsymbol{\eta}_{ij}.$$ (52)
4.3 Pressure term
For the integral of the pressure gradient we follow [BFSVC14].
We split the boundary $\Gamma_{i}$ into the cell interfaces $\Gamma_{ij}$ using Gauss’ theorem and we compute the pressure as the arithmetic mean of its values at the three vertices of the face ${\Gamma}_{ij}$ and at the barycentre of the tetrahedra to which the face belongs. Then, the corresponding approximation of the integral is given by
$$\int_{{\Gamma}_{ij}}\pi^{n}\,\boldsymbol{\widetilde{\eta}}_{ij}\mathrm{dS}\approx\left[\frac{5}{12}\left(\pi^{n}(V_{1})+\pi^{n}(V_{2})\right)+\frac{1}{12}\left(\pi^{n}(V_{3})+\pi^{n}(V_{4})\right)\right]\boldsymbol{\eta}_{ij}.$$ (53)
5 Pre-projection stage
The novel pre-projection stage is devoted to the computation of the source term of equation (10), $Q_{i}^{n+1}$. Since we assume that we are given the mass fractions of the species and the temperature, we compute the density, $\rho^{n+1}_{i}$, by substituting $\mathbf{Y}_{i}^{n+1}$ and $\theta_{i}^{n+1}$ in the state equation, namely,
$$\rho^{n+1}_{i}=\frac{\overline{\pi}^{n+1}}{\mathcal{R}\theta^{n+1}_{i}\displaystyle{\sum_{l=1}^{N_{e}}\frac{Y^{n+1}_{l,\,i}}{\mathcal{M}_{l}}}}.$$ (54)
Finally, the source term of the projection stage reads
$$Q^{n+1}_{i}=\frac{\rho^{n}_{i}-\rho^{n+1}_{i}}{\Delta t}.$$ (55)
Remark 3. Let us notice that if an analytical expression of the density is known, then we can directly evaluate the term
$$q(x,y,z,t^{n})=-\frac{\partial\rho(x,y,z,t^{n})}{\partial t}$$ (56)
so that the exact value of $Q_{i}^{n+1}$ is provided to the mass conservation equation.
Remark 4. If the supplied data is no longer the temperature but the enthalpy, we apply Newton’s method to equation (6),
$$H_{i}^{n+1}=h_{0}+\int_{\theta_{0}}^{\theta^{n+1}_{i}}c_{\pi}(r)\,dr,$$ (57)
to obtain the value of the temperature at node $N_{i}$ at the new time step, $\theta_{i}^{n+1}$.
6 Projection and post-projection stages
Following the methodology already introduced in [BFSVC14] for incompressible flows, equations (9)-(10) are solved via a finite element method.
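The pre-projection stage reduces to two pointwise evaluations, (54) and (55). A sketch, where the thermodynamic values used in the usage comment are illustrative, not taken from the paper:

```python
def mixture_density(pi_bar, R, theta, Y, M):
    # Eq. (54): rho = pi_bar / (R * theta * sum_l Y_l / M_l),
    # with Y the mass fractions and M the molar masses of the species
    return pi_bar / (R * theta * sum(y / m for y, m in zip(Y, M)))

def projection_source(rho_old, rho_new, dt):
    # Eq. (55): Q^{n+1} = (rho^n - rho^{n+1}) / dt
    return (rho_old - rho_new) / dt
```

For instance, a single species with molar mass 0.029 kg/mol at 300 K and thermodynamic pressure 101325 Pa gives a density of about 1.18 kg/m³.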
Firstly, we obtain the weak problem: Find $\delta^{n+1}\in V_{0}:=\left\{z\in H^{1}(\Omega):\int_{\Omega}zdV=0\right\}$ verifying $$\displaystyle\int_{\Omega}\mathrm{grad}\,\delta^{n+1}\cdot\mathrm{grad}\,z\,dV=\frac{1}{\Delta t}\int_{\Omega}\widetilde{\mathbf{W}}^{n+1}_{\mathbf{u}}\cdot\mathrm{grad}\,z\,dV+\frac{1}{\Delta t}\int_{\Omega}Q^{n+1}zdV$$ $$\displaystyle-\frac{1}{\Delta t}\int_{\partial\Omega}G^{n+1}z\,dA$$ (58) for all $z\in V_{0}$. The strong formulation corresponds to the Laplace problem $$\displaystyle\Delta\delta^{n+1}=\frac{1}{\Delta t}\left(\operatorname{\mathrm{div}}\widetilde{\mathbf{W}}_{\mathbf{u}}^{n+1}-Q^{n+1}\right)\quad\textrm{in }\Omega,$$ (59) $$\displaystyle\frac{\partial\delta^{n+1}}{\partial\boldsymbol{\eta}}=\frac{1}{\Delta t}\left(\widetilde{\mathbf{W}}_{\mathbf{u}}^{n+1}\cdot\boldsymbol{\eta}-G^{n+1}\right)\quad\textrm{on }\Gamma.$$ (60) Finally, at the post-projection stage, $\mathbf{W}^{n+1}_{\mathbf{u}}$ is calculated by substituting $\delta^{n+1}$ in $$\mathbf{W}^{n+1}_{\mathbf{u},\,i}=\widetilde{\mathbf{W}}^{n+1}_{\mathbf{u},\,i}+\Delta t\textrm{ grad }\delta^{n+1}_{i}.$$ (61) 7 Boundary conditions The boundary conditions are defined following [BFSVC14]: • Dirichlet boundary conditions for inviscid fluids: the normal component of the conservative variable is set at the boundary nodes. • Dirichlet boundary conditions for viscous fluids: the value of the conservative variable is imposed at the boundary nodes. In the manufactured tests designed to analyse the order of accuracy of the numerical discretizations, it is common practice to impose the values of the exact solution at the boundary nodes. This practice prevents the treatment of the boundary conditions from affecting the accuracy of the method. From the mathematical point of view, this amounts to considering Dirichlet boundary conditions.
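In one space dimension with periodic boundary conditions the projection and post-projection stages can be sketched spectrally. This toy problem is not the paper's finite element solver, and the sign convention for $\delta$ is chosen here so that the corrected field satisfies the divergence constraint exactly.

```python
import numpy as np

# 1D periodic toy of the projection stage: solve a Poisson problem for the
# correction delta and update the predicted field so that d(w)/dx = Q.
n, L, dt = 128, 1.0, 1e-2
x = np.linspace(0.0, L, n, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)      # wavenumbers

w_tilde = np.sin(2 * np.pi * x) + 0.3 * np.cos(4 * np.pi * x)  # predicted field
Q = np.cos(2 * np.pi * x)                                      # target divergence

# d2(delta)/dx2 = (Q - d(w_tilde)/dx) / dt, solved mode by mode
rhs_hat = (np.fft.fft(Q) - 1j * k * np.fft.fft(w_tilde)) / dt
delta_hat = np.zeros_like(rhs_hat)
nz = k != 0
delta_hat[nz] = rhs_hat[nz] / (-k[nz] ** 2)       # zero-mean solution

# post-projection update, cf. (61): w = w_tilde + dt * grad(delta)
w = w_tilde + dt * np.real(np.fft.ifft(1j * k * delta_hat))
div_w = np.real(np.fft.ifft(1j * k * np.fft.fft(w)))
```

The corrected field then satisfies $\mathrm{div}\,w = Q$ to machine precision, which is exactly the role of the projection stage.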
8 Numerical results Two analytical test problems are presented to assess the performance of the methodology. Firstly, we recall the function spaces defined by $$L^{2}(\Omega)=\left\{g:\Omega\rightarrow\mathbb{R}\;|\;g\textrm{ is measurable and }\int_{\Omega}\left|g\right|^{2}dV<\infty\right\},$$ (62) $$l^{2}([a,b])=\left\{g:[a,b]\rightarrow\mathbb{R}\;|\;g\textrm{ is measurable and }\int_{a}^{b}\left|g\right|^{2}dt<\infty\right\}$$ (63) $$l^{2}(L^{2}(\Omega),[a,b])=\left\{g:\Omega\times[a,b]\rightarrow\mathbb{R}\;|\;g\textrm{ is measurable and }\int_{a}^{b}\int_{\Omega}\left|g\right|^{2}\,dV\,dt<\infty\right\}$$ (64) with the corresponding norms $$\left\|g\right\|_{L^{2}(\Omega)}=\left(\int_{\Omega}\left|g\right|^{2}dV\right)^{\frac{1}{2}},$$ (65) $$\left\|g\right\|_{l^{2}([a,b])}=\left(\int_{a}^{b}\left|g\right|^{2}dt\right)^{\frac{1}{2}},$$ (66) $$\left\|g\right\|_{l^{2}(L^{2}(\Omega))}=\left[\int_{a}^{b}\left(\int_{\Omega}\left|g\right|^{2}dV\right)dt\right]^{\frac{1}{2}}.$$ (67) Then, the errors are computed as follows: $$\displaystyle E(\pi)_{M_{i}}=\left\|\pi-\pi_{M_{i}}\right\|_{l^{2}(L^{2}(\Omega))},\quad E(\mathbf{w}_{\mathbf{u}})_{M_{i}}=\left\|\mathbf{w}_{\mathbf{u}}-\mathbf{w}_{\mathbf{u}\,M_{i}}\right\|_{l^{2}(L^{2}(\Omega)^{3})},$$ (68) $$\displaystyle o_{\pi_{M_{i}/M_{j}}}=\frac{\ln\left(E(\pi)_{M_{i}}/E(\pi)_{M_{j}}\right)}{\ln\left(h_{M_{i}}/h_{M_{j}}\right)},\quad o_{\mathbf{w}_{\mathbf{u}\,M_{i}/M_{j}}}=\frac{\ln\left(E(\mathbf{w}_{\mathbf{u}})_{M_{i}}/E(\mathbf{w}_{\mathbf{u}})_{M_{j}}\right)}{\ln\left(h_{M_{i}}/h_{M_{j}}\right)},$$ (69) where $\pi_{M_{i}}$ and $\mathbf{w}_{\mathbf{u}M_{i}}$ denote, respectively, the pressure and the conservative velocity obtained for mesh $M_{i}$, whereas $\pi$ and $\mathbf{w}_{\mathbf{u}}$ are the respective exact values. Regarding the estimation of the time step, a CFL number is set.
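The empirical orders (69) are just ratios of logarithms; a minimal sketch with hypothetical error values:

```python
import math

def convergence_order(e_i, e_j, h_i, h_j):
    # Empirical convergence rate between meshes M_i and M_j, cf. (69).
    return math.log(e_i / e_j) / math.log(h_i / h_j)

# hypothetical errors: halving h divides the error by four -> order 2
order = convergence_order(4.0e-3, 1.0e-3, 1.0 / 8.0, 1.0 / 16.0)
```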
Thus, at each time iteration we determine a local value for the time step at each cell $C_{i}$, $$\Delta t_{C_{i}}=\frac{\mathrm{CFL}\,\mathcal{L}_{i}^{2}}{2\left|\mathbf{U}_{i}\right|\mathcal{L}_{i}+2\mu}$$ (70) with $\mathcal{L}_{i}:=\frac{\left|C_{i}\right|}{\mathrm{S}\left(C_{i}\right)}$. The global time step at each time iteration, $\Delta t$, is chosen as the minimum of the time steps obtained at each cell. 8.1 Test 1. Euler flow As a first test, we consider the computational domain $\Omega=[0,1]^{3}$ and define the flow as $$\displaystyle\rho(x,y,z,t)=\cos(t)+x+1,$$ (71) $$\displaystyle\pi(x,y,z,t)=1,$$ (72) $$\displaystyle\mathbf{u}(x,y,z,t)=\left(\frac{x\sin(t)+1}{\cos(t)+x+1},0,0\right)^{T},$$ (73) $$\displaystyle\mathrm{y}(x,y,z,t)=1,$$ (74) $$\displaystyle\theta(x,y,z,t)=\frac{10^{3}}{\cos(t)+x+1}$$ (75) with $\mu=0$ and $$\displaystyle f_{u_{2}}=f_{u_{3}}=0,$$ (76) $$\displaystyle f_{u_{1}}=x\cos(t)-\frac{(x\sin(t)+1)^{2}}{(x+\cos(t)+1)^{2}}+\frac{(2\sin(t)(x\sin(t)+1))}{x+\cos(t)+1}$$ (77) the source terms related to the momentum equation. Let us notice that we have assumed the mixture to be homogeneous, so we only consider one species; this makes it easier to apply the method of manufactured solutions. Moreover, since the mass fractions of the species are supposed to be known, the performance of the method can be analysed independently of the number of species considered. Dirichlet boundary conditions are set on the boundary. The numerical simulations were run on the meshes defined in Table 1. We have denoted by $N+1$ the number of nodes along the edges of the domain, $h=1/N$, $V_{h}^{m}$ the minimum volume of the finite volumes and $V_{h}^{M}$ the maximum volume of the finite volumes. We present the results obtained at time $t_{end}=1$ and for $CFL=1$. In Table 2, the errors and the convergence rates are depicted. We have considered three different schemes: • The first order scheme constructed from (15) by substituting (36), (49) and (53).
• LADER scheme without the evolution of the density, $\overline{\rho}_{iN_{ij}}$, which results from assembling (15), (46) neglecting the evolution of the density, (52) and (53). • LADER scheme built gathering together (15), (46) with the density values taken from (44)-(45), (52) and (53). We can conclude that, as we have analysed in Appendix A for the unidimensional advection-diffusion-reaction equation, applying LADER methodology to compute the density is crucial to achieve a second order scheme. Remark 5. If the new viscosity term, (35), is not considered, spurious oscillations arise when applying LADER methodology. 8.2 Test 2. Navier-Stokes flow The flow of the second academic test reads $$\displaystyle\rho(x,y,z,t)=\sin\left(\pi yt\right)+2,$$ (78) $$\displaystyle\pi(x,y,z,t)=\exp\left(xyz\right)\cos(t),$$ (79) $$\displaystyle\mathbf{u}(x,y,z,t)=\left((\cos(\pi xt))^{2},\exp(-2\pi yt),-\cos(\pi xyt)\right)^{T},$$ (80) $$\displaystyle\mathrm{y}(x,y,z,t)=1,$$ (81) $$\displaystyle\theta(x,y,z,t)=\frac{10^{3}}{\sin\left(\pi yt\right)+2}.$$ (82) Then, the source terms for the momentum equations read $$\displaystyle f_{u_{1}}=\pi y\cos(\pi tx)^{2}\cos(\pi ty)-4\pi t\sin(\pi tx)\cos(\pi tx)^{3}(\sin(\pi ty)+2)$$ $$\displaystyle-(2\mu(2\pi^{2}t^{2}\cos(\pi tx)^{2}-2\pi^{2}t^{2}\sin(\pi tx)^{2}))/3+2\pi^{2}t^{2}\mu\cos(\pi tx)^{2}$$ $$\displaystyle-2\pi^{2}t^{2}\mu\sin(\pi tx)^{2}+\pi t\exp(-2\pi ty)\cos(\pi tx)^{2}\cos(\pi ty)+yz\exp\left(xyz\right)\cos\left(t\right)$$ $$\displaystyle-2\pi x\sin(\pi tx)\cos(\pi tx)(\sin(\pi ty)+2)-2\pi t\exp(-2\pi ty)\cos(\pi tx)^{2}(\sin(\pi ty)+2),$$ (83) $$\displaystyle f_{u_{2}}=\pi t\exp(-4\pi ty)\cos(\pi ty)+\pi y\exp(-2\pi ty)\cos(\pi ty)$$ $$\displaystyle-(4\pi^{2}t^{2}\mu\exp(-2\pi ty))/3-4\pi t\exp(-4\pi ty)(\sin(\pi ty)+2)+xz\exp\left(xyz\right)\cos\left(t\right)$$ $$\displaystyle-2\pi y\exp(-2\pi ty)(\sin(\pi ty)+2)-2\pi t\sin(\pi tx)\exp(-2\pi ty)\cos(\pi tx)(\sin(\pi ty)+2),$$ (84) $$\displaystyle f_{u_{3}}=2\pi
t\exp(-2\pi ty)\cos(\pi txy)(\sin(\pi ty)+2)-\pi y\cos(\pi txy)\cos(\pi ty)$$ $$\displaystyle+\pi xy\sin(\pi txy)(\sin(\pi ty)+2)-\pi t\exp(-2\pi ty)\cos(\pi txy)\cos(\pi ty)-\pi^{2}t^{2}x^{2}\mu\cos(\pi txy)$$ $$\displaystyle-\pi^{2}t^{2}y^{2}\mu\cos(\pi txy)+2\pi t\sin(\pi tx)\cos(\pi txy)\cos(\pi tx)(\sin(\pi ty)+2)+xy\exp\left(xyz\right)\cos\left(t\right)$$ $$\displaystyle+\pi tx\exp(-2\pi ty)\sin(\pi txy)(\sin(\pi ty)+2)+\pi ty\sin(\pi txy)\cos(\pi tx)^{2}(\sin(\pi ty)+2).$$ (85) We have taken $\mu=10^{-2}$. Besides, in order to satisfy the mass conservation equation we also define a source term for it: $$\displaystyle f_{\rho}=\pi y\cos(\pi ty)+\pi t\exp(-2\pi ty)\cos(\pi ty)$$ $$\displaystyle-2\pi t\exp(-2\pi ty)(\sin(\pi ty)+2)-\pi t\sin(2\pi tx)(\sin(\pi ty)+2).$$ (86) We consider the computational domain used in the previous test as well as the three uniform meshes introduced above. Computations were carried out until time $t_{end}=1$ with $CFL=1$. The obtained numerical results are shown in Table 3. To analyse the performance of the proposed methodology we have run several simulations slightly modifying the proposed schemes. On the one hand, rows denoted by Order 1 $(\partial_{t}\rho)_{\textrm{exact}}$ and LADER $(\partial_{t}\rho)_{\textrm{exact}}$ provide the results obtained when the exact value of the time derivative of the density is imposed in the computation of the pressure correction. We observe that the errors obtained are close to the ones corresponding to the simulations in which this derivative is approximated. On the other hand, in LADER $\rho_{i_{\textrm{exact}}}^{n+\frac{1}{2}\Delta t}$ we assume that the exact values of the density at half in time steps are known; thus, instead of approximating $\overline{\rho}_{iN_{ij}}$, we set $\rho_{i}^{n+\frac{1}{2}\Delta t}$.
The decrease in the accuracy of the method compared with the LADER scheme can be explained by the fact that, when applying LADER methodology, we not only evolve the densities half in time but also extrapolate them in a vicinity of the face. In fact, the errors obtained when applying LADER almost match the ones corresponding to LADER $\overline{\rho}_{i\,N_{ij}\;\textrm{exact}}$. In this last simulation, we calculate the exact gradient of the density to obtain the extrapolation polynomials. Next, we compute the exact value of the density at the half in time step. Therefore, we confirm that coupling both techniques is necessary to attain the order of accuracy sought. This conclusion is reinforced by the studies presented in Appendix A for the advection-diffusion-reaction equation with a time and space dependent advection coefficient. 9 Summary and conclusions In this paper we have developed a projection hybrid high order FV/FE method to solve compressible low Mach number flows. It has been shown that the space dependence of the density may produce spurious oscillations in the solution of the momentum equation. Therefore, two different approaches have been presented aiming to correct this behaviour. As a result, the numerical flux function has been modified by adding a new upwind term. To obtain a high order scheme, LADER methodology has also been used. In order to gain insight into the effects that the variable density has on the accuracy of the scheme, the unidimensional advection-diffusion-reaction equation with a space and time dependent advection coefficient has been examined in the appendix. The corresponding accuracy analysis, together with the empirical convergence rate studies, reveals the necessity of reconstructing and evolving both the conservative variable and the advection coefficient to attain a second order scheme. These conclusions were confirmed by the numerical results obtained for the three-dimensional problem.
Acknowledgements The authors are indebted to Professor E.F. Toro, from the Laboratory of Applied Mathematics, University of Trento, for the useful discussions on the subject. This work was financially supported by Spanish MICINN projects MTM2008-02483, CGL2011-28499-C03-01 and MTM2013-43745-R; by the Spanish MECD under grant FPU13/00279; by the Xunta de Galicia Consellería de Cultura Educación e Ordenación Universitaria under grant Axudas de apoio á etapa predoutoral do Plan I2C; by Xunta de Galicia and FEDER under research project GRC2013-014 and by Fundación Barrié under grant Becas de posgrado en el extranjero. References [BDA${}^{+}$02] J. B. Bell, A. S. Day, A. S. Almgren, M. J. Lijewski, and C. A. Rendleman. A parallel adaptive projection method for low Mach number flows. Int. J. Numer. Methods Fluids, 40:209–216, 2002. [BDDVC98] A. Bermúdez, A. Dervieux, J. A. Desideri, and M. E. Vázquez-Cendón. Upwind schemes for the two-dimensional shallow water equations with variable depth using unstructured meshes. Comput. Methods Appl. Mech. Eng., 155(1):49–72, 1998. [Ber05] A. Bermúdez. Continuum thermomechanics, volume 43 of Progress in Mathematical Physics. Birkhäuser Verlag, Basel, 2005. [BF91] F. Brezzi and M. Fortin. Mixed and hybrid finite element methods, volume 15 of Springer Series in Computational Mathematics. Springer-Verlag, 1991. [BFSVC14] A. Bermúdez, J. L. Ferrín, L. Saavedra, and M. E. Vázquez-Cendón. A projection hybrid finite volume/element method for low-Mach number flows. J. Comput. Phys., 271:360–378, 2014. [BFTVC18] S. Busto, J. L. Ferrín, E. F. Toro, and M. E. Vázquez-Cendón. A projection hybrid high order finite volume/finite element method for incompressible turbulent flows. J. Comput. Phys., 353:169–192, 2018. [BLVC17] A. Bermúdez, X. López, and M. E. Vázquez-Cendón. Finite volume methods for multi-component Euler equations with source terms. Comput. Fluids, 156:113–134, 2017. [BTVC16] S. Busto, E. F. Toro, and M. E. Vázquez-Cendón.
Design and analysis of ADER-type schemes for model advection-diffusion-reaction equations. J. Comput. Phys., 327:553–575, 2016. [BVC94] A. Bermúdez and M. E. Vázquez-Cendón. Upwind methods for hyperbolic conservation laws with source terms. Comput. Fluids, 23(8):1049–1071, 1994. [CKS00] B. Cockburn, G. E. Karniadakis, and C.-W. Shu, editors. Discontinuous Galerkin Methods, volume 11 of Lecture Notes in Computational Science and Engineering. Springer-Verlag Berlin Heidelberg, 2000. [Col90] P. Colella. Multidimensional upwind methods for hyperbolic conservation laws. J. Comput. Phys., 87(1):171–200, 1990. [CS01] B. Cockburn and C. W. Shu. Runge-Kutta discontinuous Galerkin methods for convection-dominated problems. J. Sci. Comput., 16:199–224, 2001. [DHC${}^{+}$10] M. Dumbser, A. Hidalgo, M. Castro, C. Parés, and E. F. Toro. FORCE schemes on unstructured meshes II: Non-conservative hyperbolic systems. Comput. Methods Appl. Mech. Eng., 199:625–647, 2010. [GGHL08] T. Gallouët, L. Gastaldo, R. Herbin, and J.-C. Latché. An unconditionally stable pressure correction scheme for the compressible barotropic Navier-Stokes equations. ESAIM: Mathematical Modelling and Numerical Analysis, 42(2):303–331, 2008. [GLL12] W. Gao, H. Li, and R. Liu. An unstructured finite volume projection method for pulsatile flows through an asymmetric stenosis. J. Eng. Math., 72(1):125–140, 2012. [HFB86] T. J. R. Hughes, L. P. Franca, and M. Balestra. A new finite element formulation for computational fluid mechanics: V. Circumventing the Babuška-Brezzi condition: a stable Petrov-Galerkin formulation of the Stokes problem accommodating equal order interpolation. Comput. Methods Appl. Mech. Eng., 58:85–99, 1986. [PBH04] S. Perron, S. Boivin, and J.-M. Hérard. A finite volume method to solve the 3D Navier-Stokes equations on unstructured collocated meshes. Comput. Fluids, 33(10):1305–1333, 2004. [Rus62] V. V. Rusanov. The calculation of the interaction of non-stationary shock waves and obstacles.
USSR Computational Mathematics and Mathematical Physics, 1:304–320, 1962. [TA07] S. Tu and S. Aliabadi. Development of a hybrid finite volume/element solver for incompressible flows. Int. J. Numer. Methods Fluids, 55(2):177–203, 2007. [TD17] M. Tavelli and M. Dumbser. A pressure-based semi-implicit space-time discontinuous Galerkin method on staggered unstructured meshes for the solution of the compressible Navier-Stokes equations at all Mach numbers. J. Comput. Phys., 341:341–376, 2017. [THD09] E. F. Toro, A. Hidalgo, and M. Dumbser. FORCE schemes on unstructured meshes I: Conservative hyperbolic systems. J. Comput. Phys., 228(9):3368–3389, 2009. [TMN01] E. F. Toro, R. C. Millington, and L. A. M. Nejad. Towards very high order Godunov schemes. In Godunov methods. Springer, 2001. [Tor09] E. F. Toro. Riemann solvers and numerical methods for fluid dynamics: A practical introduction. Springer, 2009. [VC94] M. E. Vázquez-Cendón. Estudio de esquemas descentrados para su aplicación a las leyes de conservación hiperbólicas con términos fuente. PhD thesis, Universidade de Santiago de Compostela, 1994. Appendix A Advection-diffusion-reaction equation with variable advection coefficient The numerical analysis of high order finite volume schemes is too complex in three dimensions. Consequently, a usual practice is to consider a simplified scalar problem which retains the main difficulties of the three-dimensional model but eases the theoretical study of the developed methods. In this Appendix, we apply this technique to analyse the main difficulties that the compressible Navier-Stokes equations entail. The time and space dependency of the density constitutes one of the leading challenges of modelling compressible low Mach number flows. This problem can be transferred to the scalar advection equation by assuming a time and space dependent advection coefficient. Moreover, an arbitrary source term can be introduced.
As a result we obtain the advection-diffusion-reaction equation with variable coefficients, $$\partial_{t}q(x,t)+\partial_{x}\left(\lambda q\right)(x,t)=\partial_{x}\left(\alpha\partial_{x}q\right)(x,t)+s(x,t,q).$$ (87) Let us remark that the corresponding analysis for the advection-diffusion-reaction equation with constant advection coefficient can be found in [BFTVC18] and [BTVC16]. In order to facilitate the theoretical study of advection equations with a variable advection coefficient, we focus on the following scalar advection equation $$\partial_{t}q(x,t)+\partial_{x}\left(\lambda q\right)(x,t)=0.$$ (88) Two main issues must be taken into account with respect to the advection equation with constant coefficient: • A new numerical viscosity related to the spatial derivative of $\lambda(x,t)$ should be included. The main difficulties introduced by the time and space dependency of the advection coefficient have been put forward in [BLVC17] for the multi-component Euler equations. In this article, we follow the methodology already proposed there to solve the non-linear advection equation. • To build a second order in time and space scheme using LADER methodology, extrapolation and half in time evolution of $\lambda(x,t)$ need to be performed. In what follows, we further develop the points above. A.1 Advection term with time and space dependent coefficient The use of the Rusanov scheme to approximate the numerical flux function may not work properly.
Let us remark that the Rusanov flux is divided into two terms: $$\phi(q_{j}^{n},q_{j+1}^{n},\lambda_{j}^{n},\lambda_{j+1}^{n})=\frac{1}{2}\left(\lambda_{j}^{n}q_{j}^{n}+\lambda_{j+1}^{n}q_{j+1}^{n}\right)-\frac{1}{2}\alpha_{RS}\left(q_{j}^{n},q_{j+1}^{n},\lambda_{j}^{n},\lambda_{j+1}^{n}\right)\left(q_{j+1}^{n}-q_{j}^{n}\right)$$ (89) with $$\alpha_{RS}\left(q_{j}^{n},q_{j+1}^{n},\lambda_{j}^{n},\lambda_{j+1}^{n}\right)=\max\left\{\left|\lambda_{j}^{n}\right|,\left|\lambda_{j+1}^{n}\right|\right\}$$ (90) the Rusanov coefficient. The first term corresponds to a centred approximation of the flux. The second one is intended to introduce the numerical viscosity needed for the stability of the scheme. However, splitting the spatial derivative of the flux into two terms, $$\displaystyle\partial_{x}\left(\lambda q\right)(x,t)=$$ $$\displaystyle\,\partial_{q}\left(\lambda q\right)(x,t)\;\partial_{x}q(x,t)+\partial_{\lambda}\left(\lambda q\right)(x,t)\;\partial_{x}\lambda(x,t)$$ $$\displaystyle=$$ $$\displaystyle\,\lambda(x,t)\partial_{x}q(x,t)+q(x,t)\partial_{x}\lambda(x,t),$$ (91) we notice that the Rusanov flux only adds the artificial viscosity related to the first one. Therefore, spurious oscillations may be produced. To correct this lack of upwinding, we propose to introduce a new artificial viscosity term, namely, $$-\left[\partial_{\lambda}\left(\lambda q\right)\Delta\lambda\right]_{|_{j+\frac{1}{2}}}\approx-\frac{1}{2}\mathrm{sign}\left(\breve{\alpha}_{RS}\right)q_{j+\frac{1}{2}}^{n}\left(\lambda_{j+1}^{n}-\lambda_{j}^{n}\right)$$ (92) with $\breve{\alpha}_{RS}$ the value of the eigenvalue used to compute $\alpha_{RS}$, that is, $\left|\breve{\alpha}_{RS}\right|=\alpha_{RS}$.
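A sketch of the resulting flux, combining (89), (90) and (92); the arithmetic mean used here for the interface value $q_{j+\frac{1}{2}}^{n}$ is an assumption made for illustration only.

```python
def sign(x):
    # sign function used in the upwind correction (92)
    return (x > 0) - (x < 0)

def modified_rusanov_flux(qj, qj1, lj, lj1):
    # Centred part and Rusanov viscosity, (89)-(90), plus the new artificial
    # viscosity term (92); q_{j+1/2} is approximated by an arithmetic mean
    # (an assumption, not fixed by the derivation above).
    a_rs = max(abs(lj), abs(lj1))                  # Rusanov coefficient (90)
    a_breve = lj if abs(lj) >= abs(lj1) else lj1   # eigenvalue realising the max
    q_half = 0.5 * (qj + qj1)
    return (0.5 * (lj * qj + lj1 * qj1)
            - 0.5 * a_rs * (qj1 - qj)
            - 0.5 * sign(a_breve) * q_half * (lj1 - lj))

# with a constant positive coefficient the extra term vanishes and the flux
# reduces to pure upwinding, lambda * q_j
f = modified_rusanov_flux(1.0, 2.0, 1.0, 1.0)
```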
Then, the numerical flux on the boundary $x_{j+\frac{1}{2}}$ reads $$\displaystyle\phi(q_{j}^{n},q_{j+1}^{n},\lambda_{j}^{n},\lambda_{j+1}^{n})=$$ $$\displaystyle\frac{1}{2}\left(\lambda_{j}^{n}q_{j}^{n}+\lambda_{j+1}^{n}q_{j+1}^{n}\right)-\frac{1}{2}\alpha_{RS}\left(q_{j}^{n},q_{j+1}^{n},\lambda_{j}^{n},\lambda_{j+1}^{n}\right)\left(q_{j+1}^{n}-q_{j}^{n}\right)$$ $$\displaystyle-\frac{1}{2}\mathrm{sign}\left(\breve{\alpha}_{RS}\left(q_{j}^{n},q_{j+1}^{n},\lambda_{j}^{n},\lambda_{j+1}^{n}\right)\right)q_{j+\frac{1}{2}}^{n}\left(\lambda_{j+1}^{n}-\lambda_{j}^{n}\right).$$ (93) A.2 LADER scheme for advection equation with time and space dependent advection coefficient To attain a second order accurate scheme for equation (88), we apply LADER methodology. Initially, the polynomial reconstruction is performed on the conservative variable, obtaining the following extrapolated values in the neighbourhood of the boundaries of a cell $C_{j}$: $$\displaystyle q_{j-1R}^{n}=q_{j}^{n}+\frac{1}{2}\left(q_{j-1}^{n}-q_{j-2}^{n}\right),$$ (94) $$\displaystyle q_{jL}^{n}=q_{j}^{n}-\frac{1}{2}\left(q_{j+1}^{n}-q_{j}^{n}\right),$$ (95) $$\displaystyle q_{jR}^{n}=q_{j}^{n}+\frac{1}{2}\left(q_{j}^{n}-q_{j-1}^{n}\right),$$ (96) $$\displaystyle q_{j+1L}^{n}=q_{j}^{n}-\frac{1}{2}\left(q_{j+2}^{n}-q_{j+1}^{n}\right),$$ (97) where we have considered the approximations of the spatial derivatives of $q(x,t)$ at time $t^{n}$ related to the auxiliary elements of each volume $C_{j}=\left[x_{j-\frac{1}{2}},x_{j+\frac{1}{2}}\right]$: $$T_{j-1j\,L}=\left[x_{j},x_{j+1}\right],\qquad T_{jj+1\,R}=\left[x_{j-1},x_{j}\right]$$ (98) (see Figure 4).
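The reconstruction step above amounts to one-sided linear extrapolations around each cell value; a sketch for the interior cells (boundary cells, which need special treatment, are omitted):

```python
import numpy as np

def extrapolate(q):
    # Left and right extrapolated states of the interior cells, following
    # the pattern of (95) and (96): one-sided slopes around q_j.
    qL = q[1:-1] - 0.5 * (q[2:] - q[1:-1])   # q_{jL}, cf. (95)
    qR = q[1:-1] + 0.5 * (q[1:-1] - q[:-2])  # q_{jR}, cf. (96)
    return qL, qR

# linear data: the extrapolated states reproduce the exact values at the
# cell boundaries x_j -+ dx/2
dx = 0.1
xj = np.arange(0.0, 1.0 + dx / 2, dx)
qL, qR = extrapolate(2.0 * xj + 1.0)
```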
Next, the resolution of the generalised Riemann problem entails the computation of the half in time evolved values, $$\displaystyle\overline{q}_{j-1R}^{n}=q_{j-1R}^{n}-\frac{\Delta t}{2\Delta x}% \left(\lambda_{j}^{n}q^{n}_{j}-\lambda_{j-1}^{n}q^{n}_{j-1}\right),$$ (99) $$\displaystyle\overline{q}_{jL}^{n}=q_{jL}^{n}-\frac{\Delta t}{2\Delta x}\left(% \lambda_{j}^{n}q^{n}_{j}-\lambda_{j-1}^{n}q^{n}_{j-1}\right),$$ (100) $$\displaystyle\overline{q}_{jR}^{n}=q_{jR}^{n}-\frac{\Delta t}{2\Delta x}\left(% \lambda_{j+1}^{n}q^{n}_{j+1}-\lambda_{j}^{n}q^{n}_{j}\right),$$ (101) $$\displaystyle\overline{q}_{j+1L}^{n}=q_{j+1L}^{n}-\frac{\Delta t}{2\Delta x}% \left(\lambda_{j+1}^{n}q^{n}_{j+1}-\lambda_{j}^{n}q^{n}_{j}\right).$$ (102) Furthermore, by extrapolating the advection coefficient and calculating its half in time evolution, we get $$\displaystyle\overline{\lambda}_{j-1R}^{n}=\lambda_{j-1R}^{n}+\frac{\Delta t}{% 2}\partial_{t}\lambda^{n}_{j-\frac{1}{2}}=\lambda_{j}^{n}+\frac{1}{2}\left(% \lambda_{j-1}^{n}-\lambda_{j-2}^{n}\right)+\frac{\Delta t}{2}\partial_{t}% \lambda^{n}_{j-\frac{1}{2}},$$ (103) $$\displaystyle\overline{\lambda}_{jL}^{n}=\lambda_{jL}^{n}+\frac{\Delta t}{2}% \partial_{t}\lambda^{n}_{j-\frac{1}{2}}=\lambda_{j}^{n}-\frac{1}{2}\left(% \lambda_{j+1}^{n}-\lambda_{j}^{n}\right)+\frac{\Delta t}{2}\partial_{t}\lambda% ^{n}_{j-\frac{1}{2}},$$ (104) $$\displaystyle\overline{\lambda}_{jR}^{n}=\lambda_{jR}^{n}+\frac{\Delta t}{2}% \partial_{t}\lambda^{n}_{j+\frac{1}{2}}=\lambda_{j}^{n}+\frac{1}{2}\left(% \lambda_{j}^{n}-\lambda_{j-1}^{n}\right)+\frac{\Delta t}{2}\partial_{t}\lambda% ^{n}_{j+\frac{1}{2}},$$ (105) $$\displaystyle\overline{\lambda}_{j+1L}^{n}=\lambda_{j+1L}^{n}+\frac{\Delta t}{% 2}\partial_{t}\lambda^{n}_{j+\frac{1}{2}}=\lambda_{j}^{n}-\frac{1}{2}\left(% \lambda_{j+2}^{n}-\lambda_{j+1}^{n}\right)+\frac{\Delta t}{2}\partial_{t}% \lambda^{n}_{j+\frac{1}{2}}.$$ (106) Substituting (99)-(106) in the numerical flux function (93), LADER scheme for the 
advection equation, (88), finally reads $$\displaystyle q_{j}^{n+1}=$$ $$\displaystyle q_{j}^{n}-\frac{\Delta t}{2\Delta x}\left\{\left[\phantom{\frac{1}{2}\!\!\!\!\!}\left(\overline{\lambda}_{jR}^{n}\overline{q}_{jR}^{n}+\overline{\lambda}_{j+1L}^{n}\overline{q}_{j+1L}^{n}\right)-\overline{\alpha}_{RS,\,i+\frac{1}{2}}^{n}\left(\overline{q}_{j+1L}^{n}-\overline{q}_{jR}^{n}\right)\right.\right.$$ $$\displaystyle\left.\left.-\mathrm{sign}\left(\breve{\overline{\alpha}}_{RS,\,i+\frac{1}{2}}^{n}\right)\overline{q}_{j+\frac{1}{2}}^{n}\left(\overline{\lambda}_{j+1L}^{n}-\overline{\lambda}_{jR}^{n}\right)\right]-\left[\phantom{\frac{1}{2}\!\!\!\!\!}\left(\overline{\lambda}_{j-1R}^{n}\overline{q}_{j-1R}^{n}+\overline{\lambda}_{jL}^{n}\overline{q}_{jL}^{n}\right)\right.\right.$$ $$\displaystyle\left.\left.-\overline{\alpha}_{RS,\,i-\frac{1}{2}}^{n}\left(\overline{q}_{jL}^{n}-\overline{q}_{j-1R}^{n}\right)-\mathrm{sign}\left(\breve{\overline{\alpha}}_{RS,\,i-\frac{1}{2}}^{n}\right)\overline{q}_{j-\frac{1}{2}}^{n}\left(\overline{\lambda}_{jL}^{n}-\overline{\lambda}_{j-1R}^{n}\right)\right]\right\}$$ (107) where $$\displaystyle\overline{\alpha}_{RS,\,i-\frac{1}{2}}^{n}=\max\left\{\left|\overline{\lambda}_{j-1R}^{n}\right|,\left|\overline{\lambda}_{jL}^{n}\right|\right\},$$ $$\displaystyle\overline{\alpha}_{RS,\,i+\frac{1}{2}}^{n}=\max\left\{\left|\overline{\lambda}_{jR}^{n}\right|,\left|\overline{\lambda}_{j+1L}^{n}\right|\right\}.$$ A.3 LADER scheme for the advection-diffusion-reaction equation with time and space dependent advection coefficient The construction of a numerical scheme to solve the advection-diffusion-reaction equation with time and space dependent advection and diffusion coefficients requires a modification of the generalised Riemann problem to be solved.
As a result, the approximation of the evolved variables (99)-(102) accounts for the diffusion and reaction terms, $$\displaystyle\overline{q}_{j-1R}^{n}=$$ $$\displaystyle q_{j-1R}^{n}-\frac{\Delta t}{2\Delta x}\left(\lambda_{j}^{n}q^{n}_{j}-\lambda_{j-1}^{n}q^{n}_{j-1}\right)$$ $$\displaystyle+\frac{\Delta t}{2\Delta x^{2}}\left[\alpha_{j}^{n}\left(q_{j+1}^{n}-q_{j}^{n}\right)-\alpha_{j-1}^{n}\left(q_{j-1}^{n}-q_{j-2}^{n}\right)\right]+\frac{\Delta t}{2}s_{j-\frac{1}{2}}^{n},$$ (108) $$\displaystyle\overline{q}_{jL}^{n}=$$ $$\displaystyle q_{jL}^{n}-\frac{\Delta t}{2\Delta x}\left(\lambda_{j}^{n}q^{n}_{j}-\lambda_{j-1}^{n}q^{n}_{j-1}\right)$$ $$\displaystyle+\frac{\Delta t}{2\Delta x^{2}}\left[\alpha_{j}^{n}\left(q_{j+1}^{n}-q_{j}^{n}\right)-\alpha_{j-1}^{n}\left(q_{j-1}^{n}-q_{j-2}^{n}\right)\right]+\frac{\Delta t}{2}s_{j-\frac{1}{2}}^{n},$$ (109) $$\displaystyle\overline{q}_{jR}^{n}=$$ $$\displaystyle q_{jR}^{n}-\frac{\Delta t}{2\Delta x}\left(\lambda_{j+1}^{n}q^{n}_{j+1}-\lambda_{j}^{n}q^{n}_{j}\right)$$ $$\displaystyle+\frac{\Delta t}{2\Delta x^{2}}\left[\alpha_{j+1}^{n}\left(q_{j+2}^{n}-q_{j+1}^{n}\right)-\alpha_{j}^{n}\left(q_{j}^{n}-q_{j-1}^{n}\right)\right]+\frac{\Delta t}{2}s_{j+\frac{1}{2}}^{n},$$ (110) $$\displaystyle\overline{q}_{j+1L}^{n}=$$ $$\displaystyle q_{j+1L}^{n}-\frac{\Delta t}{2\Delta x}\left(\lambda_{j+1}^{n}q^{n}_{j+1}-\lambda_{j}^{n}q^{n}_{j}\right)$$ $$\displaystyle+\frac{\Delta t}{2\Delta x^{2}}\left[\alpha_{j+1}^{n}\left(q_{j+2}^{n}-q_{j+1}^{n}\right)-\alpha_{j}^{n}\left(q_{j}^{n}-q_{j-1}^{n}\right)\right]+\frac{\Delta t}{2}s_{j+\frac{1}{2}}^{n},$$ (111) where $s_{j+\frac{1}{2}}^{n}=s\left(\frac{1}{2}(x_{j}+x_{j+1}),t^{n},q_{j+\frac{1}{2}}^{n}\right)$, $s_{j-\frac{1}{2}}^{n}=s\left(\frac{1}{2}(x_{j-1}+x_{j}),t^{n},q_{j-\frac{1}{2}}^{n}\right)$. These new approximations are substituted into the flux function (93), providing the expression of the numerical flux.
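As a sketch, the evolved right state (110) at a single interior cell $j$ can be computed as follows (`s_face` stands for the source value $s_{j+\frac{1}{2}}^{n}$):

```python
def evolve_right(j, qR, q, lam, alpha, s_face, dt, dx):
    # Half in time evolved right state at cell j, following (110):
    # advective update, centred diffusion and the source contribution.
    adv = -dt / (2.0 * dx) * (lam[j + 1] * q[j + 1] - lam[j] * q[j])
    dif = dt / (2.0 * dx**2) * (alpha[j + 1] * (q[j + 2] - q[j + 1])
                                - alpha[j] * (q[j] - q[j - 1]))
    return qR[j] + adv + dif + 0.5 * dt * s_face

# sanity check: constant data with zero source is left unchanged
q = [1.0] * 6
val = evolve_right(2, q, q, [2.0] * 6, [0.5] * 6, 0.0, 0.1, 0.1)
```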
The diffusion and reaction terms are computed using the midpoint rule. Finally, the LADER scheme for equation (87) reads $$\displaystyle q_{j}^{n+1}=$$ $$\displaystyle q_{j}^{n}-\frac{\Delta t}{2\Delta x}\left\{\left[\left(\overline{\lambda}_{jR}^{n}\overline{q}_{jR}^{n}+\overline{\lambda}_{j+1L}^{n}\overline{q}_{j+1L}^{n}\right)-\overline{\alpha}_{RS,\,j+\frac{1}{2}}^{n}\left(\overline{q}_{j+1L}^{n}-\overline{q}_{jR}^{n}\right)\right.\right.$$ $$\displaystyle\left.\left.-\mathrm{sign}\left(\breve{\overline{\alpha}}_{RS,\,j+\frac{1}{2}}^{n}\right)\overline{q}_{j+\frac{1}{2}}^{n}\left(\overline{\lambda}_{j+1L}^{n}-\overline{\lambda}_{jR}^{n}\right)\right]-\left[\left(\overline{\lambda}_{j-1R}^{n}\overline{q}_{j-1R}^{n}+\overline{\lambda}_{jL}^{n}\overline{q}_{jL}^{n}\right)\right.\right.$$ $$\displaystyle\left.\left.-\overline{\alpha}_{RS,\,j-\frac{1}{2}}^{n}\left(\overline{q}_{jL}^{n}-\overline{q}_{j-1R}^{n}\right)-\mathrm{sign}\left(\breve{\overline{\alpha}}_{RS,\,j-\frac{1}{2}}^{n}\right)\overline{q}_{j-\frac{1}{2}}^{n}\left(\overline{\lambda}_{jL}^{n}-\overline{\lambda}_{j-1R}^{n}\right)\right]\right\}$$ $$\displaystyle+\frac{\Delta t}{\Delta x^{2}}\left\{\left[\alpha^{n}_{j+\frac{1}{2}}+\frac{\Delta t}{2}\partial_{t}\alpha_{j+\frac{1}{2}}^{n}\right]\left[q_{j+1}^{n}-q_{j}^{n}+\frac{\Delta t}{2\Delta x^{2}}\left(\alpha^{n}_{j+\frac{3}{2}}\left(q_{j+2}^{n}-q_{j+1}^{n}\right)\right.\right.\right.$$ $$\displaystyle\left.\left.\left.-2\alpha^{n}_{j+\frac{1}{2}}\left(q_{j+1}^{n}-q_{j}^{n}\right)+\alpha^{n}_{j-\frac{1}{2}}\left(q_{j}^{n}-q_{j-1}^{n}\right)\right)+\frac{\Delta t}{2}\left(s_{j+1}^{n}-s_{j}^{n}\right)\right]\right.$$ $$\displaystyle\left.+\left[\alpha^{n}_{j-\frac{1}{2}}+\frac{\Delta t}{2}\partial_{t}\alpha_{j-\frac{1}{2}}^{n}\right]\left[q_{j-1}^{n}-q_{j}^{n}+\frac{\Delta t}{2\Delta x^{2}}\left(-\alpha^{n}_{j+\frac{1}{2}}\left(q_{j+1}^{n}-q_{j}^{n}\right)\right.\right.\right.$$ $$\displaystyle\left.\left.\left.+2\alpha^{n}_{j-\frac{1}{2}}\left(q_{j}^{n}-q_{j-1}^{n}\right)-\alpha^{n}_{j-\frac{3}{2}}\left(q_{j-1}^{n}-q_{j-2}^{n}\right)\right)+\frac{\Delta t}{2}\left(s_{j-1}^{n}-s_{j}^{n}\right)\right]\right\}$$ $$\displaystyle+s_{j}^{n+\frac{1}{2}}$$ (112) with $s_{j}^{n}=s\left(x_{j},t^{n},q_{j}^{n}\right)$ and $s_{j}^{n+\frac{1}{2}}=s\left(x_{j},t^{n}+\frac{\Delta t}{2},q_{j}^{n+\frac{1}{2}}\right)$.
A.4 Accuracy analysis
Lemma 1. The LADER scheme for the advection equation with a time- and space-dependent advection coefficient, (107), is second-order accurate in space and time.
Proof. We start by computing the local truncation error related to $\overline{\lambda}_{jR}^{n}\overline{q}_{jR}^{n}+\overline{\lambda}_{j+1L}^{n}\overline{q}_{j+1L}^{n}$: $$\displaystyle 2\lambda(x_{j},t^{n})q(x_{j},t^{n})+\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)\Delta x$$ $$\displaystyle+\frac{(\Delta x)^{2}}{2}\left(-\lambda(x_{j},t^{n})\partial_{x}^{(2)}q(x_{j},t^{n})-q(x_{j},t^{n})\partial_{x}^{(2)}\lambda(x_{j},t^{n})+\partial_{x}\lambda(x_{j},t^{n})\partial_{x}q(x_{j},t^{n})\right)$$ $$\displaystyle-\Delta t\left[\lambda(x_{j},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)-q(x_{j},t^{n})\partial_{t}\lambda(x_{j+\frac{1}{2}},t^{n})\right]$$ $$\displaystyle-\frac{\Delta t\Delta x}{2}\left[\lambda(x_{j},t^{n})\partial_{x}^{(2)}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)+\partial_{x}\lambda(x_{j},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)\right.$$ $$\displaystyle\left.-\partial_{t}\lambda(x_{j+\frac{1}{2}},t^{n})\partial_{x}q(x_{j},t^{n})\right]-\frac{(\Delta t)^{2}}{2}\partial_{t}\lambda(x_{j+\frac{1}{2}},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)+\mathcal{O}(\Delta x^{3}).$$ Similarly, the local truncation error of $\overline{\lambda}_{j-1R}^{n}\overline{q}_{j-1R}^{n}+\overline{\lambda}_{jL}^{n}\overline{q}_{jL}^{n}$ reads $$\displaystyle 2\lambda(x_{j},t^{n})q(x_{j},t^{n})-\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)\Delta x$$ $$\displaystyle+\frac{(\Delta x)^{2}}{2}\left(-\lambda(x_{j},t^{n})\partial_{x}^{(2)}q(x_{j},t^{n})-q(x_{j},t^{n})\partial_{x}^{(2)}\lambda(x_{j},t^{n})+\partial_{x}\lambda(x_{j},t^{n})\partial_{x}q(x_{j},t^{n})\right)$$ $$\displaystyle-\Delta t\left[\lambda(x_{j},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)-q(x_{j},t^{n})\partial_{t}\lambda(x_{j-\frac{1}{2}},t^{n})\right]$$ $$\displaystyle+\frac{\Delta t\Delta x}{2}\left[\lambda(x_{j},t^{n})\partial_{x}^{(2)}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)+\partial_{x}\lambda(x_{j},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)\right.$$ $$\displaystyle\left.-\partial_{t}\lambda(x_{j-\frac{1}{2}},t^{n})\partial_{x}q(x_{j},t^{n})\right]-\frac{(\Delta t)^{2}}{2}\partial_{t}\lambda(x_{j-\frac{1}{2}},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)+\mathcal{O}(\Delta x^{3}).$$ Finally, it is verified that $\overline{\alpha}_{RS,\,j+\frac{1}{2}}^{n}\left(\overline{q}_{j+1L}^{n}-\overline{q}_{jR}^{n}\right)$, $\overline{\alpha}_{RS,\,j-\frac{1}{2}}^{n}\left(\overline{q}_{jL}^{n}-\overline{q}_{j-1R}^{n}\right)$, $\overline{q}_{j+\frac{1}{2}}^{n}\left(\overline{\lambda}_{j+1L}^{n}-\overline{\lambda}_{jR}^{n}\right)$ and $\overline{q}_{j-\frac{1}{2}}^{n}\left(\overline{\lambda}_{jL}^{n}-\overline{\lambda}_{j-1R}^{n}\right)$ have a local truncation error of $\mathcal{O}(\Delta x^{3})$.
Hence, $$\displaystyle\tau^{n}=$$ $$\displaystyle\,\partial_{t}q(x_{j},t^{n})+\partial_{x}\left(q(x_{j},t^{n})\lambda(x_{j},t^{n})\right)+\frac{\Delta t}{2\Delta x}q(x_{j},t^{n})\left(\partial_{t}\lambda(x_{j+\frac{1}{2}},t^{n})-\partial_{t}\lambda(x_{j-\frac{1}{2}},t^{n})\right)$$ $$\displaystyle+\frac{\Delta t}{2}\partial_{t}^{(2)}q(x_{j},t^{n})+\frac{\Delta t}{4}\left(\partial_{t}\lambda(x_{j+\frac{1}{2}},t^{n})+\partial_{t}\lambda(x_{j-\frac{1}{2}},t^{n})\right)\partial_{x}q(x_{j},t^{n})$$ $$\displaystyle-\frac{\Delta t}{2}\left[\lambda(x_{j},t^{n})\partial_{x}^{(2)}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)+\partial_{x}\lambda(x_{j},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)\right]$$ $$\displaystyle+\mathcal{O}\left(\Delta x^{2}\right)+\mathcal{O}\left(\Delta t^{2}\right)+\mathcal{O}\left(\Delta t\Delta x\right).$$ (113) Moreover, applying the Taylor series expansion in space of $\lambda(x_{j+\frac{1}{2}},t^{n})$ and $\lambda(x_{j-\frac{1}{2}},t^{n})$, we obtain $$\displaystyle\frac{\Delta t}{2\Delta x}q(x_{j},t^{n})\left(\partial_{t}\lambda(x_{j+\frac{1}{2}},t^{n})-\partial_{t}\lambda(x_{j-\frac{1}{2}},t^{n})\right)=\frac{\Delta t}{2}q(x_{j},t^{n})\partial_{t}\left(\partial_{x}\lambda(x_{j},t^{n})\right)+\mathcal{O}\left(\Delta x\Delta t\right),$$ $$\displaystyle\frac{\Delta t}{4}\left(\partial_{t}\lambda(x_{j+\frac{1}{2}},t^{n})+\partial_{t}\lambda(x_{j-\frac{1}{2}},t^{n})\right)\partial_{x}q(x_{j},t^{n})=\frac{\Delta t}{2}\partial_{t}\lambda(x_{j},t^{n})\partial_{x}q(x_{j},t^{n})+\mathcal{O}\left(\Delta x\Delta t\right).$$ Therefore, $$\displaystyle\tau^{n}=$$ $$\displaystyle\,\partial_{t}q(x_{j},t^{n})+\partial_{x}\left(q(x_{j},t^{n})\lambda(x_{j},t^{n})\right)+\frac{\Delta t}{2}\partial_{t}^{(2)}q(x_{j},t^{n})$$ $$\displaystyle+\frac{\Delta t}{2}q(x_{j},t^{n})\partial_{t}\left(\partial_{x}\lambda(x_{j},t^{n})\right)+\frac{\Delta t}{2}\partial_{t}\lambda(x_{j},t^{n})\partial_{x}q(x_{j},t^{n})$$
$$\displaystyle-\frac{\Delta t}{2}\left[\lambda(x_{j},t^{n})\partial_{x}^{(2)}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)+\partial_{x}\lambda(x_{j},t^{n})\partial_{x}\left(\lambda(x_{j},t^{n})q(x_{j},t^{n})\right)\right]$$ $$\displaystyle+\mathcal{O}\left(\Delta x^{2}\right)+\mathcal{O}\left(\Delta t^{2}\right)+\mathcal{O}\left(\Delta t\Delta x\right)$$ $$\displaystyle=$$ $$\displaystyle\,\mathcal{O}\left(\Delta x^{2}\right)+\mathcal{O}\left(\Delta t^{2}\right)+\mathcal{O}\left(\Delta t\Delta x\right),$$ (114) where we have taken into account that the time derivative of equation (88) reads $$\displaystyle\partial_{t}^{(2)}q+\partial_{x}q\,\partial_{t}\lambda+q\,\partial_{t}\left(\partial_{x}\lambda\right)-\partial_{x}\lambda\,\partial_{x}\left(\lambda q\right)-\lambda\,\partial_{x}^{(2)}\left(\lambda q\right)=0.$$ (115) ∎
Remark 6. Let us notice that, when applying the LADER methodology to the advection equation with a space- and time-dependent advection coefficient, we must consider the extrapolated and evolved values of the two variables involved in equation (88), $\lambda$ and $q$. Otherwise, the order of the resulting scheme is lower than two. We sketch the proof of this statement. If we do not evolve the advection coefficient in time, then no time derivative of it appears in the final local truncation error expression analogous to (114). On the other hand, equation (115) still holds. Since this relation can no longer be used to cancel the terms obtained at the last step of the truncation error analysis, the first-order terms cannot be removed. As a result, we obtain a first-order scheme. A similar argument applies if $q$ is not properly evolved.
Lemma 2. The LADER scheme for the advection-diffusion-reaction equation with time- and space-dependent advection and diffusion coefficients is second-order accurate in space and time.
Proof.
The result is derived from Lemma 1 following the calculus presented in [BFTVC18] for the advection-diffusion-reaction equation with a constant advection coefficient. ∎
A.5 Numerical results
To assess the accuracy of the LADER scheme (107), we present several analytical tests including space- and time-dependent advection coefficients. In fact, the method of manufactured solutions is applied in order to obtain a generalized source term assuming a given expression for the exact solution. Dirichlet boundary conditions are considered in all the presented tests.
A.5.1 Test A1
As a first test, we assume a space-dependent advection coefficient of the form $\lambda(x,t)=x+2$ and an exact solution given by $q(x,t)=e^{-2(x-t)^{2}-t}$. Applying the method of manufactured solutions to compute the source term of the corresponding advection equation, the problem to be solved reads $$\displaystyle\partial_{t}q(x,t)+\partial_{x}\left[\lambda(x,t)q\left(x,t\right)\right]=s(x,t),$$ $$\displaystyle q(x,0)=e^{-2x^{2}},$$ (116) with $$\displaystyle s(x,t)=4(x-t)(-1-x)e^{-2(x-t)^{2}-t}.$$ (117) The simulations were run considering six uniform meshes on the computational domain $\Omega=[0,2]$. Table 4 reports the errors and convergence rates obtained using the first-order scheme with the numerical flux function defined by (93). The attained orders match the theoretically expected ones. In Table 5, we present the results obtained by applying the first-order scheme without the extra viscosity term in the flux, that is, assuming the numerical flux function is given by (89). We observe that the results are not strongly affected by the lack of the upwind term. This is consistent with the truncation error analysis performed above, which proves that the new term is of first order, so it is needed only when applying high-order schemes. The need to introduce the new artificial viscosity term related to the advection coefficient is evidenced in Tables 6 and 7.
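The manufactured source term of Test A1 can be verified directly: substituting the exact solution into the advection equation gives $s(x,t)=4(x-t)(-1-x)e^{-2(x-t)^{2}-t}$. The following minimal check (ours, not part of the paper's code; the sample points and step size are arbitrary choices) evaluates the PDE residual $\partial_{t}q+\partial_{x}(\lambda q)-s$ with central differences:

```python
import numpy as np

def q(x, t):
    """Exact solution of Test A1."""
    return np.exp(-2.0 * (x - t) ** 2 - t)

def lam(x, t):
    """Advection coefficient of Test A1."""
    return x + 2.0

def s(x, t):
    """Manufactured source obtained from dt(q) + dx(lam * q)."""
    return 4.0 * (x - t) * (-1.0 - x) * np.exp(-2.0 * (x - t) ** 2 - t)

# Central-difference residual of the PDE at a few sample points.
h = 1e-5
x = np.linspace(0.1, 1.9, 9)
t = 0.37
dq_dt = (q(x, t + h) - q(x, t - h)) / (2.0 * h)
dflux_dx = (lam(x + h, t) * q(x + h, t) - lam(x - h, t) * q(x - h, t)) / (2.0 * h)
residual = dq_dt + dflux_dx - s(x, t)
print(float(np.max(np.abs(residual))))
```

The residual is at the level of the finite-difference truncation error, confirming the consistency of the source term with the exact solution.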
The first of them was obtained by applying the LADER scheme with the flux function that neglects the new numerical viscosity term, (89). Neglecting this upwind term in the numerical flux function, the LADER methodology does not achieve second order of accuracy. Moreover, spurious oscillations arise. On the other hand, Table 7 confirms that scheme (112) reaches the expected second order of accuracy. Finally, the results presented in Table 8 confirm that second order cannot be reached if we do not apply the LADER methodology to the advection coefficient, that is, if we consider the values of $\lambda$ at the previous time step, neglecting their extrapolation to the cell boundaries and their half-in-time evolution. Furthermore, instability occurs.
A.5.2 Test A2
The second proposed test with a variable advection coefficient reads $$\displaystyle\partial_{t}q(x,t)+\partial_{x}\left[\lambda\left(x,t\right)q\left(x,t\right)\right]=s(x,t),$$ $$\displaystyle q(x,0)=e^{-2x^{2}},$$ (118) where $$\displaystyle\lambda(x,t)=x+t^{2}+2,\qquad s(x,t)=4(x-t)\left(-1-x-t^{2}\right)e^{-2(x-t)^{2}-t}.$$ (119) Let us notice that the advection coefficient considered is time and space dependent. Furthermore, we have included a generalized source term so that the exact solution is $$q\left(x,t\right)=e^{-2(x-t)^{2}-t}.$$ (120) The results obtained using the first-order scheme are depicted in Table 9. Regarding the LADER scheme, this test case requires an ENO reconstruction of the conservative variable to avoid instabilities. Moreover, the test case is set to push the capabilities of the numerical scheme to the limit. This leads to a decrease in the attained order of accuracy (see Table 10).
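To illustrate the first-order baseline on Test A2, the following sketch solves (118)-(119) with a simple first-order finite-volume scheme. This is our own minimal implementation, not the paper's scheme: it uses a Rusanov-type flux in place of the paper's flux (93), forward Euler in time, ghost cells filled from the exact Dirichlet data, and CFL number and final time of our choosing. Refining the mesh should reduce the error, consistent with first-order convergence:

```python
import numpy as np

def q_exact(x, t):
    return np.exp(-2.0 * (x - t) ** 2 - t)

def lam(x, t):
    return x + t ** 2 + 2.0

def source(x, t):
    return 4.0 * (x - t) * (-1.0 - x - t ** 2) * np.exp(-2.0 * (x - t) ** 2 - t)

def solve(n_cells, t_end=0.5, cfl=0.4):
    """First-order finite-volume scheme with a Rusanov flux on [0, 2]."""
    dx = 2.0 / n_cells
    xc = (np.arange(n_cells) + 0.5) * dx          # cell centres
    q = q_exact(xc, 0.0)
    t = 0.0
    while t < t_end - 1e-12:
        a = np.abs(lam(xc, t)).max()
        dt = min(cfl * dx / a, t_end - t)
        # Ghost cells from the exact (Dirichlet) boundary data.
        xg = np.concatenate(([xc[0] - dx], xc, [xc[-1] + dx]))
        qg = np.concatenate(([q_exact(xg[0], t)], q, [q_exact(xg[-1], t)]))
        lg = lam(xg, t)
        # Rusanov flux at the n_cells + 1 cell interfaces.
        ql, qr = qg[:-1], qg[1:]
        ll, lr = lg[:-1], lg[1:]
        amax = np.maximum(np.abs(ll), np.abs(lr))
        flux = 0.5 * (ll * ql + lr * qr) - 0.5 * amax * (qr - ql)
        q = q - dt / dx * (flux[1:] - flux[:-1]) + dt * source(xc, t)
        t += dt
    return np.max(np.abs(q - q_exact(xc, t_end)))

err_coarse, err_fine = solve(100), solve(200)
print(err_coarse, err_fine)
```

Doubling the resolution roughly halves the maximum error, matching the first-order behaviour reported for the baseline scheme (Table 9).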
A.6 Test A3
As a last test, we propose the following initial value problem for the advection-diffusion-reaction equation with variable advection and diffusion coefficients: $$\displaystyle\partial_{t}q(x,t)+\partial_{x}\left[\lambda(x,t)q\left(x,t\right)\right]+\partial_{x}\left[\alpha(x,t)\partial_{x}q(x,t)\right]=s(x,t),$$ $$\displaystyle q(x,0)=e^{-2x^{2}},$$ (121) where $$\displaystyle\lambda(x,t)=2+x+t^{2},$$ (122) $$\displaystyle\alpha(x,t)=10^{-5}e^{x(t-1)^{2}},$$ (123) $$\displaystyle s(x,t)=$$ $$\displaystyle-e^{-2(x-t)^{2}-t}+4e^{-2(x-t)^{2}-t}(x-t)(-1-x-t^{2})+e^{-2(x-t)^{2}-t}$$ $$\displaystyle+10^{-5}(t-1)^{2}e^{x(t-1)^{2}}(-4(x-t)e^{-2(x-t)^{2}-t})$$ $$\displaystyle+10^{-5}e^{x(t-1)^{2}}(-4+16(x-t)^{2})e^{-2(x-t)^{2}-t}.$$ (124) Herein, the exact solution reads $$q\left(x,t\right)=e^{-2(x-t)^{2}-t}.$$ (125) The results obtained using the first-order scheme and the LADER methodology with ENO reconstruction are shown in Tables 11 and 12. They reinforce the conclusions already drawn from the previous tests. In Table 13, we present the errors and orders obtained for the LADER methodology when neglecting the artificial viscosity term related to the gradient of the advection coefficient. We observe that spurious oscillations arise if this term is not included. Hence, in order to avoid instabilities, an ENO-based reconstruction needs to be performed, resulting in a decrease of the order of accuracy. Nevertheless, the magnitude of the obtained errors is lower than for the first-order scheme.
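The lengthy source term (124) can be sanity-checked in the same manufactured-solution spirit, by substituting the exact solution (125) into equation (121). A minimal numerical check (our sketch; it uses the analytic spatial derivative $\partial_{x}q=-4(x-t)q$ for the inner diffusion derivative, and arbitrary sample points):

```python
import numpy as np

def q(x, t):
    return np.exp(-2.0 * (x - t) ** 2 - t)

def lam(x, t):
    return 2.0 + x + t ** 2

def alpha(x, t):
    return 1e-5 * np.exp(x * (t - 1.0) ** 2)

def dq_dx(x, t):
    """Analytic spatial derivative of the exact solution."""
    return -4.0 * (x - t) * q(x, t)

def s(x, t):
    """Source term (124), transcribed term by term."""
    e = np.exp(-2.0 * (x - t) ** 2 - t)
    ea = np.exp(x * (t - 1.0) ** 2)
    return (-e + 4.0 * e * (x - t) * (-1.0 - x - t ** 2) + e
            + 1e-5 * (t - 1.0) ** 2 * ea * (-4.0 * (x - t) * e)
            + 1e-5 * ea * (-4.0 + 16.0 * (x - t) ** 2) * e)

h = 1e-5
x = np.linspace(0.1, 1.9, 9)
t = 0.6
dq_dt = (q(x, t + h) - q(x, t - h)) / (2.0 * h)
adv = (lam(x + h, t) * q(x + h, t) - lam(x - h, t) * q(x - h, t)) / (2.0 * h)
diff = (alpha(x + h, t) * dq_dx(x + h, t) - alpha(x - h, t) * dq_dx(x - h, t)) / (2.0 * h)
residual = dq_dt + adv + diff - s(x, t)
print(float(np.max(np.abs(residual))))
```

Note that the diffusion term enters with the $+$ sign, exactly as written in (121), so the manufactured source absorbs it with the same sign.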
Ensemble linear interpolators: The role of ensembling
Mingqi Wu Department of Mathematics and Statistics, McGill University, 805 Sherbrooke Street West, Montreal, Quebec H3A 0B9, Canada; Email: [email protected].    Qiang Sun Corresponding author. Department of Statistical Sciences, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada; Email: [email protected]. (August 31, 2023)
Abstract
Interpolators are unstable. For example, the minimum $\ell_{2}$ norm least square interpolator exhibits unbounded test errors when dealing with noisy data. In this paper, we study how ensembling stabilizes and thus improves the generalization performance, measured by the out-of-sample prediction risk, of an individual interpolator. We focus on bagged linear interpolators, as bagging is a popular randomization-based ensemble method that can be implemented in parallel. We introduce the multiplier-bootstrap-based bagged least square estimator, which can then be formulated as an average of the sketched least square estimators. The proposed multiplier bootstrap encompasses the classical bootstrap with replacement as a special case, along with a more intriguing variant which we call the Bernoulli bootstrap. Focusing on the proportional regime where the sample size scales proportionally with the feature dimensionality, we investigate the out-of-sample prediction risks of the sketched and bagged least square estimators in both underparameterized and overparameterized regimes. Our results reveal the statistical roles of sketching and bagging. In particular, sketching modifies the aspect ratio and shifts the interpolation threshold of the minimum $\ell_{2}$ norm estimator. However, the risk of the sketched estimator continues to be unbounded around the interpolation threshold due to excessive variance. In stark contrast, bagging effectively mitigates this variance, leading to a bounded limiting out-of-sample prediction risk.
To further understand this stability improvement property, we establish that bagging acts as a form of implicit regularization, substantiated by the equivalence of the bagged estimator with its explicitly regularized counterpart. We also discuss several extensions.
Keywords: Bagging, ensemble, interpolators, overparameterization, random matrix theory, sketching.
Contents
1 Introduction
2 Bagged linear interpolators
 2.1 Bagged linear interpolators
 2.2 Risk, bias, and variance
 2.3 Standing assumptions
3 A warm-up: Isotropic features
 3.1 Sketching shifts the interpolation threshold
 3.2 Bagging stabilizes the generalization performance
 3.3 Bagging as implicit regularization
4 Correlated features
 4.1 Sketching under correlated features
 4.2 Bagging under correlated features
 4.3 Bagging as implicit regularization
5 Extensions
 5.1 Deterministic signal
 5.2 Training error
 5.3 Adversarial risk
6 Conclusions and discussions
S.1 Basics
 S.1.1 Proofs for Section S.1
  S.1.1.1 Proof of Lemma S.1.2
  S.1.1.2 Proof of Lemma S.1.3
  S.1.1.3 Proof of Lemma S.1.4
 S.1.2 Technical lemmas
S.2 Proofs for Section 2
 S.2.1 Proof of Lemma 2.1
 S.2.2 Proof of Lemma 2.2
 S.2.3 Proof of Lemma 2.3
S.3 Proofs for Section 3
 S.3.1 Proof of Lemma 3.1
 S.3.2 Proof of Theorem 3.2
 S.3.3 Proof of Corollary 3.3
 S.3.4 Proof of Corollary 3.4
 S.3.5 Proof of Corollary 3.5
 S.3.6 Proof of Theorem 3.6
  S.3.6.1 A direct proof
  S.3.6.2 As the isotropic case of Theorem 4.3
 S.3.7 Proof of Lemma 3.7
 S.3.8 Proof of Lemma 3.8
 S.3.9 Supporting lemmas
  S.3.9.1 Proof of Lemma S.3.2
  S.3.9.2 Proof of Lemma S.3.3
  S.3.9.3 Proof of Lemma S.3.4
  S.3.9.4 Proof of Lemma S.3.5
  S.3.9.5 Proof of Lemma S.3.6
  S.3.9.6 Proof of Lemma S.3.7
 S.3.10 Technical lemmas
S.4 Proofs for Section 4
 S.4.1 Proof of Lemma 4.1
 S.4.2 Proof of Theorem 4.2
 S.4.3 Proof of Theorem 4.3
 S.4.4 Proof of Corollary 4.4
 S.4.5 Proof of Corollary 4.5
 S.4.6 Supporting lemmas
  S.4.6.1 Proof of Lemma S.4.1
  S.4.6.2 Proof of Lemma S.4.2
  S.4.6.3 Proof of Lemma S.4.3
  S.4.6.4 Proof of Lemma S.4.4
  S.4.6.5 Proof of Lemma S.4.5
  S.4.6.6 Proof of Lemma S.4.6
  S.4.6.7 Proof of Lemma S.4.7
 S.4.7 Technical lemmas
S.5 Proofs for Section 5
 S.5.1 Proof of Theorem 5.1
 S.5.2 Proof of Corollary 5.2
 S.5.3 Proof of Theorem 5.3
 S.5.4 Proof of Lemma 5.4
 S.5.5 Proof of Lemma 5.5
 S.5.6 Technical lemmas
S.6 Preliminary lemmas
1 Introduction
Deep neural networks have achieved remarkable performance across a broad spectrum of tasks, such as computer vision, speech recognition, and natural language processing (LeCun et al., 2015; Schmidhuber, 2015; Wei et al., 2023). These networks are often overparameterized, allowing them to effortlessly interpolate the training data and achieve zero training errors. However, the success of overparameterized neural networks comes at the cost of heightened instability. For example, neural networks can fail dramatically in the presence of subtle and almost imperceptible image perturbations in image classification tasks (Goodfellow et al., 2018), or they may struggle to capture signals in low signal-to-noise ratio scenarios, such as minor structural variations in image reconstruction tasks (Antun et al., 2020). The instability inherent to overparameterized models, or interpolators, extends beyond deep neural networks; even simple models, such as trees and linear models, exhibit instability, as evidenced by their unbounded test errors (Wyner et al., 2017; Belkin et al., 2019; Hastie et al., 2022). In this paper, we focus on least square interpolators. Consider independently and identically distributed (i.i.d.)
data $\mathcal{D}:=\{(x_{i},y_{i}):1\leq i\leq n\}$, which are generated according to the model $$\displaystyle(x_{i},\varepsilon_{i})\sim(x,\varepsilon)\sim P_{x}\times P_{\varepsilon},$$ $$\displaystyle y_{i}=x_{i}^{\mathrm{\scriptscriptstyle T}}\beta+\varepsilon_{i},~{}~{}1\leq i\leq n,$$ (1.1) where $P_{x}$ is a distribution on $\mathbb{R}^{d}$ and $P_{\varepsilon}$ is a distribution on $\mathbb{R}$ with mean $0$ and variance $\sigma^{2}$. In a matrix form, we can write $Y=X\beta+E,$ where $Y=(y_{1},\ldots,y_{n})^{\mathrm{\scriptscriptstyle T}}$, $X=(x_{1},\ldots,x_{n})^{\mathrm{\scriptscriptstyle T}}$, and $E=(\varepsilon_{1},\ldots,\varepsilon_{n})^{\mathrm{\scriptscriptstyle T}}$. Let the minimum $\ell_{2}$ norm, or min-norm for short, least square estimator (Hastie et al., 2022) be $$\displaystyle\widehat{\beta}^{\rm mn}$$ $$\displaystyle:=\mathop{\mathrm{argmin}}\left\{\|b\|_{2}:~{}b~{}\text{minimizes}~{}\|Y-Xb\|_{2}^{2}\right\}$$ $$\displaystyle=(X^{\mathrm{\scriptscriptstyle T}}X)^{+}X^{\mathrm{\scriptscriptstyle T}}Y=\lim_{\lambda\rightarrow 0^{+}}(X^{\mathrm{\scriptscriptstyle T}}X+n\lambda I)^{-1}{X}^{\mathrm{\scriptscriptstyle T}}{Y},$$ where $(\cdot)^{+}$ denotes the Moore-Penrose pseudoinverse, and $I\in\mathbb{R}^{d\times d}$ is the identity matrix. This estimator is also called the ridgeless least square estimator. When $X$ possesses full row rank, typically the case for $d>n$, the min-norm estimator becomes an interpolator, implying $y_{i}=x_{i}^{\mathrm{\scriptscriptstyle T}}\widehat{\beta}^{\rm mn}$ for $1\leq i\leq n$. The least square interpolator $\widehat{\beta}^{\rm mn}$ has unbounded test errors. To see this, Figure 1 plots its out-of-sample prediction risks versus the dimension-to-sample-size ratio $\gamma=d/n$ (to be rigorous, $\gamma=d/n$ should be understood as $\gamma\simeq d/n$, that is, $\gamma$ is asymptotically equivalent to $d/n$; here $a_{n}\simeq b_{n}$ means $\lim_{n\rightarrow\infty}a_{n}/b_{n}=1$), which is also referred to as the aspect ratio.
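The min-norm estimator defined above, its ridgeless-limit characterization, and the interpolation property can all be illustrated in a few lines. This is a sketch of ours; the dimensions $n=20$, $d=50$, the noise level, and the small ridge parameter are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 50                      # overparameterized regime: d > n
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta + 0.5 * rng.standard_normal(n)

# Min-norm least squares via the Moore-Penrose pseudoinverse.
beta_mn = np.linalg.pinv(X) @ y

# (i) X has full row rank almost surely, so the estimator interpolates.
print(np.allclose(X @ beta_mn, y))                    # True

# (ii) It agrees with ridge regression in the ridgeless limit.
lam_ = 1e-7
beta_ridge = np.linalg.solve(X.T @ X + n * lam_ * np.eye(d), X.T @ y)
print(np.allclose(beta_mn, beta_ridge, atol=1e-4))    # True
```

Taking $\lambda$ small rather than exactly zero sidesteps the singularity of $X^{\mathrm{T}}X$ when $d>n$, mirroring the $\lambda\rightarrow 0^{+}$ limit in the display above.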
Notably, the out-of-sample prediction risk spikes around the interpolation threshold $\gamma=1$, leading to highly unstable predictions if the model size, and thus the aspect ratio, is not judiciously chosen. Recent empirical studies showed that ensembles of interpolators, including random forests and deep ensembles, improve the predictive performance of individual interpolators (Lee et al., 2015; Wyner et al., 2017). For instance, Wyner et al. (2017) interpreted random forests as “self-averaging interpolators” and hypothesized that such a behavior led to their success. Similarly, Lee et al. (2015) demonstrated that deep ensembles, employing various ensembling strategies, effectively enhance the predictive performance of individual networks. However, the mechanisms through which ensembling fosters stability and improves the generalization performance of individual interpolators remain elusive. Therefore, we ask the following questions: What is the statistical role of ensembling? How does ensembling offer stability and predictive improvement for individual interpolators? This paper addresses the above questions in the context of bagged least square interpolators. Bagging, an abbreviation for bootstrap aggregating (Breiman, 1996), represents a popular randomization-based ensemble technique that lends itself to parallel implementation. Given a training dataset $\mathcal{D}$ comprising $n$ instances, bagging generates $B$ subsamples $\{\mathcal{D}_{1},\ldots,\mathcal{D}_{B}\}$, each of size $n$, through uniform sampling with replacement from $\mathcal{D}$. This sampling procedure is called the classical bootstrap, and each of these subsamples is referred to as a bootstrap sample. Subsequently, $B$ min-norm least square estimators are individually fitted to these $B$ bootstrap samples, which are then aggregated to produce the final estimator.
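The classical-bootstrap bagging procedure just described can be sketched directly (our minimal implementation; $n$, $d$, $B$, and the noise level are arbitrary). One deterministic consequence of averaging is worth noting: by convexity of the squared norm, the error of the averaged estimator can never exceed the average error of the individual bootstrap fits:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, B = 40, 60, 25
X = rng.standard_normal((n, d))
beta = rng.standard_normal(d) / np.sqrt(d)
y = X @ beta + 0.5 * rng.standard_normal(n)

def min_norm(Xs, ys):
    """Min-norm least squares fit on one bootstrap sample."""
    return np.linalg.pinv(Xs) @ ys

fits = []
for _ in range(B):
    idx = rng.integers(0, n, size=n)   # uniform sampling with replacement
    fits.append(min_norm(X[idx], y[idx]))
fits = np.array(fits)
beta_bag = fits.mean(axis=0)           # the bagged estimator

err_bag = np.sum((beta_bag - beta) ** 2)
err_ind = np.mean(np.sum((fits - beta) ** 2, axis=1))
print(err_bag <= err_ind)              # True, by Jensen's inequality
```

Here the error is measured in plain $\ell_{2}$ norm for simplicity; under isotropic features this matches the $\Sigma$-weighted risk used later in the paper up to expectation.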
We shall call the resulting estimator the bagged least square estimator, or simply the bagged estimator (we use “bootstrap estimator” interchangeably with “bagged estimator”). Specifically, we study the generalization performance, measured by the out-of-sample prediction risks, of bagged least square interpolators when the dimensionality scales proportionally with the sample size, $d\asymp n$. This allows us to understand how bagging stabilizes and thus improves the generalization performance. We summarize our main contributions below.
1. First, we introduce a new bagged least square estimator using a multiplier bootstrap approach, and formulate the former as an average of sketched least square estimators. This approach includes the classical bootstrap with replacement as a special case and introduces an intriguing variant which we call the Bernoulli bootstrap. Remarkably, bagged estimators based on these two bootstrap methods yield identical limiting out-of-sample prediction risks when the success probability in the Bernoulli bootstrap is taken as $1-1/e\approx 0.632$.
2. Second, we provide precise formulas for the limiting out-of-sample prediction risks of both sketched and bagged least square estimators. This unveils the statistical roles of sketching and bagging in terms of the generalization performance. While sketching modifies the aspect ratio and shifts the interpolation threshold of the min-norm estimator, the prediction risk still spikes near the interpolation threshold. Remarkably, bagging consistently stabilizes and improves the risks of an individual sketched estimator, and keeps the risk of the bagged estimator bounded. This benefit arises from the fact that the bagged estimator is no longer an interpolator.
3. Third, we find that the out-of-sample prediction risk of the sketched estimator depends on the sampling distribution of the multipliers, while the risk of the bagged least square estimator is invariant to it.
This demonstrates a certain robustness of bagging. For sketching, we establish the optimality of Bernoulli sketching among various sketching methods. Numerical studies show that Bernoulli sketching is computationally faster than other methods.
4. Fourth, we prove that bagging acts as a form of implicit regularization. Specifically, we establish that the bagged estimator is equivalent to a ridge regression estimator under isotropic features. Under more general covariance matrices, we prove an equivalence between the bagged least square estimator under model (1.1) and the min-norm least square estimator under a different model with a shrunken feature covariance and a shrunken signal.
5. Fifth, in comparison to the classical bootstrap, which requires multiple passes over the training data and is challenging to parallelize due to the random selection of $n$ elements, the Bernoulli bootstrap stands out as a more efficient alternative. Specifically, when generating a single bootstrap sample, instead of randomly drawing from the sample data with replacement, each data point is assigned a random Bernoulli weight. The Bernoulli bootstrap is particularly advantageous for large values of $n$ due to its lower number of data passes.
6. Lastly, we characterize the training error and adversarial risk of the bagged estimator. The training error is linear in the out-of-sample prediction risk: the better the rescaled training error, the better the prediction risk. Thus, the bagged estimator with the smallest rescaled training error generalizes the best. Compared with the full-sample min-norm estimator, bagging also helps stabilize the adversarial risk, in part by shrinking the $\ell_{2}$ norm of the estimator.
Related work
We briefly review work that is closely related to ours.
Benign generalization of overparameterized models
Overparameterized neural networks often exhibit benign generalization performance, even when they are trained without explicit regularization (He et al., 2016; Neyshabur et al., 2014; Canziani et al., 2016; Novak et al., 2018; Zhang et al., 2021; Bartlett et al., 2020; Liang and Rakhlin, 2020). Belkin et al. (2019) furthered this line of research by positing the “double descent” performance curve applicable to models beyond neural networks. This curve subsumes the textbook U-shaped bias-variance tradeoff curve (Hastie et al., 2009), illustrating how increasing model capacity beyond some interpolation threshold may yield improved out-of-sample prediction risks. Subsequent studies characterized this double descent phenomenon for various simplified models (Hastie et al., 2022; Ba et al., 2020; Richards et al., 2021; Mei and Montanari, 2022).
Sketching
Raskutti and Mahoney (2016) and Dobriban and Liu (2019) first studied the statistical performance of several sketching methods in the underparameterized regime. More recently, Chen et al. (2023) analyzed the out-of-sample prediction risks of the sketched ridgeless least square estimators in both underparameterized and overparameterized regimes. They introduced an intriguing perspective: Sketching serves as the dual of overparameterization and may help generalize. However, the sketching matrices considered there, such as orthogonal and i.i.d. sketching matrices, are of full rank. In contrast, we focus on diagonal and mostly singular sketching matrices of the form (2.3), which complements their results. Here, each sketched dataset is constructed by multiplying each observation in the training dataset by a random multiplier. This approach is more computationally efficient than multiplying the data matrix by the often dense sketching matrix as done in Chen et al. (2023), which is also supported by our numerical studies.
Bootstrap and bagging
Bootstrap, a resampling technique introduced by Efron (1979) and inspired by the jackknife method (Quenouille, 1949), finds extensive applications in estimating and inferring sampling distributions, such as that of the regression coefficient $\beta$ in model (1.1). However, as the aspect ratio $d/n$ approaches the interpolation threshold, the regression coefficient $\beta$ quickly becomes unidentifiable, rendering its estimation and inference a groundless task. Indeed, El Karoui and Purdom (2018) concluded that it is perhaps not possible to develop universal and robust bootstrap techniques for inferring $\beta$ in high dimensions. This negative observation, with hindsight, is not surprising, because identifiability issues for parameter estimation arise in high dimensions, while prediction tasks are free of such issues. Bühlmann and Yu (2002) established that bagging acts as a smoothing operation for hard decision problems. In contrast, we analyze bagged least square interpolators, showing how bagging can stabilize the variance and benefit generalization properties.
Paper overview
The rest of this paper proceeds as follows. We first introduce the notation that is used throughout the paper. Section 2 presents the proposed bagged estimator, related definitions, and standing assumptions. In Section 3, we study the out-of-sample prediction risks in the context of isotropic features. Section 4 delves into the analysis of correlated features. In Section 5, we extend the results to the deterministic signal case and characterize the training errors and adversarial risks. Finally, Section 6 concludes the paper with discussions. All proofs are collected in the supplementary material.
Notation
We use $c$ and $C$ to denote generic constants which may change from line to line.
For a sequence $a_{n}$, $a_{n}\rightarrow a^{-}$ or $a_{n}\nearrow a$ denotes that $a_{n}$ goes to $a$ from the left side, while $a_{n}\rightarrow a^{+}$ or $a_{n}\searrow a$ denotes that $a_{n}$ goes to $a$ from the right side. For a vector $u$ and any $p\geq 1$, $\|u\|_{p}$ denotes its $\ell_{p}$ norm. For a matrix $A$, we use $A^{+}$ to denote its Moore-Penrose pseudoinverse, $\|A\|_{2}$ to denote its spectral norm, $\|A\|_{\mathrm{\scriptstyle F}}$ to denote its Frobenius norm, and $\textrm{tr}(A)$ to denote its trace. For a sequence of random variables $\{X_{n}\}$, we use $X_{n}\overset{{\rm a.s.}}{\rightarrow}X$ to denote that $X_{n}$ converges almost surely to $X$, and $X_{n}\rightsquigarrow X$ to denote that $X_{n}$ converges in distribution to $X$.
2 Bagged linear interpolators
This section introduces the multiplier bootstrap procedure and the associated bagged ridgeless least square estimator, formally defines the out-of-sample prediction risk, and states standing assumptions.
2.1 Bagged linear interpolators
We introduce the multiplier-bootstrap-based bagged ridgeless least square estimator. Recall the training dataset $\mathcal{D}:=\{(x_{i},y_{i})\in\mathbb{R}^{d}\times\mathbb{R}:1\leq i\leq n\}$. Let $\mathcal{W}_{k}=\{w_{k,1},\ldots,w_{k,n}\}$ be $n$ non-negative multipliers.
Multiplying each summand of $L_{n}(b)=\sum_{i=1}^{n}(y_{i}-x_{i}^{\top}b)^{2}$ by the corresponding multiplier in $\mathcal{W}_{k}$ and summing up, we obtain the bootstrapped empirical loss $$\displaystyle L_{n}^{(k)}(b):=\sum_{i=1}^{n}w_{k,i}(y_{i}-x_{i}^{\top}b)^{2}.$$ We then calculate the bootstrap estimator $\widehat{\beta}_{k}$ as the min-norm solution that minimizes the above loss $$\displaystyle\widehat{\beta}_{k}$$ $$\displaystyle=\mathop{\mathrm{argmin}}\Big{\{}\|b\|_{2}:~{}b~{}\text{minimizes}~{}L_{n}^{(k)}(b)=\sum_{i=1}^{n}w_{k,i}(y_{i}-x_{i}^{\mathrm{\scriptscriptstyle T}}b)^{2}\Big{\}}.$$ (2.1) This procedure is repeated $B$ times to obtain a sequence of estimators $\big{\{}\widehat{\beta}_{k}:\,1\leq k\leq B\big{\}}$. The bagged least square estimator, or simply the bagged estimator, is obtained by simply averaging these $B$ estimators as $$\widehat{\beta}^{B}=\frac{1}{B}\sum_{k=1}^{B}\widehat{\beta}_{k}.$$ (2.2) The above bagged estimator $\widehat{\beta}^{B}$ can be formulated as an average of sketched ridgeless least square estimators. To see this, it suffices to show that each individual estimator $\widehat{\beta}_{k}$ is a sketched estimator.
Let $S_{k}$ be the sketching matrix such that $$S_{k}=\begin{bmatrix}\sqrt{w_{k,1}}&0&\ldots&0\\ 0&\sqrt{w_{k,2}}&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&\sqrt{w_{k,n}}\end{bmatrix}.$$ (2.3) Then the sketched dataset $\mathcal{D}_{k}=(S_{k}Y,S_{k}X)$ corresponds to the $k$-th bootstrap sample in the multiplier bootstrap, and the $k$-th individual bootstrap estimator $\widehat{\beta}_{k}$ coincides with the sketched ridgeless least square estimator fitted on $\mathcal{D}_{k}$: $$\displaystyle\widehat{\beta}_{k}$$ $$\displaystyle=\mathop{\mathrm{argmin}}\Big{\{}\|b\|_{2}:~{}b~{}\text{minimizes}~{}\sum_{i=1}^{n}w_{k,i}(y_{i}-x_{i}^{\mathrm{\scriptscriptstyle T}}b)^{2}\Big{\}}$$ $$\displaystyle=\mathop{\mathrm{argmin}}\left\{\|b\|_{2}:~{}b~{}\text{minimizes}~{}\left\|S_{k}Y-S_{k}Xb\right\|_{2}^{2}\right\}$$ $$\displaystyle=\left((S_{k}X)^{\top}S_{k}X\right)^{+}X^{\top}S_{k}^{\top}S_{k}Y.$$ (2.4) Finally, our multiplier bootstrap framework encompasses the classical bootstrap with replacement as a specific case. In the classical bootstrap, each bootstrap sample $\mathcal{D}_{k}$ is generated by independently and uniformly sampling from $\mathcal{D}$ with replacement. This sampling method permits the possibility of individual observations being repeated within $\mathcal{D}_{k}$. When the sample size $n$ is sufficiently large, $\mathcal{D}_{k}$ is expected to contain approximately $1-1/e\approx 63.2\%$ distinct examples from $\mathcal{D}$. In this case, each bootstrap sample $\mathcal{D}_{k}$ corresponds to the sketched dataset $(S_{k}X,S_{k}Y)$, where the sketching matrix $S_{k}$ has the multipliers $(w_{k,1},\ldots,w_{k,n})\sim{\rm Multinomial}(n;p_{1},\ldots,p_{n})$ with $p_{i}=1/n$ for $1\leq i\leq n$ as its diagonal entries.
2.2 Risk, bias, and variance
Let us consider a test data point $x_{\textrm{new}}\sim P_{x}$, which is independent of both the training data and the multipliers $\{\mathcal{W}_{k}:1\leq k\leq B\}$.
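The chain of equalities in (2.4) is easy to verify numerically: the min-norm fit on the sketched data $(S_{k}Y,S_{k}X)$ coincides with the closed form $\left((S_{k}X)^{\top}S_{k}X\right)^{+}X^{\top}S_{k}^{\top}S_{k}Y$. A small check of ours (the dimensions and the Bernoulli multipliers with success probability $1-1/e$ are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 30, 50
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

# Bernoulli multipliers: each observation kept with probability 1 - 1/e.
w = rng.binomial(1, 1.0 - np.exp(-1.0), size=n).astype(float)
S = np.diag(np.sqrt(w))                 # sketching matrix as in (2.3)

# Min-norm fit on the sketched dataset (S y, S X).
beta_sketch = np.linalg.pinv(S @ X) @ (S @ y)

# Closed form from (2.4).
A = S @ X
beta_closed = np.linalg.pinv(A.T @ A) @ (X.T @ S.T @ S @ y)

print(np.allclose(beta_sketch, beta_closed))   # True
```

The agreement reflects the pseudoinverse identity $(A^{\top}A)^{+}A^{\top}=A^{+}$ applied to $A=S_{k}X$.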
To measure the generalization performance, we consider the following out-of-sample prediction risk, also referred to as the prediction risk or simply the risk: $$R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\mathbb{E}\left[\left(x_{\textrm{new}}^{\top}\widehat{\beta}^{B}-x_{\textrm{new}}^{\top}\beta\right)^{2}\;\middle|\;X,\,\mathcal{W},\,\beta\right]=\mathbb{E}\left[\left\|\widehat{\beta}^{B}-\beta\right\|_{\Sigma}^{2}\;\middle|\;X,\,\mathcal{W},\,\beta\right],$$ (2.5) where $\|x\|_{\Sigma}^{2}=x^{\mathrm{\scriptscriptstyle T}}\Sigma x$, and the conditional expectation is taken with respect to the noises $\{\varepsilon_{i}\}_{1\leq i\leq n}$ and the test point $x_{\textrm{new}}$. We have the following bias-variance decomposition $$\displaystyle R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle=\big{\|}\mathbb{E}\left(\widehat{\beta}^{B}\;\middle|\;X,\,\mathcal{W},\,\beta\right)-\beta\big{\|}_{\Sigma}^{2}+\textrm{tr}\left[{\mathrm{cov}}\left(\widehat{\beta}^{B}\;\middle|\;X,\,\mathcal{W},\,\beta\right)\Sigma\right]$$ $$\displaystyle=B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})+V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B}).$$ Our next result provides expressions for the bias and variance of the bagged least square estimator (2.2). In the subsequent sections, we will characterize the out-of-sample prediction risks by analyzing the bias and variance terms, respectively.
Lemma 2.1 (Bias-variance decomposition).
Under model (1) with ${\mathrm{cov}}(x)=\Sigma$, the bias and variance of the bagged linear regression estimator $\widehat{\beta}^{B}$ are $$\displaystyle B_{X,\,\mathcal{W},\,\beta}\left(\widehat{\beta}^{B}\right)=\frac{1}{B^{2}}\sum_{k,\ell}\beta^{\top}\Pi_{k}\Sigma\Pi_{\ell}\beta,$$ (2.6) $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\frac{\sigma^{2}}{B^{2}}\sum_{k,\ell}\frac{1}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma\right),$$ (2.7) where $\widehat{\Sigma}_{k}=X^{\top}S_{k}^{\top}S_{k}X/n$ is the sketched sample covariance matrix, and $\Pi_{k}=I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}$ is the projection matrix onto the null space of $S_{k}X$. 2.3 Standing assumptions This subsection collects standing assumptions. Assumption 1 (Proportional asymptotic regime). Assume $n,d\rightarrow+\infty$ such that $d/n\rightarrow\gamma\in(0,+\infty)$. Assumption 2 (Moment and covariance conditions). Assume that the feature vector $x$ can be written as $x=\Sigma^{1/2}z$, where $z\in\mathbb{R}^{d}$ has i.i.d. entries, each with a zero mean, unit variance, and a bounded $(8+\epsilon)$-th moment. Moreover, the eigenvalues of $\Sigma$ are bounded away from zero and infinity, i.e., $0<c_{\lambda}\leq\lambda_{\min}(\Sigma)<\lambda_{\max}(\Sigma)\leq C_{\lambda}<+\infty$ where $c_{\lambda}$ and $C_{\lambda}$ are constants. The empirical spectral distribution $F_{\Sigma}$ of $\Sigma$ converges weakly to a probability measure $H$ as $n,d\rightarrow+\infty$. Assumption 1 specifies the proportional asymptotic regime and is frequently adopted by recent literature on exact risk characterizations (Hastie et al.,, 2022; Mei and Montanari,, 2022; Chen et al.,, 2023). Assumption 2 specifies the covariance structure and moment conditions. 
We require a bounded $(8+\epsilon)$-th moment for each entry of $z_{i}$, which is slightly stronger than the $(4+\epsilon)$-th moment condition by Hastie et al., (2022). This is because we need to derive a uniform concentration inequality on quadratic forms to establish results on the Stieltjes transforms; see Lemma S.6.9. Our next assumption is on the multipliers. Assumption 3 (Multipliers). Assume the multipliers $\{\mathcal{W}_{k}:1\leq k\leq B\}$ are non-negative, independent of the training dataset $(X,Y)$, and the non-zero multipliers are bounded away from zero, i.e., there exists a positive constant $c_{w}$ such that $$\mathbb{P}\left(w_{k,i}\in\{0\}\cup[c_{w},+\infty)\right)=1,~{}\text{for all}~{}1\leq k\leq B,\,1\leq i\leq n.$$ Moreover, the empirical measure of the multipliers $\mathcal{W}_{k}$ converges weakly to some probability measure $\mu_{w}$ as $n\rightarrow\infty$ almost surely (we say $\mu_{n}$ converges weakly to $\mu$ almost surely if, almost surely, $\lim\int g\,d\mu_{n}=\int g\,d\mu$ for each continuous bounded function $g$). We refer to $\theta:=1-\mu_{w}(\{0\})>0$ as the downsampling ratio. Additionally, assume $\mathcal{W}_{1},\ldots,\mathcal{W}_{B}$ are asymptotically pairwise independent, i.e., the joint empirical measure of any two distinct sets of multipliers $\mathcal{W}_{k},\mathcal{W}_{\ell}$ with $k\neq\ell$ converges weakly to the product measure $\mu_{w}\otimes\mu_{w}$ almost surely. If $\{\mathcal{W}_{k},1\leq k\leq B\}$ are independently and identically distributed, then they are also pairwise independent. In addition to requiring the multipliers to be pairwise independent, non-negative, and independent of the training data, Assumption 3 follows a similar spirit as Assumption 2 by assuming that the empirical measure of the multipliers converges weakly to a limiting probability measure $\mu_{w}$ almost surely. 
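As a quick numerical illustration of the downsampling ratio (anticipating Lemma 2.2 below), the fraction of zero multinomial multipliers in the classical bootstrap concentrates around $e^{-1}$; the sample size below is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
# Multinomial multipliers of the classical bootstrap with replacement.
w = rng.multinomial(n, np.full(n, 1.0 / n))

frac_zero = np.mean(w == 0)      # empirical mass of mu_w at 0
theta = 1.0 - frac_zero          # empirical downsampling ratio
# The Poisson(1) limit predicts P(w = 0) = e^{-1}, hence theta ~ 0.632.
assert abs(frac_zero - np.exp(-1.0)) < 0.01
assert abs(theta - (1 - 1 / np.e)) < 0.01
```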
The limiting measure $\mu_{w}$ consists of point masses either at $0$ or in an interval bounded away from $0$. The downsampling ratio $\theta=1-\mu_{w}(\{0\})$ quantifies the long-run proportion of nonzero multipliers, reflecting the fraction of training samples picked up by each bootstrap sample. As $\gamma/\theta$ corresponds to the aspect ratio within each bootstrap sample, we shall refer to $\gamma/\theta<1$ and $\gamma/\theta>1$ as the underparameterized and overparameterized regimes, respectively. Before presenting some examples of the multiplier bootstrap procedures that satisfy Assumption 3, we need the following lemma. Lemma 2.2. Let $\widetilde{\mu}_{w}$ be the empirical measure of $(w_{k,1},\ldots,w_{k,n})\sim{\rm Multinomial}(n;1/n,\ldots,1/n)$ and $\text{Poisson}(1)$ be the Poisson distribution with parameter $1$. Then, almost surely, we have $${\widetilde{\mu}_{w}}\rightsquigarrow\text{Poisson}(1),~{}\text{as}~{}n,d\rightarrow+\infty.$$ Example 2.1 (Classical bootstrap with replacement (Efron,, 1979)). The classical bootstrap with replacement is equivalent to the multiplier bootstrap with i.i.d. $\mathcal{W}_{1},\ldots,\mathcal{W}_{B}$ such that $\mathcal{W}_{k}=(w_{k,1},\ldots,w_{k,n})\sim{\rm Multinomial}(n;1/n,\ldots,1/n)$. Then Lemma 2.2 indicates that the empirical measure of the multipliers $(w_{k,1},\ldots,w_{k,n})$ converges weakly to a Poisson distribution with parameter $1$ almost surely. Consequently, the downsampling ratio is $\theta=1-1/e\approx 0.632>0$. Example 2.2 (Bernoulli bootstrap). The Bernoulli bootstrap samples i.i.d. multipliers $w_{i,j}$ from the Bernoulli distribution with success probability $p$. In this case, the downsampling ratio is $\theta=p$. Example 2.3 (Jackknife (Quenouille,, 1949)). The jackknife method samples multipliers $\{w_{k,j}\}$ such that exactly one multiplier equals zero, while all the others equal one. 
In this case, the multipliers $\{\mathcal{W}_{k},1\leq k\leq B\}$ are pairwise independent and the downsampling ratio $\theta$ equals $1$. Lastly, we assume that the true signal vector is isotropic. Assumption 4 (Random signal). The true signal $\beta$ is a random vector with i.i.d. entries, $\mathbb{E}[\beta]=0$, $\mathbb{E}\big{[}d\beta_{j}^{2}\big{]}=r^{2}$, and $\mathbb{E}\big{[}|\beta_{j}|^{4+\eta}\big{]}\leq C$ for some $\eta>0$ and $C<\infty$. Moreover, the random $\beta$ is independent of the training data $(X,E)$ and the multipliers $\mathcal{W}_{k},1\leq k\leq B$. We first focus on the case of a random $\beta$ as specified in Assumption 4, where $\beta$ follows an isotropic distribution. This assumption facilitates a clear presentation of the exact risk results. Such an assumption is commonly adopted in the literature (Dobriban and Wager,, 2018; Li et al.,, 2021). We also consider the case of a deterministic $\beta$ in Section 5, where the interplay between $\beta$ and $\Sigma$ needs to be taken into account. Under Assumption 4, we present the following simplified version of Lemma 2.1. Lemma 2.3. Assume model (1) with ${\mathrm{cov}}(x)=\Sigma$ and Assumption 4. If $\Sigma$ has bounded eigenvalues, then the bias of the bagged linear regression estimator $\widehat{\beta}^{B}$ is $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}\left(\widehat{\beta}^{B}\right)=\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{B^{2}}\sum_{k,\ell}\frac{1}{d}\textrm{tr}\left(\Pi_{k}\Sigma\Pi_{\ell}\right){\quad\rm a.s.}$$ (2.8) The variance is the same as in Lemma 2.1. 3 A warm-up: Isotropic features As a warm-up, this section studies the case of isotropic features where the covariance matrix is an identity matrix $\Sigma=I$. The investigation of the correlated case will be postponed to Section 4. 
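Since both sides of Lemma 2.1 are explicit functions of $X$, the multipliers, and $\sigma^{2}$, its formulas can be checked mechanically at finite sample sizes. The sketch below (the dimensions and the Bernoulli multipliers are illustrative choices of ours) verifies (2.6) and (2.7) against the defining bias and variance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, B, sigma2 = 40, 15, 3, 1.0
X = rng.standard_normal((n, d))
Sigma = np.eye(d)                                     # test-point covariance

# Bernoulli(0.25) multipliers make the sketched covariances rank-deficient,
# so the projections Pi_k onto the null spaces are nonzero.
W = rng.binomial(1, 0.25, size=(B, n)).astype(float)
covs = [X.T @ (W[k][:, None] * X) / n for k in range(B)]   # sketched covariances
pinvs = [np.linalg.pinv(C) for C in covs]
Pi = [np.eye(d) - pinvs[k] @ covs[k] for k in range(B)]    # null-space projections

# Variance: formula (2.7) versus tr(cov(beta^B | X, W) Sigma),
# using beta^B = (1/B) sum_k A_k Y with A_k = pinv(Sigma_k) X^T diag(w_k) / n.
V_formula = sum(
    np.trace(pinvs[k] @ X.T @ np.diag(W[k] * W[l]) @ X @ pinvs[l] @ Sigma) / n**2
    for k in range(B) for l in range(B)) * sigma2 / B**2
A_bar = sum(pinvs[k] @ (X.T * W[k]) / n for k in range(B)) / B
V_direct = sigma2 * np.trace(A_bar @ A_bar.T @ Sigma)
assert np.isclose(V_formula, V_direct, rtol=1e-6, atol=1e-8)

# Bias: formula (2.6) versus || E(beta^B | X, W, beta) - beta ||_Sigma^2,
# using E(beta_k | X, W, beta) = (I - Pi_k) beta.
beta = rng.standard_normal(d)
B_formula = sum(beta @ Pi[k] @ Sigma @ Pi[l] @ beta
                for k in range(B) for l in range(B)) / B**2
mean_beta = sum((np.eye(d) - Pi[k]) @ beta for k in range(B)) / B
B_direct = (mean_beta - beta) @ Sigma @ (mean_beta - beta)
assert np.isclose(B_formula, B_direct, rtol=1e-6, atol=1e-8)
```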
We first present the limiting risk of the sketched min-norm least square estimator $\widehat{\beta}^{1}$, i.e., $\widehat{\beta}^{B}$ with $B=1$, and then the risk of the bagged min-norm least square estimator $\widehat{\beta}^{B}$. These risk characterizations shed light on how bagging stabilizes and thus improves the generalization performance. 3.1 Sketching shifts the interpolation threshold This subsection studies the risk of $\widehat{\beta}^{1}$ when the sketching matrix, in the form of (2.3), is diagonal and mostly singular. While Chen et al., (2023) also explored the exact risks of sketched ridgeless least square estimators, they required the sketching matrices to be of full rank, which distinguishes their setting from ours. In order to characterize the underparameterized variance, we need the following lemma. Lemma 3.1. Assume Assumptions 1-3 and $\Sigma=I$. Suppose $\gamma<\theta$. Then the following equation has a unique positive solution $c_{0}:=c_{0}(\gamma,\mu_{w})$ with respect to $x$, $$\int\frac{1}{1+xt}\mu_{w}(dt)=1-\gamma.$$ (3.1) The above lemma establishes the existence and uniqueness of a positive solution to equation (3.1). Equations of this type are known as self-consistent equations (Bai and Silverstein,, 2010) and are fundamental in calculating the exact risks. Their solutions generally do not admit closed forms but can be computed numerically. Our next result characterizes the limiting risk, as well as the limiting bias and variance, of $\widehat{\beta}^{1}$. Both the variance and risk in the underparameterized regime depend on the solution to equation (3.1). Theorem 3.2 (Sketching under isotropic features). Assume Assumptions 1-4 and $\Sigma=I$. 
The out-of-sample prediction risk of $\widehat{\beta}^{1}$ satisfies $$\displaystyle\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\begin{cases}\sigma^{2}\left(\frac{\gamma}{1-\gamma-f(\gamma)}-1\right),&\gamma/\theta<1\\ r^{2}\frac{\gamma/\theta-1}{\gamma/\theta}+\sigma^{2}\frac{1}{\gamma/\theta-1},&\gamma/\theta>1\end{cases}{\quad\rm a.s.}$$ (3.2) where $f(\gamma)=\int\frac{1}{(1+c_{0}t)^{2}}\mu_{w}(dt),$ and the constant $c_{0}$ is the same as in Lemma 3.1. Specifically, the bias and variance satisfy $$\displaystyle B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\begin{cases}0,&\gamma/\theta<1\\ r^{2}\frac{\gamma/\theta-1}{\gamma/\theta},&\gamma/\theta>1\end{cases},\quad V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})\overset{{\rm a.s.}}{\rightarrow}\begin{cases}\sigma^{2}\left(\frac{\gamma}{1-\gamma-f(\gamma)}-1\right),&\gamma/\theta<1\\ \sigma^{2}\frac{1}{\gamma/\theta-1},&\gamma/\theta>1\end{cases}.$$ We first compare the limiting risk of the sketched min-norm estimator $\widehat{\beta}^{1}$ with that of the min-norm estimator $\widehat{\beta}^{\rm mn}$ under isotropic features. The latter’s risk is given by Hastie et al., (2022): $$\displaystyle\lim_{n,d\rightarrow+\infty}R_{X,\,\beta}(\widehat{\beta}^{\rm mn})\ =\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<1\\ r^{2}\frac{\gamma-1}{\gamma}+\sigma^{2}\frac{1}{\gamma-1},&\gamma>1\end{cases}{\quad\rm a.s.}$$ (3.3) where $R_{X,\,\beta}$ is the same as $R_{X,\,\mathcal{W},\,\beta}$ but without conditioning on $\mathcal{W}$. Comparing these two risks, we observe that, in the overparameterized regime, sketching alters the aspect ratio from $\gamma$ to $\gamma/\theta$, and shifts the interpolation threshold from $\gamma=1$ to $\gamma/\theta=1$. 
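Equation (3.1) is straightforward to solve numerically. For Bernoulli multipliers, $\mu_{w}=(1-\theta)\delta_{0}+\theta\delta_{1}$, it even admits the closed form $c_{0}=\gamma/(\theta-\gamma)$ for $\gamma<\theta$, and the resulting $f(\gamma)$ reproduces the underparameterized risk $\sigma^{2}(\gamma/\theta)/(1-\gamma/\theta)$. A bisection sketch (the discrete-measure parametrization and function names are ours):

```python
import numpy as np

def solve_c0(gamma, atoms, probs):
    """Solve (3.1), sum_t probs[t]/(1 + x*atoms[t]) = 1 - gamma, by bisection,
    for a discrete limiting multiplier measure mu_w; requires gamma < theta."""
    lhs = lambda x: np.sum(probs / (1 + x * atoms)) - (1 - gamma)
    lo, hi = 0.0, 1.0
    while lhs(hi) > 0:          # the left-hand side is decreasing in x
        hi *= 2
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Bernoulli(theta) multipliers: mu_w = (1-theta) delta_0 + theta delta_1.
gamma, theta = 0.3, 0.6
atoms = np.array([0.0, 1.0])
probs = np.array([1 - theta, theta])
x = solve_c0(gamma, atoms, probs)
assert abs(x - gamma / (theta - gamma)) < 1e-8       # closed form c_0

# f(gamma) then recovers sigma^2 * (gamma/theta) / (1 - gamma/theta).
f = np.sum(probs / (1 + x * atoms) ** 2)
risk = gamma / (1 - gamma - f) - 1                   # in units of sigma^2
assert abs(risk - (gamma / theta) / (1 - gamma / theta)) < 1e-8
```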
However, despite these modifications, the risk still explodes as $\gamma/\theta$ approaches the interpolation threshold from the right side, i.e., as $\gamma/\theta\searrow 1$. This observation concurs with the findings by Chen et al., (2023), who focused on orthogonal and i.i.d. sketching. In the underparameterized regime, however, the limiting risk of $\widehat{\beta}^{1}$ relies on the limiting distribution of the multipliers $\mu_{w}$ and is, therefore, different for different multipliers. Our next result demonstrates that the risk also becomes unbounded as ${\gamma/\theta\nearrow 1}$, i.e., as $\gamma/\theta$ approaches $1$ from the left side. This, in conjunction with the earlier discussion, indicates that the risk of $\widehat{\beta}^{1}$ explodes as $\gamma/\theta$ approaches $1$ from either side. Corollary 3.3. For any probability measure $\mu_{w}$ satisfying Assumption 3 and sampling ratio $\gamma<\theta\leq 1$, the function $f(\gamma)$ satisfies $\lim_{\gamma/\theta\nearrow 1}f(\gamma)=1-\gamma.$ Consequently, the asymptotic risk of $\widehat{\beta}^{1}$ explodes when $\gamma/\theta\nearrow 1$, i.e., $\lim_{\gamma/\theta\nearrow 1}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=+\infty.$ Since the risk of $\widehat{\beta}^{1}$ in the underparameterized regime depends on the limiting distribution of the multipliers, we consider three different types of multipliers and compute the risks of the associated sketched estimators. Specifically, we consider Jackknife multipliers, Bernoulli multipliers, and multinomial multipliers, where the multinomial multipliers correspond to the classical bootstrap with replacement. We will refer to the corresponding sketching methods as Jackknife sketching, Bernoulli sketching, and multinomial sketching, respectively. Corollary 3.4. Assume Assumptions 1-4, $\Sigma=I$, and $\gamma<\theta\leq 1$. Then the following holds. 
(i) The full-sample min-norm estimator $\widehat{\beta}^{\rm mn}$ satisfies $$\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{\rm mn})=\sigma^{2}\frac{\gamma}{1-\gamma}.$$ (ii) The Bernoulli sketched estimator $\widehat{\beta}^{1}_{\rm{\scriptstyle Bern}}$ with $w_{1,j}\overset{\rm i.i.d.}{\sim}\text{Bernoulli}(\theta)$ satisfies $$\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1}_{\rm{\scriptstyle Bern}})=\sigma^{2}\frac{\gamma/\theta}{1-\gamma/\theta}.$$ (iii) The multinomial sketched estimator $\widehat{\beta}^{1}_{\rm{\scriptstyle multi}}$ with $$\displaystyle(w_{1,1},\ldots,w_{1,n})\sim{\rm Multinomial}(n;1/n,\ldots,1/n)$$ corresponds to the classical bootstrap with replacement, and satisfies $$\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1}_{\rm{\scriptstyle multi}})>\sigma^{2}\frac{\gamma/(1-1/e)}{1-\gamma/(1-1/e)}.$$ (iv) The Jackknife sketched estimator shares the same limiting risk as the full-sample min-norm estimator in item (i). The corollary above confirms that different sketching matrices yield different limiting risks in the underparameterized regime, while they agree in the overparameterized regime. To provide a visual representation, Figure 2 depicts the limiting risk curves as functions of $\gamma$, as well as the finite-sample risks for Bernoulli, multinomial, and Jackknife sketched estimators with $(r,\sigma)=(5,5)$. The symbols (dots, crosses, triangles) indicate the finite-sample risks for $n=400$, $\gamma$ varying in $[0.1,10]$, and $d=[n\gamma]$, whose values are averaged over 100 repetitions. (In all the following figures, we use the same setup for finite-sample risks and omit these details.) Notably, the downsampling ratios for multinomial and Jackknife sketching remain fixed at $1-1/e$ and $1$ respectively. However, in the case of Bernoulli sketching, the downsampling ratio $\theta$ serves as a tuning parameter, which offers more flexibility. 
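The underparameterized variance can also be checked at finite sample sizes: for $0/1$ Bernoulli weights with $\Sigma=I$, a short calculation from (2.7) with $B=1$ (using $S_{1}^{4}=S_{1}^{2}$) gives $V=\sigma^{2}\,\textrm{tr}(\widehat{\Sigma}_{1}^{+})/n$, which should concentrate around $\sigma^{2}(\gamma/\theta)/(1-\gamma/\theta)$. A simulation sketch with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
n, gamma, theta = 2000, 0.3, 0.6
d = int(n * gamma)
X = rng.standard_normal((n, d))
w = rng.binomial(1, theta, size=n).astype(float)   # Bernoulli 0/1 multipliers

# Exact conditional variance of the Bernoulli sketched estimator (Sigma = I,
# sigma^2 = 1): V = tr(Sigma_hat^+) / n with Sigma_hat = X^T diag(w) X / n.
Sig_hat = X.T @ (w[:, None] * X) / n
V = np.trace(np.linalg.pinv(Sig_hat)) / n

limit = (gamma / theta) / (1 - gamma / theta)      # Corollary 3.4(ii), sigma^2 = 1
assert abs(V - limit) < 0.1
```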
Figure 3 compares the Bernoulli and multinomial sketched estimators, as well as the full-sample min-norm estimator and the Jackknife estimator. These comparisons yield two observations. First, the Jackknife sketched estimator and the full-sample min-norm estimator exhibit identical limiting risks, aligning with the aforementioned corollary. Second, and perhaps surprisingly, the multinomial sketched estimator has a slightly worse limiting risk than the Bernoulli sketched estimator with $\theta=1-1/e$ in the underparameterized regime, while the two risks agree in the overparameterized regime. This aligns with Corollary 3.4: in the underparameterized regime, multinomial sketching leads to an increased limiting variance when compared with Bernoulli sketching. This naturally raises the following question: what is the optimal sketching matrix among all sketching matrices of the form (2.3)? We answer this question by leveraging the variance formula in Theorem 3.2. Specifically, the following result establishes the optimality of Bernoulli sketching: it minimizes the variance formula, and thus the limiting risk, among all the sketching techniques considered. Corollary 3.5. Taking Bernoulli multipliers with a downsampling ratio of $\theta$, corresponding to the limiting probability measure $\mu_{w}=(1-\theta)\delta_{0}+\theta\delta_{1}$, minimizes the limiting risk of $\widehat{\beta}^{1}$ in Theorem 3.2 among all choices of multipliers satisfying Assumption 3 with $B=1$ and downsampling ratio $\theta$. The above corollary holds even for the case of correlated features. This universality stems from the fact that, in the underparameterized regime, the risks of a sketched estimator in the correlated and isotropic cases are identical, while in the overparameterized regime the risks in both cases are independent of the multiplier distribution $\mu_{w}$; see Theorem 4.2 in Section 4. 
Recently, Chen et al., (2023) established the optimality of using an orthogonal sketching matrix among sketching matrices that are of full rank. Our findings complement theirs by considering sketching matrices in the form of (2.3), which are predominantly singular. Upon comparing our results to those by Chen et al., (2023), it becomes clear that Bernoulli and orthogonal sketching yield identical limiting risks, thereby sharing the same generalization performance. Computationally, however, Bernoulli sketching is more efficient than its orthogonal counterpart, primarily because it avoids multiplying the data matrix by a dense sketching matrix. Table 1 compares the run times for computing the Bernoulli, orthogonal, and multinomial sketched estimators using an Apple M1 CPU, with data generated in the same way as in Figure 1. The table shows that Bernoulli sketching is faster than all the other sketching methods, especially when the sample size is large. When the sample size increases from $n=400$ to $n=800$, the computational efficiency gain of Bernoulli sketching over orthogonal sketching improves from 26% to 36%. 3.2 Bagging stabilizes the generalization performance This subsection studies the exact risk of the bagged min-norm least square estimator $\widehat{\beta}^{B}$ introduced in Section 2. Recall that our bagged estimator can be formulated as the average of the sketched min-norm least square estimators. Theorem 3.6 (Bagging under isotropic features). Assume Assumptions 1-4 and $\Sigma=I$. 
Then the out-of-sample prediction risk of $\widehat{\beta}^{B}$ satisfies $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<\theta\\ r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)}+\sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}},&\gamma>\theta\end{cases}{\quad\rm a.s.}$$ Specifically, the bias and variance satisfy $$\displaystyle B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\begin{cases}0,&\gamma/\theta<1\\ r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)},&\gamma/\theta>1\end{cases},\quad V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})\overset{{\rm a.s.}}{\rightarrow}\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma/\theta<1\\ \sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}},&\gamma/\theta>1\end{cases}.$$ The above theorem characterizes the limiting risk of the bagged min-norm least square estimator. We make two observations. First, compared with the sketched min-norm least square estimator, one immediate advantage is that bagging makes the risk invariant to the limiting distribution of the multipliers. Second, in contrast to the full-sample and sketched min-norm estimators, whose limiting risks diverge to infinity as $\gamma$ and $\gamma/\theta$ approach $1$ respectively, the limiting risk of the bagged estimator approaches $\sigma^{2}\theta/(1-\theta)$ from both sides and remains bounded by $\max\{r^{2},\sigma^{2}\theta/(1-\theta)\}$ in both regimes, improving the stability and thus the generalization performance. The following corollary proves this rigorously. Corollary 3.7. Suppose $\gamma\neq\theta$. 
The limiting risk of the bagged estimator $\widehat{\beta}^{B}$ in Theorem 3.6 satisfies $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})\leq\sigma^{2}\frac{\theta}{1-\theta}\vee r^{2}.$$ To comprehend how bagging enhances stability over an individual estimator, we begin by comparing the bagged estimator with the sketched estimator. We focus on the Bernoulli sketched estimator due to its optimality. Bagging substantially reduces the limiting variance of the Bernoulli sketched estimator across both regimes $\gamma<\theta$ and $\gamma>\theta$. Specifically, the limiting variance is consistently reduced by a factor of at least $\theta$: $$\displaystyle\frac{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})}{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})}$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\frac{\gamma/(1-\gamma)}{{(\gamma/\theta)}\big{/}(1-\gamma/\theta)}1(\gamma<\theta)+\frac{1/(\gamma/\theta^{2}-1)}{{1}/(\gamma/\theta-1)}1(\gamma>\theta)\leq\theta.$$ Furthermore, as $\gamma\rightarrow\theta$ from either side (with $\theta<1$), the variance reduction becomes even more pronounced, resulting in an order-of-magnitude difference: $$\displaystyle\lim_{\gamma\rightarrow\theta}\lim_{n,d\rightarrow+\infty}\frac{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})}{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})}\overset{{\rm a.s.}}{=}\lim_{\gamma\rightarrow\theta}\frac{\gamma/(1-\gamma)}{{(\gamma/\theta)}\big{/}(1-\gamma/\theta)}1(\gamma<\theta)+\frac{1/(\gamma/\theta^{2}-1)}{{1}/(\gamma/\theta-1)}1(\gamma>\theta)=0.$$ Surprisingly, at least to us, bagging not only reduces variance but also mitigates the implicit bias in the overparameterized regime: $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle\overset{{\rm a.s.}}{=}r^{2}\cdot\frac{\gamma/\theta-1}{\gamma/\theta}\cdot\frac{\gamma/\theta-1}{\gamma/\theta-\theta}\leq
r^{2}\cdot\frac{\gamma/\theta-1}{\gamma/\theta}\overset{{\rm a.s.}}{=}\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1}).$$ We then compare the limiting risk of the bagged estimator with that of the full-sample min-norm estimator. The limiting risks of these two estimators are identical when $\gamma<\theta\leq 1$. However, when $\gamma>\theta$, we have: $$\displaystyle\frac{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})}{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{\rm mn})}\overset{{\rm a.s.}}{\rightarrow}\begin{cases}\frac{\sigma^{2}/{(\gamma/\theta^{2}-1)}}{\sigma^{2}\gamma/(1-\gamma)}\leq\frac{\theta}{\gamma}<1,&1>\gamma>\theta\\ \frac{\sigma^{2}/{(\gamma/\theta^{2}-1)}}{\sigma^{2}/(\gamma-1)}=\frac{\gamma-1}{\gamma/\theta^{2}-1}\leq\theta^{2},&\gamma>1\geq\theta\end{cases}.$$ This indicates that the variance is reduced by at least a factor of $\theta/\gamma$ when $1>\gamma>\theta$, and by at least a factor of $\theta^{2}$ when $\gamma>1\geq\theta$. Moreover, when $\gamma\rightarrow 1$ from either side, we have $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\frac{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})}{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{\rm mn})}\rightarrow\begin{cases}0,&\theta<\gamma\nearrow 1\\ 0,&\gamma\searrow 1>\theta\end{cases}.$$ In other words, as $\gamma$ approaches $1$, the variance of the bagged estimator $\widehat{\beta}^{B}$ becomes of a smaller order compared with that of the full-sample min-norm estimator. Figure 4 plots the limiting risk curves as functions of $\gamma$ and the finite-sample risks for the Jackknife estimator, Bernoulli bootstrap estimators (with a slight abuse of notation, we refer to Bernoulli and classical bootstrap based bagged estimators simply as Bernoulli and classical bootstrap estimators) with different downsampling ratios, and the classical bootstrap estimator, all with $(r,\sigma)=(5,5)$. The symbols mark the finite-sample risks. 
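The bounded risk curves discussed here can be reproduced directly from Theorem 3.6. The helper below (the function name is ours, with $(r,\sigma)=(5,5)$ as in the figures) also checks that both one-sided limits at $\gamma=\theta$ equal $\sigma^{2}\theta/(1-\theta)$ and that the bound of Corollary 3.7 holds on a grid:

```python
import numpy as np

def bagged_risk(gamma, theta, r2, sigma2):
    """Limiting risk of the bagged estimator under Sigma = I (Theorem 3.6)."""
    if gamma < theta:
        return sigma2 * gamma / (1 - gamma)
    return (r2 * (gamma - theta) ** 2 / (gamma * (gamma - theta ** 2))
            + sigma2 * theta ** 2 / (gamma - theta ** 2))

r2, sigma2, theta = 25.0, 25.0, 0.6       # (r, sigma) = (5, 5)
# Both one-sided limits at gamma = theta equal sigma^2 * theta / (1 - theta),
# so the bagged risk stays bounded at the interpolation threshold.
left = bagged_risk(theta - 1e-9, theta, r2, sigma2)
right = bagged_risk(theta + 1e-9, theta, r2, sigma2)
target = sigma2 * theta / (1 - theta)
assert abs(left - target) < 1e-6 and abs(right - target) < 1e-6

# The bound of Corollary 3.7: risk <= max(r^2, sigma^2 * theta / (1 - theta)).
grid = np.linspace(0.01, 10, 2000)
risks = np.array([bagged_risk(g, theta, r2, sigma2) for g in grid])
assert np.all(risks <= max(r2, target) + 1e-8)
```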
It is evident that the limiting risks of the Bernoulli bootstrap estimators with $\theta=0.2$ and $0.6$, as well as the classical bootstrap estimator, remain bounded. On the other hand, the Bernoulli bootstrap estimator with $\theta=1.0$ and the Jackknife estimator are identical to the full-sample min-norm estimator, whose limiting risk becomes unbounded at the interpolation threshold. Figure 5 compares the limiting variances and risks of the bagged and full-sample min-norm estimators, both with $(r,\sigma)=(5,5)$. When $\theta$ is taken as either $0.2$ or $0.6$, both the limiting variance and risk of the bagged estimator are of a smaller order than those of the full-sample min-norm estimator as $\gamma\rightarrow 1$. These experimental results effectively validate our theoretical findings. Finally, Table 2 compares the run times for computing the Bernoulli, orthogonal, and classical bootstrap estimators (we use orthogonal bootstrap to refer to the procedure of generating subsamples of the form $(SX,SY)$, where $S$ is an orthogonal sketching matrix) using an Apple M1 CPU, with data generated in the same way as in Figure 1. The table shows that the Bernoulli bootstrap is faster than all the other bootstrap methods. When the sample size increases from 400 to 800, the computational efficiency gains of the Bernoulli bootstrap over the orthogonal and classical bootstrap improve from 22% and 10% to 47% and 24%, respectively. 3.3 Bagging as implicit regularization This subsection establishes an equivalence between the bagged ridgeless least square estimator and the ridge regression estimator. Lemma 3.8. Assume Assumptions 1-4 and $\Sigma=I$. 
Then $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\mathbb{E}\left[\|\widehat{\beta}^{B}-\widehat{\beta}_{\lambda}\|_{2}^{2}\;\middle|\;X,\,\mathcal{W},\,\beta\right]=0{\quad\rm a.s.}$$ where $\lambda=(1-\theta)(\gamma/\theta-1)\vee 0.$ The lemma above demonstrates that the bagged estimator with downsampling ratio $\theta$ is asymptotically equivalent to the ridge regression estimator with penalty parameter $\lambda=(1-\theta)(\gamma/\theta-1)\vee 0$, in the sense of the expected squared $\ell_{2}$ distance between the two estimators. In essence, bagging acts as a form of implicit regularization. Figure 6 depicts the $\ell_{2}$ normed differences between the bootstrap estimators and their corresponding ridge regression estimators. The observed differences decrease as the number of bootstrap samples $B$ or the sample size $n$ increases. 4 Correlated features This section delves into the analysis of correlated features. In this case, the limiting risks are implicitly determined by some self-consistent equations, which we first introduce. Given an aspect ratio $\gamma>0$, a downsampling ratio $0<\theta\leq 1$, a limiting spectral distribution $H$ of $\Sigma$, and $z\leq 0$, we define $v(z)$ and $\tilde{v}(z)$ as the positive solutions to the self-consistent equations: $$\displaystyle v(z)$$ $$\displaystyle=\left(-z+\frac{\gamma}{\theta}\int\frac{t\,dH(t)}{1+v(z)t}\right)^{-1}\quad\text{and}$$ (4.1) $$\displaystyle\tilde{v}(z)$$ $$\displaystyle=\left(-z+\frac{\gamma}{\theta^{2}}\int\frac{t\,d\tilde{H}(t)}{1+\tilde{v}(z)t}\right)^{-1},$$ (4.2) where $\tilde{H}$ represents the limiting spectral distribution of $(I+k(0)\Sigma)^{-1}\Sigma$, and $k(0)=(1-\theta)v(0)$. Our first result provides the existence, uniqueness, and differentiability of $v(z)$ and $\tilde{v}(z)$. Lemma 4.1. For any $z\leq 0$, equations (4.1) and (4.2) have unique positive solutions $v(z)$ and $\tilde{v}(z)$. 
Moreover, $v(z)$ and $\tilde{v}(z)$ are differentiable for any $z<0$. When $\gamma>\theta$, the limits $v(0):=\lim_{z\rightarrow 0^{-}}v(z)$, $v^{\prime}(0):=\lim_{z\rightarrow 0^{-}}v^{\prime}(z)$, $\tilde{v}(0):=\lim_{z\rightarrow 0^{-}}\tilde{v}(z)$, and $\tilde{v}^{\prime}(0):=\lim_{z\rightarrow 0^{-}}\tilde{v}^{\prime}(z)$ exist. 4.1 Sketching under correlated features We first study the exact risk of the sketched min-norm least square estimator $\widehat{\beta}^{1}$. Recall $f(\gamma)$ from Theorem 3.2. Theorem 4.2 (Sketching under correlated features). Assume Assumptions 1-4. Then the out-of-sample prediction risk of $\widehat{\beta}^{1}$ satisfies $$\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=\begin{cases}\sigma^{2}\left(\frac{\gamma}{1-\gamma-f(\gamma)}-1\right),&\gamma<\theta\\ r^{2}\frac{\theta}{\gamma v(0)}+\sigma^{2}\left(\frac{v^{\prime}(0)}{v(0)^{2}}-1\right),\quad&\gamma>\theta\end{cases}{\quad\rm a.s.}$$ Specifically, the bias and variance satisfy $$\displaystyle B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\begin{cases}0,&\gamma/\theta<1\\ r^{2}\frac{\theta}{\gamma v(0)},&\gamma/\theta>1\end{cases},\quad V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})\overset{{\rm a.s.}}{\rightarrow}\begin{cases}\sigma^{2}\left(\frac{\gamma}{1-\gamma-f(\gamma)}-1\right),&\gamma/\theta<1\\ \sigma^{2}\left(\frac{v^{\prime}(0)}{v(0)^{2}}-1\right),&\gamma/\theta>1\end{cases}.$$ The limiting risk in the presence of correlated features does not admit closed-form expressions in either regime, but it can be computed numerically. In the specific case of $\Sigma=I$, the limiting spectral distribution $H$ simplifies to the Dirac measure $\delta_{1}$. 
This allows us to find the closed-form solutions for $v(0)$ and $v^{\prime}(0)$, which are given by $$\displaystyle v(0)=\frac{\theta}{\gamma-\theta}~{}~{}~{}\text{and}~{}~{}~{}v^{\prime}(0)=\frac{\theta^{2}\gamma}{(\gamma-\theta)^{3}},$$ resulting in the following limiting risk expressions: $$\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=\begin{cases}\sigma^{2}\left(\frac{\gamma}{1-\gamma-f(\gamma)}-1\right),&\gamma<\theta\\ r^{2}\frac{\gamma/\theta-1}{\gamma/\theta}+\sigma^{2}\frac{1}{\gamma/\theta-1},\quad&\gamma>\theta\end{cases}{\quad\rm a.s.}$$ This result is consistent with Theorem 3.2 in Section 3. Essentially, Theorem 4.2 encompasses Theorem 3.2 as a special case. Moreover, Corollary 3.5 remains valid with correlated features, implying that Bernoulli sketching optimizes the limiting risk in Theorem 4.2. To comprehend the role of sketching in the correlated case, we juxtapose the Bernoulli sketched estimator, chosen due to its optimality, with the full-sample min-norm estimator. Let $v(z;x)$ denote the solution to the equation: $$\displaystyle v(z;x)$$ $$\displaystyle=\left(-z+x\int\frac{t\,dH(t)}{1+v(z;x)t}\right)^{-1},~{}\text{for any}~{}x>0.$$ (4.3) With this new notation, $v(z)$ defined via the self-consistent equation (4.1) can be rewritten as $v(z;\gamma/\theta)$. 
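These closed forms can be cross-checked by solving the self-consistent equation numerically near $z=0^{-}$; the fixed-point iteration below is a simple sketch (the iteration count, step sizes, and tolerances are arbitrary choices of ours):

```python
def v_of_z(z, x, iters=500):
    """Fixed-point iteration for (4.3) with H = delta_1 (i.e., Sigma = I):
    v = 1 / (-z + x / (1 + v)), valid for z < 0 and x > 1."""
    v = 1.0
    for _ in range(iters):
        v = 1.0 / (-z + x / (1.0 + v))
    return v

gamma, theta = 2.0, 0.5
x = gamma / theta                # v(z) of (4.1) equals v(z; gamma/theta)
eps = 1e-6
v0 = v_of_z(-eps, x)
v0p = (v_of_z(-eps, x) - v_of_z(-2 * eps, x)) / eps   # crude derivative at 0^-

# Closed forms: v(0) = theta/(gamma-theta), v'(0) = theta^2 gamma/(gamma-theta)^3.
assert abs(v0 - theta / (gamma - theta)) < 1e-4
assert abs(v0p - theta**2 * gamma / (gamma - theta) ** 3) < 1e-2
```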
The limiting risk of the Bernoulli sketched estimator $\widehat{\beta}^{1}_{\rm{\scriptstyle Bern}}$ can then be expressed as: $$\displaystyle\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\begin{cases}\sigma^{2}\frac{\gamma/\theta}{1-\gamma/\theta},&\gamma/\theta<1\\ r^{2}\frac{\theta}{\gamma v(0;\gamma/\theta)}+\sigma^{2}\left(\frac{v^{\prime}(0;\gamma/\theta)}{v(0;\gamma/\theta)^{2}}-1\right),\quad&\gamma/\theta>1\end{cases}{\quad\rm a.s.}$$ (4.4) A variant of (Hastie et al.,, 2022, Theorem 3) characterizes the limiting risk of the full-sample min-norm estimator as $$\displaystyle\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{\rm mn})$$ $$\displaystyle=\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<1\\ r^{2}\frac{1}{\gamma v(0;\gamma)}+\sigma^{2}\left(\frac{v^{\prime}(0;\gamma)}{v(0;\gamma)^{2}}-1\right),\quad&\gamma>1\end{cases}{\quad\rm a.s.}$$ (4.5) By comparing these limiting risks, it is evident that the limiting risk of $\widehat{\beta}^{1}$ corresponds to that of the full-sample min-norm estimator, albeit with the aspect ratio and interpolation threshold modified from $\gamma$ and $\gamma=1$ to $\gamma/\theta$ and $\gamma/\theta=1$ respectively. In other words, sketching alters the aspect ratio and shifts the interpolation threshold, consistent with the findings in the isotropic case. Figure 7 plots the limiting risk curves for Bernoulli, multinomial, and Jackknife sketched estimators with correlated features and $(r,\sigma)=(3,3)$. Each row of $X\in\mathbb{R}^{n\times d}$ is i.i.d. drawn from $\mathcal{N}(0,\Sigma)$ and $\Sigma$ has empirical spectral distribution $F_{\Sigma}(x)=\frac{1}{d}\sum_{i=1}^{d}1(\lambda_{i}(\Sigma)\leq x)$ with $\lambda_{i}=2$ for $i=1,\ldots,[d/2]$, and $\lambda_{i}=1$ for $i=[d/2]+1,\ldots,d$. 
It shows that the limiting risks of Bernoulli sketched estimators have the same shapes as that of the full-sample min-norm estimator but with modified aspect ratios and interpolation thresholds. 4.2 Bagging under correlated features This subsection first studies the out-of-sample prediction risk of the bagged estimator under correlated features, and then establishes an equivalence between the bagged estimator and some full-sample min-norm estimator. We begin with the characterization of the limiting risk. Recall $v(0)$, $\tilde{v}(0)$, and $\tilde{v}^{\prime}(0)$ from Lemma 4.1. Theorem 4.3 (Bagging under correlated features). Assume Assumptions 1-4. Then the out-of-sample prediction risk of $\widehat{\beta}^{B}$ satisfies $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle=\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<\theta\\ r^{2}\frac{\theta}{\gamma v(0)}-r^{2}\frac{(1-\theta)}{\gamma v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right)+\sigma^{2}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right),\quad&\gamma>\theta\end{cases}{\quad\rm a.s.}$$ Specifically, the bias and variance satisfy $$\displaystyle B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\begin{cases}0,&\gamma/\theta<1\\ r^{2}\frac{\theta}{\gamma v(0)}-r^{2}\frac{(1-\theta)}{\gamma v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right),&\gamma/\theta>1\end{cases},$$ $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma/\theta<1\\ \sigma^{2}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right),&\gamma/\theta>1\end{cases}.$$ Similarly to the isotropic case, the limiting risk for the bagged estimator is independent of the choice of multipliers. 
Moreover, in contrast to the full-sample and sketched min-norm estimators whose limiting risks explode at the corresponding interpolation thresholds, the limiting risk of the bagged estimator remains bounded. Consequently, the bagged estimator is more stable than either estimator in terms of generalization performance. This stability improvement comes from the variance reduction property of bagging, especially around the interpolation threshold. When compared with the sketched min-norm estimator, bagging helps reduce the variance by at least a factor of $\theta$ everywhere, and even more substantially around the interpolation threshold. This is characterized by the following corollary. Corollary 4.4. Assume Assumptions 1-2. Then $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\frac{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})}{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})}$$ $$\displaystyle\leq\frac{\theta-\gamma}{1-\gamma}\cdot 1(\gamma<\theta)+\frac{{\tilde{v}^{\prime}(0)}/{\tilde{v}(0)^{2}}-1}{{v^{\prime}(0)}/{v(0)^{2}}-1}\cdot 1\left(\gamma>\theta\right)$$ $$\displaystyle\leq\theta.$$ When $\gamma/\theta\rightarrow 1$ and $\theta\neq 1$, we have $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\frac{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})}{V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})}\rightarrow 0{\quad\rm a.s.}$$ In addition to variance reduction, bagging also contributes to the reduction of implicit bias in the overparameterized regime: $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle\overset{{\rm a.s.}}{=}\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})-\frac{r^{2}(1-\theta)}{\gamma v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right)\leq\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ since ${\tilde{v}^{\prime}(0)}/{\tilde{v}(0)^{2}}-1>0$.
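Both the variance-ratio bound and the bias-reduction term can be verified symbolically under isotropic features; the check below is ours and assumes the isotropic closed forms for $v(0)$, $\tilde{v}(0)$, and $\tilde{v}^{\prime}(0)$ stated in Sections 4.1-4.2.

```python
import sympy as sp

g, t, r = sp.symbols('gamma theta r', positive=True)

# Isotropic closed forms (assumed here, as stated in Sections 4.1-4.2).
v0 = t / (g - t)
vt0 = t**2 / (g - t)
vt0p = g * t**4 / ((g - t) ** 2 * (g - t**2))

v_term = t / (g - t)                       # v'(0)/v(0)^2 - 1
vt_term = sp.simplify(vt0p / vt0**2 - 1)   # reduces to theta^2 / (gamma - theta^2)

# Variance ratio in Corollary 4.4 for gamma > theta: it equals
# theta (gamma - theta) / (gamma - theta^2) <= theta, since gamma - theta <= gamma - theta^2.
ratio = sp.simplify(vt_term / v_term)
assert sp.simplify(ratio - t * (g - t) / (g - t**2)) == 0

# Implicit bias reduction term: r^2 (1 - theta) / (gamma v(0)) * (vt'(0)/vt(0)^2 - 1).
reduction = sp.simplify(r**2 * (1 - t) / (g * v0) * vt_term)
assert sp.simplify(reduction - r**2 * t * (1 - t) * (g - t) / (g * (g - t**2))) == 0
```

The second assertion confirms the closed-form reduction quoted in Section 4.2.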
Under isotropic features, the self-consistent equations can be readily solved, leading to $$\displaystyle v(0)=\frac{\theta}{\gamma-\theta},~{}\tilde{v}(0)=\frac{\theta^{2}}{\gamma-\theta},~{}\textnormal{and}~{}~{}\tilde{v}^{\prime}(0)=\frac{\gamma\theta^{4}}{(\gamma-\theta)^{2}(\gamma-\theta^{2})}.$$ Consequently, the implicit bias in the overparameterized regime is reduced by $$\displaystyle r^{2}\frac{\theta(1-\theta)(\gamma-\theta)}{\gamma(\gamma-\theta^{2})}.$$ Figure 8 plots the limiting risk curves for the Jackknife estimator, Bernoulli and classical bootstrap estimators with correlated features and $(r,\sigma)=(3,3)$. Similar to the isotropic case, bagged estimators with $\theta\neq 1$ have bounded limiting risks. 4.3 Bagging as implicit regularization In this subsection, we establish an equivalence between the bagged estimator under model (1) and the min-norm least square estimator under a different model. Specifically, let $\widehat{\beta}_{\theta}$ be the ridgeless least square estimator obtained using the following generative model $$\displaystyle\tilde{Y}=\tilde{X}\tilde{\beta}+E\in\mathbb{R}^{[\theta^{2}n]},$$ (4.6) where $\tilde{X}=(\tilde{x}_{1},\ldots,\tilde{x}_{[\theta^{2}n]})^{\top}\in\mathbb{R}^{[\theta^{2}n]\times d}$ consists of $[\theta^{2}n]$ i.i.d. feature vectors $\tilde{x}_{i}$ with covariance matrix $\tilde{\Sigma}=(I+k(0)\Sigma)^{-1}\Sigma$, $\tilde{\beta}=(I+k(0)\Sigma)^{-1/2}\beta$, and $E=(\varepsilon_{1},\ldots,\varepsilon_{[\theta^{2}n]})^{\top}\in\mathbb{R}^{[\theta^{2}n]}$ collects the first $[\theta^{2}n]$ noise terms from model (1). Let $\Sigma=\sum_{i=1}^{d}\lambda_{i}u_{i}u_{i}^{\top}$ be the eigenvalue decomposition of the covariance matrix.
For a fixed deterministic signal $\beta$, define the eigenvector empirical spectral distribution (VESD) $G_{n}(s)$ as $$\displaystyle G_{n}(s)=\frac{1}{\tilde{r}^{2}}\sum_{i=1}^{d}\frac{1}{1+k(0)\lambda_{i}}\langle\beta,u_{i}\rangle^{2}\,1\left(\frac{\lambda_{i}}{1+k(0)\lambda_{i}}\leq s\right),$$ where $\tilde{r}^{2}=\beta^{\top}(I+k(0)\Sigma)^{-1}\beta$. We need the following assumption. Assumption 5 (Deterministic signal). The signal $\beta$ is deterministic, and ${G_{n}}$ converges weakly to a probability distribution $G$. Let $R_{\tilde{X}}$ be defined analogously to $R_{X,\,\mathcal{W},\,\beta}$ but conditioning only on $\tilde{X}$. With these definitions, we are now ready to present our main result in this subsection. Corollary 4.5. Assume Assumptions 1-3, and 5. Further assume $\tilde{x}_{i}\,{\sim}\,\tilde{x}=\tilde{\Sigma}^{1/2}z$ with $z$ satisfying Assumption 2. When $\gamma>\theta$, we have $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\lim_{n,d\rightarrow+\infty}R_{\tilde{X}}(\widehat{\beta}_{\theta}){\quad\rm a.s.}$$ The above result reveals that when $\gamma>\theta$, the limiting risk of the bagged estimator under the original model (1) is equivalent to that of the full-sample min-norm estimator under the new generative model (4.6). In this new model, the aspect ratio $\gamma$, the true signal $\beta$, and the covariance matrix $\Sigma$ are replaced by $\gamma/\theta^{2}$, $(I+k(0)\Sigma)^{-1/2}\beta$, and $(I+k(0)\Sigma)^{-1}\Sigma$ respectively. In other words, the new model (4.6) represents features with a shrunken covariance matrix and a shrunken signal. This suggests that bagging serves as a form of implicit regularization. When $\gamma<\theta$, the limiting risk of the bagged estimator agrees with that of the full-sample min-norm estimator under the original model, regardless of the choice of multipliers.
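Both the shrunken model (4.6) parameters and the VESD $G_{n}$ are simple functions of the eigendecomposition of $\Sigma$. The sketch below is ours, with `k0` standing in for $k(0)$ (whose defining fixed-point equation is not reproduced here).

```python
import numpy as np

def shrunken_model_params(Sigma, beta, k0):
    # Covariance and signal of the generative model (4.6):
    # (I + k0*Sigma)^{-1} Sigma and (I + k0*Sigma)^{-1/2} beta.
    lam, U = np.linalg.eigh(Sigma)
    Sigma_t = (U * (lam / (1 + k0 * lam))) @ U.T
    beta_t = U @ ((U.T @ beta) / np.sqrt(1 + k0 * lam))
    return Sigma_t, beta_t

def vesd(beta, Sigma, k0, s):
    # Eigenvector empirical spectral distribution G_n(s): eigenvalue lambda_i is shrunk to
    # lambda_i/(1 + k0*lambda_i) and weighted by <beta, u_i>^2 / (1 + k0*lambda_i).
    lam, U = np.linalg.eigh(Sigma)
    weights = (U.T @ beta) ** 2 / (1 + k0 * lam)
    shrunk = lam / (1 + k0 * lam)
    return np.sum(weights * (shrunk <= s)) / weights.sum()   # weights.sum() equals tilde r^2

# With Sigma = I the covariance and signal are uniformly shrunk,
# and G_n is a point mass at 1/(1 + k0).
Sigma_t, beta_t = shrunken_model_params(np.eye(3), np.ones(3), k0=1.0)
print(np.allclose(Sigma_t, np.eye(3) / 2), np.allclose(beta_t, np.ones(3) / np.sqrt(2)))
print(vesd(np.ones(4), np.eye(4), 1.0, 0.6), vesd(np.ones(4), np.eye(4), 1.0, 0.4))
```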
5 Extensions This section studies the limiting risk in the deterministic signal case, characterizes the training error in terms of the out-of-sample limiting risk, and calculates the adversarial risk, all for the bagged estimator. 5.1 Deterministic signal We first present the limiting risk of the bagged least square estimator when the signal is deterministic as in Assumption 5. Recall $v(0)$, $\tilde{v}(0)$, and $\tilde{v}^{\prime}(0)$ from Lemma 4.1. Theorem 5.1 (Deterministic signal). Assume Assumptions 1-3, and 5. The out-of-sample prediction risk of $\widehat{\beta}^{B}$ satisfies $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<\theta\\ \tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}\,d{G}(s)+\sigma^{2}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right),\quad&\gamma>\theta\end{cases}{\quad\rm a.s.}$$ Specifically, the bias and variance satisfy $$\displaystyle B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\begin{cases}0,&\gamma/\theta<1\\ \tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}\,d{G}(s),&\gamma/\theta>1\end{cases},$$ $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle\overset{{\rm a.s.}}{\rightarrow}\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma/\theta<1\\ \sigma^{2}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right),&\gamma/\theta>1\end{cases}.$$ The theorem above can be viewed as a generalization of Theorem 4.3, accounting for the interaction between $\beta$ and $\Sigma$. In comparison to Theorem 4.3, the only differing term is the implicit bias term in the overparameterized regime. Assuming that $\beta$ satisfies Assumption 4, the following corollary shows that this bias term simplifies to the one in Theorem 4.3.
Consequently, the above theorem recovers Theorem 4.3 as a special case. Corollary 5.2. In addition to the assumptions in Theorem 5.1, assume Assumption 4. Then, we have $$\tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}\,d{G}(s)=r^{2}\frac{\theta}{\gamma v(0)}-r^{2}\frac{(1-\theta)}{\gamma v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right).$$ Consequently, Theorem 5.1 recovers Theorem 4.3 as a special case. 5.2 Training error Let $L(\widehat{\beta}^{B};X,Y)=\frac{1}{n}\|Y-X\widehat{\beta}^{B}\|^{2}_{2}$ be the training error, normalized by the sample size. Then the following result characterizes the training error in terms of the out-of-sample prediction risk. Theorem 5.3 (Training error). Assume Assumptions 1-4. When $\gamma>\theta$, the training error satisfies $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\mathbb{E}\left[L(\widehat{\beta}^{B};X,Y)\Big{|}X\right]=(1-\theta)^{2}\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})+(1-\theta)^{2}\sigma^{2}{\quad\rm a.s.}$$ The above result implies that, in the overparameterized regime, the out-of-sample prediction risk can also be expressed in terms of the training error: $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\frac{1}{(1-\theta)^{2}}\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\mathbb{E}\left[L(\widehat{\beta}^{B};X,Y)\Big{|}X\right]-\sigma^{2}.$$ In other words, the out-of-sample prediction risk of the bagged least square interpolator is linear in the training error: the smaller the rescaled training error (rescaled by $1/(1-\theta)^{2}$), the better the generalization performance. Hence, when employing bagged interpolators in practice, we can simply choose the one with the smallest rescaled training error; computationally expensive procedures such as cross-validation are unnecessary.
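A small Monte Carlo sketch (ours; finite $n$ and $B$, Bernoulli multipliers, isotropic features, and the training error normalized by $n$) illustrates the relation in Theorem 5.3:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, theta, B = 600, 600, 0.5, 300     # gamma = d/n = 1 > theta, so gamma/theta = 2
r, sigma = 1.0, 1.0

beta = rng.normal(size=d)
beta *= r / np.linalg.norm(beta)
X = rng.normal(size=(n, d))
Y = X @ beta + sigma * rng.normal(size=n)

# Bagged estimator: average of B Bernoulli sketched min-norm estimators.
est = np.zeros(d)
for _ in range(B):
    keep = rng.random(n) < theta
    est += np.linalg.pinv(X[keep]) @ Y[keep]
est /= B

train = np.sum((Y - X @ est) ** 2) / n       # training error, normalized by n
risk = np.sum((est - beta) ** 2)             # isotropic out-of-sample risk
rhs = (1 - theta) ** 2 * (risk + sigma**2)   # Theorem 5.3 prediction
print(train, rhs)                            # should be close for large n and B
```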
5.3 Adversarial risk This subsection examines the adversarial robustness of the bagged min-norm least square estimator under adversarial attacks, focusing on the case where the features are isotropic. For any estimator $\widehat{\beta}$, we introduce the $\ell_{2}$ adversarial risk as follows: $$R^{\rm{adv}}(\widehat{\beta};\delta):=\mathbb{E}\left[\max_{\|x\|_{2}\leq\delta}\left(x_{\textrm{new}}^{\top}\beta-(x_{\textrm{new}}+x)^{\top}\widehat{\beta}\right)^{2}\right],$$ where the expectation is taken with respect to the test feature $x_{\textrm{new}}$. Assuming that $x_{\textrm{new}}\sim\mathcal{N}(0,I)$ and using (Javanmard et al.,, 2020, Lemma 3.1), the $\ell_{2}$ adversarial risk of any estimator $\widehat{\beta}$ can be expressed as $$\displaystyle R^{\rm{adv}}(\widehat{\beta};\delta)=R(\widehat{\beta})+\delta^{2}\|\widehat{\beta}\|_{2}^{2}+2\sqrt{\frac{2}{\pi}}\delta\|\widehat{\beta}\|_{2}\left(\sigma^{2}+R(\widehat{\beta})\right)^{1/2},$$ (5.1) where $$\displaystyle R(\widehat{\beta}):=\mathbb{E}\left[\left(x_{\textrm{new}}^{\top}\beta-x_{\textrm{new}}^{\top}\widehat{\beta}\right)^{2}\;\middle|\;\beta,\widehat{\beta}\right]$$ with the conditional expectation taken with respect to the test feature $x_{\textrm{new}}$. Hence, the adversarial risk of any estimator $\widehat{\beta}$ depends on the risk $R(\widehat{\beta})$ and its norm $\|\widehat{\beta}\|_{2}$. This newly defined risk slightly differs from the one in equation (2.5). Our next result shows that, with an additional assumption, $R(\widehat{\beta}^{B})$ and $R(\widehat{\beta}^{\rm mn})$ are asymptotically equivalent to $R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$ and $R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{\rm mn})$ respectively, which holds in the general context of correlated features. Assumption 6. Assume that the noises $\varepsilon_{i}$ have bounded $(4+\epsilon)$-th moments for some $\epsilon>0$. Lemma 5.4. Assume Assumptions 1-4, and 6. 
Then, we have almost surely $$\displaystyle\lim_{n,d\rightarrow+\infty}R(\widehat{\beta}^{B})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})~{}~{}~{}\textrm{and}$$ $$\displaystyle\lim_{n,d\rightarrow+\infty}R(\widehat{\beta}^{\rm mn})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{\rm mn}).$$ We proceed to characterize the norm of the bagged least square estimator and illustrate how bagging leads to a reduction in the norm, resulting in enhanced adversarial robustness. Lemma 5.5. Assume Assumptions 1-4, 6, and $\Sigma=I$. Then we have $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\|\widehat{\beta}^{B}\|_{2}^{2}=\begin{cases}r^{2}+\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<\theta\\ r^{2}\frac{\theta^{2}(\gamma+1-2\theta)}{\gamma(\gamma-\theta^{2})}+\sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}},&\gamma>\theta\end{cases}{\quad\rm a.s.}$$ We compare the norm of the bagged estimator with that of the min-norm estimator. Using a variant of (Hastie et al.,, 2022, Corollary 1), we obtain that the squared $\ell_{2}$-norm of the min-norm estimator satisfies $$\lim_{n,d\rightarrow+\infty}\|\widehat{\beta}^{\rm mn}\|_{2}^{2}=\begin{cases}r^{2}+\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<1\\ r^{2}\frac{1}{\gamma}+\sigma^{2}\frac{1}{\gamma-1},&\gamma>1\end{cases}{\quad\rm a.s.}$$ Bagging shrinks the norm of the full-sample min-norm estimator: $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\frac{\|\widehat{\beta}^{B}\|_{2}^{2}}{\|\widehat{\beta}^{\rm mn}\|_{2}^{2}}\leq\begin{cases}\frac{\theta}{\gamma}<1,&1>\gamma>\theta\\ \max\left\{\frac{\theta}{\gamma^{2}},\theta^{2}\right\}<1,&\gamma>1>\theta\end{cases}.$$ With the limiting risks and squared norms of $\widehat{\beta}^{B}$ and $\widehat{\beta}^{\rm mn}$ available we can calculate the limiting adversarial risks of $\widehat{\beta}^{B}$ and $\widehat{\beta}^{\rm mn}$ according to equation (5.1). 
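Putting the pieces together, the sketch below (ours, at sample parameter values in the regime $\gamma>1>\theta$) checks the norm shrinkage bound above and implements the adversarial risk decomposition (5.1); the helper name `adversarial_risk` is not from the paper.

```python
import numpy as np

def adversarial_risk(risk, norm, delta, sigma):
    # l2 adversarial risk via the decomposition (5.1):
    # risk is R(beta_hat), norm is ||beta_hat||_2, delta is the attack budget.
    return (risk + delta**2 * norm**2
            + 2 * np.sqrt(2 / np.pi) * delta * norm * np.sqrt(sigma**2 + risk))

# Limiting squared norms in the regime gamma > 1 > theta
# (Lemma 5.5 and the min-norm counterpart).
gamma, theta, r, sigma = 2.0, 0.5, 3.0, 3.0
bagged_norm2 = (r**2 * theta**2 * (gamma + 1 - 2 * theta) / (gamma * (gamma - theta**2))
                + sigma**2 * theta**2 / (gamma - theta**2))
mn_norm2 = r**2 / gamma + sigma**2 / (gamma - 1)

# Norm shrinkage bound from the display above.
assert bagged_norm2 / mn_norm2 <= max(theta / gamma**2, theta**2)

# With delta = 0 the adversarial risk reduces to the ordinary risk.
assert adversarial_risk(2.0, np.sqrt(mn_norm2), 0.0, sigma) == 2.0
```

Since the bagged estimator has both smaller risk and smaller norm here, its adversarial risk is smaller at every attack budget $\delta$.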
Figure 9 compares the $\ell_{2}$ norms and limiting adversarial risks of the bagged estimator and the full-sample min-norm estimator under isotropic features. In all of our cases, the bagged estimator exhibits smaller norms and smaller adversarial risks when compared with the full-sample min-norm estimator, demonstrating its superior adversarial robustness. 6 Conclusions and discussions Summary Interpolators often exhibit high instability as their test risks explode under certain model configurations. This paper delves into the mechanisms through which ensembling improves the stability and thus the generalization performance of individual interpolators. Specifically, we focus on bagging, a widely-used ensemble technique that can be implemented in parallel. Leveraging a form of multiplier bootstrap, we introduce the bagged min-norm least square estimator, which can then be formulated as an average of sketched min-norm least square estimators. Our multiplier bootstrap includes the classical bootstrap with replacement as a special case, and introduces an intriguing variant which we term the Bernoulli bootstrap. Focusing on the proportional regime $d\asymp n$, where $n$ denotes the sample size and $d$ signifies feature dimensionality, we precisely characterize the out-of-sample prediction risks of both sketched and bagged estimators in both underparameterized and overparameterized regimes. Our findings underscore the statistical roles of sketching and bagging. Specifically, sketching modifies the aspect ratio and shifts the interpolation threshold when compared with the full-sample min-norm estimator. Nevertheless, the risk of the sketched estimator is still unbounded around the interpolation threshold due to rapidly increasing variance. On the contrary, bagging effectively mitigates this variance escalation, resulting in bounded limiting risks. General models and loss functions We identify several promising avenues for future research. 
The presented multiplier-bootstrap-based bagged estimator $\widehat{\beta}^{B}$ holds potential for broader applicability beyond the linear regression model and squared loss. However, in such cases, the individual estimator (2.1) may not possess a closed-form representation as in (2.4). Analyzing these more general estimators poses a significant challenge. Random feature subsampling This study primarily focuses on bagging, yet another prominent ensemble learning technique is feature subsampling, notably employed in random forests (Breiman,, 2001). In the context of random forests, adaptive feature subsampling has demonstrated significant superiority over bagging in low signal-to-noise ratio settings (Mentch and Zhou,, 2020). Consequently, it would be immensely valuable to gain a comprehensive understanding of the statistical implications of adaptive feature subsampling, both individually and in combination with bagging. Recent work by LeJeune et al., (2020) has delved into the exact risk analysis of underparameterized least square estimators with uniformly random feature subsampling under isotropic features. However, it remains an open question how to investigate the effects of uniformly random feature subsampling, and even adaptive feature subsampling, under correlated features. Other sketching matrices While this paper has focused on sketched estimators with diagonal and predominantly singular sketching matrices, an extension of our results to encompass other sketching matrices is worth exploring. Notably, sketching matrices, such as i.i.d. and orthogonal sketching matrices, as investigated by Chen et al., (2023), could be considered. It would be intriguing to investigate whether the invariance and implicit regularization effect of bagging hold under these alternative sketching procedures.
Application to streaming data Finally, compared with the classical bootstrap, the proposed Bernoulli bootstrap is expected to lend itself well to streaming data and growing data sets, since the total number of samples does not need to be known in advance of beginning to take bootstrap samples. We shall explore this in future work. References Antun et al., (2020) Antun, V., Renna, F., Poon, C., Adcock, B., and Hansen, A. C. (2020). On instabilities of deep learning in image reconstruction and the potential costs of AI. Proceedings of the National Academy of Sciences, 117(48):30088–30095. Ba et al., (2020) Ba, J., Erdogdu, M., Suzuki, T., Wu, D., and Zhang, T. (2020). Generalization of two-layer neural networks: An asymptotic viewpoint. In International Conference on Learning Representations. Bai and Silverstein, (2010) Bai, Z. and Silverstein, J. W. (2010). Spectral analysis of large dimensional random matrices, volume 20. Springer, New York. Bai and Yin, (1993) Bai, Z. D. and Yin, Y. Q. (1993). Limit of the smallest eigenvalue of a large dimensional sample covariance matrix. The Annals of Probability, 21(3):1275 – 1294. Bartlett et al., (2020) Bartlett, P. L., Long, P. M., Lugosi, G., and Tsigler, A. (2020). Benign overfitting in linear regression. Proceedings of the National Academy of Sciences, 117(48):30063–30070. Belkin et al., (2019) Belkin, M., Hsu, D., Ma, S., and Mandal, S. (2019). Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32):15849–15854. Breiman, (1996) Breiman, L. (1996). Bagging predictors. Machine learning, 24:123–140. Breiman, (2001) Breiman, L. (2001). Random forests. Machine learning, 45:5–32. Bühlmann and Yu, (2002) Bühlmann, P. and Yu, B. (2002). Analyzing bagging. The Annals of Statistics, 30(4):927 – 961. Canziani et al., (2016) Canziani, A., Paszke, A., and Culurciello, E. (2016). 
An analysis of deep neural network models for practical applications. arXiv preprint arXiv:1605.07678. Chen et al., (2023) Chen, X., Zeng, Y., Yang, S., and Sun, Q. (2023). Sketched ridgeless linear regression: The role of downsampling. In Proceedings of the 40th International Conference on Machine Learning, pages 5296–5326. PMLR. Couillet and Liao, (2022) Couillet, R. and Liao, Z. (2022). Random Matrix Methods for Machine Learning. Cambridge University Press. Dobriban and Liu, (2019) Dobriban, E. and Liu, S. (2019). Asymptotics for sketching in least squares regression. In Advances in Neural Information Processing Systems, volume 32. Dobriban and Wager, (2018) Dobriban, E. and Wager, S. (2018). High-dimensional asymptotics of prediction: Ridge regression and classification. The Annals of Statistics, 46(1):247–279. Efron, (1979) Efron, B. (1979). Bootstrap methods: another look at the jackknife. The Annals of Statistics, 7(1):1 – 26. El Karoui, (2010) El Karoui, N. (2010). High-dimensionality effects in the Markowitz problem and other quadratic programs with linear constraints: Risk underestimation. The Annals of Statistics, 38(6):3487–3566. El Karoui and Purdom, (2018) El Karoui, N. and Purdom, E. (2018). Can we trust the bootstrap in high-dimensions? The case of linear models. The Journal of Machine Learning Research, 19(1):170–235. Goodfellow et al., (2018) Goodfellow, I., McDaniel, P., and Papernot, N. (2018). Making machine learning robust against adversarial inputs. Communications of the ACM, 61(7):56–66. Hastie et al., (2022) Hastie, T., Montanari, A., Rosset, S., and Tibshirani, R. J. (2022). Surprises in high-dimensional ridgeless least squares interpolation. The Annals of Statistics, 50(2):949–986. Hastie et al., (2009) Hastie, T., Tibshirani, R., Friedman, J. H., and Friedman, J. H. (2009). The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, New York. He et al., (2016) He, K., Zhang, X., Ren, S., and Sun, J. (2016). 
Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Javanmard et al., (2020) Javanmard, A., Soltanolkotabi, M., and Hassani, H. (2020). Precise tradeoffs in adversarial training for linear regression. In Conference on Learning Theory, pages 2034–2078. PMLR. LeCun et al., (2015) LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature, 521(7553):436–444. Ledoit and Péché, (2011) Ledoit, O. and Péché, S. (2011). Eigenvectors of some large sample covariance matrix ensembles. Probability Theory and Related Fields, 151(1):233–264. Lee et al., (2015) Lee, S., Purushwalkam, S., Cogswell, M., Crandall, D., and Batra, D. (2015). Why M heads are better than one: Training a diverse ensemble of deep networks. arXiv preprint arXiv:1511.06314. LeJeune et al., (2020) LeJeune, D., Javadi, H., and Baraniuk, R. (2020). The implicit regularization of ordinary least squares ensembles. In International Conference on Artificial Intelligence and Statistics, pages 3525–3535. PMLR. Li et al., (2021) Li, Z., Xie, C., and Wang, Q. (2021). Asymptotic normality and confidence intervals for prediction risk of the min-norm least squares estimator. In Proceedings of the thirty-eighth International Conference on Machine Learning, pages 6533–6542. PMLR. Liang and Rakhlin, (2020) Liang, T. and Rakhlin, A. (2020). Just interpolate: Kernel “ridgeless” regression can generalize. The Annals of Statistics, 48(3):1329–1347. Mei and Montanari, (2022) Mei, S. and Montanari, A. (2022). The generalization error of random features regression: Precise asymptotics and the double descent curve. Communications on Pure and Applied Mathematics, 75(4):667–766. Mentch and Zhou, (2020) Mentch, L. and Zhou, S. (2020). Randomization as regularization: A degrees of freedom explanation for random forest success. The Journal of Machine Learning Research, 21(1):6918–6953. 
Neyshabur et al., (2014) Neyshabur, B., Tomioka, R., and Srebro, N. (2014). In search of the real inductive bias: on the role of implicit regularization in deep learning. arXiv preprint arXiv:1412.6614. Novak et al., (2018) Novak, R., Bahri, Y., Abolafia, D. A., Pennington, J., and Sohl-Dickstein, J. (2018). Sensitivity and generalization in neural networks: An empirical study. In International Conference on Learning Representations. Quenouille, (1949) Quenouille, M. H. (1949). Problems in plane sampling. The Annals of Mathematical Statistics, 20(3):355 – 375. Raskutti and Mahoney, (2016) Raskutti, G. and Mahoney, M. W. (2016). A statistical perspective on randomized sketching for ordinary least-squares. Journal of Machine Learning Research, 17(1):7508–7538. Richards et al., (2021) Richards, D., Mourtada, J., and Rosasco, L. (2021). Asymptotics of ridge (less) regression under general source condition. In Proceedings of the twenty-fourth International Conference on Artificial Intelligence and Statistics, pages 3889–3897. PMLR. Schmidhuber, (2015) Schmidhuber, J. (2015). Deep learning in neural networks: An overview. Neural Networks, 61:85–117. Serdobolskii, (2007) Serdobolskii, V. I. (2007). Multiparametric Statistics. Elsevier. Wei et al., (2023) Wei, C., Wang, Y.-C., Wang, B., and Kuo, C.-C. J. (2023). An overview on language models: Recent developments and outlook. arXiv preprint arXiv:2303.05759. Wyner et al., (2017) Wyner, A. J., Olson, M., Bleich, J., and Mease, D. (2017). Explaining the success of adaboost and random forests as interpolating classifiers. The Journal of Machine Learning Research, 18(1):1558–1590. Zhang et al., (2021) Zhang, C., Bengio, S., Hardt, M., Recht, B., and Vinyals, O. (2021). Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115. Zhang, (2007) Zhang, L. (2007). Spectral Analysis of Large Dimensional Random Matrices. PhD thesis, National University of Singapore, Singapore. 
Appendix Appendix S.1 Basics This subsection introduces necessary concepts and results that will be used throughout the appendix. We first introduce the Stieltjes transform. Under suitable conditions, a measure $\mu$ can be recovered from its Stieltjes transform via the Stieltjes–Perron inversion formula. Definition S.1.1 (Stieltjes transform). The Stieltjes transform $m_{\mu}(z)$ of a measure $\mu$ with support $\mathrm{supp}(\mu)$ is the function of the complex variable $z$ defined outside $\mathrm{supp}(\mu)$ by the formula $$\displaystyle m_{\mu}(z)=\int_{\mathrm{supp}(\mu)}\frac{\mu(dt)}{t-z},~{}z\in\mathbb{C}\setminus\mathrm{supp}(\mu).$$ When $\mu$ is clear from the context, we shall omit the subscript $\mu$ and write $m_{\mu}(z)$ as $m(z)$. We discuss three Stieltjes transforms that will be used in the proofs for sketching, bagging, and ridge equivalence. Let $S_{k}$ be a possibly singular diagonal sketching matrix in the form of (2.3) whose diagonal entries consist of multipliers $\mathcal{W}$ satisfying Assumption 3. Recall that $X$ is the data matrix. Let $\widehat{\Sigma}_{k}=X^{\top}S_{k}^{\top}S_{k}X/n$ be the sketched covariance matrix, and $\mu_{\widehat{\Sigma}_{k}}=\frac{1}{d}\sum_{i=1}^{d}\delta_{\lambda_{i}(\widehat{\Sigma}_{k})}$ be the empirical spectral measure of $\widehat{\Sigma}_{k}$.
Then the associated Stieltjes transform is $$\displaystyle m_{1,n}(z)$$ $$\displaystyle=\int_{\mathrm{supp}(\mu_{\widehat{\Sigma}_{k}})}\frac{1}{t-z}\mu_{\widehat{\Sigma}_{k}}(dt)=\frac{1}{d}\sum_{i=1}^{d}\frac{1}{\lambda_{i}(\widehat{\Sigma}_{k})-z}=\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\right).$$ Define $m_{1}(z)$ as a solution to the following self-consistent equation: $$m_{1}(z)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-zm_{1}(z)=1,~{}~{}~{}z<0.$$ (S.1.1) Our first lemma concerns the almost sure convergence of $\mu_{\widehat{\Sigma}_{k}}$ to some probability measure $\mu$ characterized by $m_{1}(z)$ and the existence of $m_{1}(0)$, which will be used for proving results on sketched estimators. Lemma S.1.2. Assume Assumptions 1-3, and $\Sigma=I$. As $n,d\rightarrow+\infty$, we have $$\displaystyle\mu_{\widehat{\Sigma}_{k}}\rightarrow\mu{\quad\rm a.s.}$$ where $\mu$ is the probability distribution defined by the Stieltjes transform $m_{1}(z)$ and $m_{1}(z)$ is the unique positive solution to (S.1.1). Consequently, $m_{1,n}(z)\rightarrow m_{1}(z)$ almost surely. When $\gamma<\theta$, $m_{1}(0):=\lim_{z\rightarrow 0-}m_{1}(z)$ exists and satisfies equation (S.1.1) at $z=0$. For two sketching matrices $S_{k}$ and $S_{\ell}$, let $\widehat{\Sigma}_{k}=X^{\top}S_{k}^{\top}S_{k}X/n$ and $\widehat{\Sigma}_{\ell}=X^{\top}S_{\ell}^{\top}S_{\ell}X/n$ be the corresponding sketched covariance matrices. Let $\mu_{\widehat{\Sigma}_{k}}=\sum_{i=1}^{d}\delta_{\lambda_{i}(\widehat{\Sigma}_{k})}/d$ and $\mu_{\widehat{\Sigma}_{\ell}}=\sum_{i=1}^{d}\delta_{\lambda_{i}(\widehat{\Sigma}_{\ell})}/d$ be the empirical spectral measures of $\widehat{\Sigma}_{k}$ and $\widehat{\Sigma}_{\ell}$ respectively.
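Lemma S.1.2 can be illustrated numerically. The sketch below is ours: it assumes Bernoulli multipliers $w\in\{0,1\}$ with $\mathbb{E}[w]=\theta$, solves (S.1.1) by fixed-point iteration at $z=-1$, and compares the solution with the empirical transform $m_{1,n}(-1)$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, theta, z = 3000, 1500, 0.5, -1.0    # gamma = d/n = 0.5
gamma = d / n

# Fixed-point iteration for (S.1.1) with Bernoulli(theta) multipliers, for which
# E[w / (1 + gamma*w*m)] = theta / (1 + gamma*m), so m = 1 / (theta/(1 + gamma*m) - z).
m = 1.0
for _ in range(500):
    m = 1.0 / (theta / (1.0 + gamma * m) - z)

# Empirical counterpart m_{1,n}(z) = tr((Sigma_hat_k - z I)^{-1}) / d.
X = rng.normal(size=(n, d))
w = (rng.random(n) < theta).astype(float)
sketched_cov = (X * w[:, None]).T @ X / n     # X^T diag(w) X / n, using w^2 = w for 0/1 weights
m_emp = np.mean(1.0 / (np.linalg.eigvalsh(sketched_cov) - z))
print(m, m_emp)                               # the two should agree closely
```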
Define $m_{2,n}(z)$ as $$\displaystyle m_{2,n}(z):={d}^{-1}\textrm{tr}((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}).$$ Then the following result establishes the almost sure convergence of $m_{2,n}(z)$, which will be used for proving results on bagged estimators. Lemma S.1.3. Assume Assumptions 1-3 and $\Sigma=I$. For any $z<0$, as $n,d\rightarrow+\infty$, $$\lim_{n,d\rightarrow+\infty}m_{2,n}(z)=m_{2}(z){\quad\rm a.s.}$$ where $m_{2}(z)$ satisfies $$m_{2}(z)\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(z)}\right]\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-zm_{2}(z)=m_{1}(z).$$ (S.1.2) The above result also holds for $z=0$ when $\gamma<\theta$. Let $m_{2,\lambda,n}(z):=\textrm{tr}((X^{\top}X/n+\lambda I)^{-1}(X^{\top}S_{k}^{\top}S_{k}X/n-z)^{-1})/d.$ Then the following result establishes the almost sure convergence of $m_{2,\lambda,n}(z)$ and will be used for proving the ridge equivalence result. Lemma S.1.4. Assume Assumptions 1-3 and $\Sigma=I$. Then, for any $z<0$, we have, as $n,d\rightarrow+\infty$, $$\lim_{n,d\rightarrow+\infty}m_{2,\lambda,n}(z)=m_{2,\lambda}(z){\quad\rm a.s.}$$ where $m_{2,\lambda}(z)$ satisfies the equation $$m(-\lambda)=m_{2,\lambda}(z)\frac{1}{1+\gamma m(-\lambda)}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-zm_{2,\lambda}(z),$$ (S.1.3) where $m(-\lambda)$ is the Stieltjes transform of the limiting spectral distribution of $X^{\top}X/n$ evaluated at $-\lambda$. When $\gamma<\theta$, the above result also holds for $z=0$. S.1.1 Proofs for Section S.1 This subsection proves Lemmas S.1.2-S.1.4 in order. S.1.1.1 Proof of Lemma S.1.2 Proof of Lemma S.1.2. We prove this lemma in two steps. Almost sure convergence of $\mu_{n}$. 
If the multipliers are deterministic and bounded, Theorem 2.7 by Couillet and Liao, (2022) proves that equation (S.1.1) has the unique positive solution $m_{1}(z)$ for $z<0$ and $\mu_{\widehat{\Sigma}_{k}}$ converges almost surely to $\mu$ which is defined by the Stieltjes transform $m_{1}(z)$. The deterministic and bounded assumption on the multipliers can be relaxed to Assumption S.7 by using the arguments presented in Section 4.4 of (Zhang,, 2007). Proving that $m_{1}(0):=\lim_{z\rightarrow 0^{-}}m_{1}(z)$ exists. When $\gamma<\theta$, Lemma S.1.5 and Lemma S.1.6 imply that the sketched sample covariance matrix $X^{\top}S_{k}^{\top}S_{k}X/n$ is almost surely invertible as $n,d\rightarrow+\infty$, and its smallest eigenvalue is strictly greater than zero in the limit. Consequently, $m_{1}(0):=\lim_{z\rightarrow 0^{-}}m_{1}(z)$ exists and satisfies $$\displaystyle m_{1}(0)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}m_{1}(z)=\lim_{z\rightarrow 0^{-}}\int\frac{1}{t-z}d\mu(t)=\int\frac{1}{t}d\mu(t),$$ (S.1.4) where the last line follows from the dominated convergence theorem. Furthermore, for $z\leq 0$, when $\gamma<\theta$, $m_{1}(z)$ is bounded. Applying the dominated convergence theorem, we conclude that $m_{1}(0)=\lim_{z\rightarrow 0^{-}}m_{1}(z)$ satisfies equation (S.1.1). Hence, the desired result follows. ∎ S.1.1.2 Proof of Lemma S.1.3 Proof of Lemma S.1.3. Let $z<0$.
We start by expressing $(\widehat{\Sigma}_{k}-zI)^{-1}$ as $$\displaystyle(\widehat{\Sigma}_{k}-zI)^{-1}$$ $$\displaystyle=(\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)$$ $$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}w_{\ell,i}(\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}x_{i}x_{i}^{\top}-z(\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}.$$ Taking the trace and applying the Sherman–Morrison formula, we obtain $$\displaystyle\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1}$$ $$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}w_{\ell,i}\frac{\textrm{tr}(x_{i}^{\top}(\widehat{\Sigma}_{k,-i}-zI)^{-1}(\widehat{\Sigma}_{\ell,-i}-zI)^{-1}x_{i})}{(1+\frac{1}{n}w_{k,i}x_{i}^{\top}(\widehat{\Sigma}_{k,-i}-zI)^{-1}x_{i})(1+\frac{1}{n}w_{\ell,i}x_{i}^{\top}(\widehat{\Sigma}_{\ell,-i}-zI)^{-1}x_{i})}$$ $$\displaystyle\quad\quad\quad-z\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}.$$ Applying Lemma S.1.7 and Lemma S.6.9, we obtain $$\displaystyle\frac{1}{d}\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1}-\frac{1}{n}\sum_{i=1}^{n}w_{\ell,i}\frac{\frac{1}{d}\textrm{tr}((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1})}{(1+\frac{1}{n}w_{k,i}\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1})(1+\frac{1}{n}w_{\ell,i}\textrm{tr}(\widehat{\Sigma}_{\ell}-zI)^{-1})}$$ $$\displaystyle\quad\quad\quad-z\,\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)\rightarrow 0{\quad\rm a.s.}$$ as $n,d\rightarrow+\infty$. 
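As a numerical sanity check of the two fixed-point equations (a side remark, not part of the proof): taking $\mu_{w}=\mathrm{Bernoulli}(\theta)$ — an illustrative choice of multiplier law, not one imposed by the lemmas — both expectations in equation (S.1.2) have closed forms, and the $z=0$ fixed points can be computed by bisection. The reference values $m_{1}(0)=1/(\theta-\gamma)$ and $m_{2}(0)=1/\big((1-\gamma)(\theta-\gamma)^{2}\big)$ asserted below are derived by hand under this Bernoulli assumption with $\gamma<\theta$.

```python
# Illustrative check of the z = 0 fixed points under the assumption
# mu_w = Bernoulli(theta) (made for this sketch only), so that
#   E[w / (1 + g*w*m)] = theta / (1 + g*m),
#   E[1 / (1 + g*w*m)] = 1 - theta + theta / (1 + g*m).
gamma, theta = 0.3, 0.8            # underparameterized regime: gamma < theta

def ew_over(m):                    # E[w / (1 + gamma*w*m)]
    return theta / (1.0 + gamma * m)

def e_over(m):                     # E[1 / (1 + gamma*w*m)]
    return 1.0 - theta + theta / (1.0 + gamma * m)

# Solve m * E[w/(1+gamma*w*m)] = 1 for m = m_1(0) by bisection;
# the left-hand side is increasing in m, so bisection applies.
lo, hi = 1e-9, 1e9
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mid * ew_over(mid) < 1.0:
        lo = mid
    else:
        hi = mid
m1 = 0.5 * (lo + hi)

# Equation (S.1.2) at z = 0 then determines m_2(0) directly.
m2 = m1 / (e_over(m1) * ew_over(m1))

assert abs(m1 - 1.0 / (theta - gamma)) < 1e-6
assert abs(m2 - 1.0 / ((1.0 - gamma) * (theta - gamma) ** 2)) < 1e-6
```

With $\gamma=0.3$ and $\theta=0.8$ this gives $m_{1}(0)=2$ and $m_{2}(0)=40/7$, matching the hand computation.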
Applying Lemma S.1.2 then yields $$m_{1}(z)-\frac{1}{n}\sum_{i=1}^{n}w_{\ell,i}\frac{m_{2,n}(z)}{(1+\gamma w_{k,i}m_{1}(z))(1+\gamma w_{\ell,i}m_{1}(z))}+zm_{2,n}(z)\rightarrow 0{\quad\rm a.s.}$$ Since $m_{1}(z)>0$ by Lemma S.1.2, we obtain $$\frac{w_{\ell,i}}{(1+\gamma w_{k,i}m_{1}(z))(1+\gamma w_{\ell,i}m_{1}(z))}\leq\frac{1}{\gamma m_{1}(z)}<\infty.$$ Hence, using Assumption 3 and the dominated convergence theorem, we obtain $$m_{1}(z)-m_{2,n}(z)\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(z)}\right]\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]+zm_{2,n}(z)\rightarrow 0{\quad\rm a.s.}$$ Consequently, the limit of $m_{2,n}(z)$ exists and satisfies equation (S.1.2). This completes the proof for the case of $z<0$. In the case where $\gamma<\theta$ and $z=0$, we may assume that both sketched sample covariance matrices are invertible almost surely as $n,d\rightarrow+\infty$, as indicated by Lemma S.1.5. Applying the same argument as in the case of $z<0$ then finishes the proof. ∎ S.1.1.3 Proof of Lemma S.1.4 Proof of Lemma S.1.4. The proof of Lemma S.1.4 is similar to that of Lemma S.1.3. The only difference is that the ridge regression involves the term $(X^{\top}X/n+\lambda I)^{-1}$, which can be written as $(X^{\top}S^{\top}SX/n+\lambda I)^{-1}$ with sketching matrix $S=I$. One can check that all arguments in the proof of Lemma S.1.3 carry through. ∎ S.1.2 Technical lemmas This subsection proves technical lemmas that are used in the proofs of the supporting lemmas in the previous subsection. Lemma S.1.5. Let $\Omega_{n}:=\{\omega\in\Omega:(X^{\top}S^{\top}SX/n)(\omega)~{}\text{is invertible}\}$. Assume Assumptions 1-3. Then $$\mathbb{P}\left(\lim_{n,d\rightarrow+\infty}\Omega_{n}\right)=\begin{cases}1,\quad&~{}\gamma<\theta,\\ 0,\quad&~{}\gamma>\theta.\end{cases}$$ Proof of Lemma S.1.5. Let $A_{0}:=\{i:~{}S_{i,i}\neq 0\}$.
The smallest eigenvalue can be lower bounded as $$\displaystyle\lambda_{\min}(X^{\top}S^{\top}SX/n)$$ $$\displaystyle=\lambda_{\min}\left(\sum_{i=1}^{n}w_{i}x_{i}x_{i}^{\top}/n\right)$$ $$\displaystyle=\lambda_{\min}\left(\sum_{i\in A_{0}}w_{i}x_{i}x_{i}^{\top}/n\right)$$ $$\displaystyle\geq c_{w}\lambda_{\min}\left(\sum_{i\in A_{0}}x_{i}x_{i}^{\top}/n\right)$$ $$\displaystyle\geq c_{w}c_{\lambda}\lambda_{\min}\left(\sum_{i\in A_{0}}z_{i}z_{i}^{\top}/n\right).$$ Applying Lemma S.6.3, when $\lim|A_{0}|/d>1$, we obtain $$\lambda_{\min}(X^{\top}S^{\top}SX/n)\geq c_{w}c_{\lambda}\left(1-\sqrt{\lim d/|A_{0}|}\right)^{2}\lim|A_{0}|/n>0,$$ which implies that $X^{\top}S^{\top}SX/n$ is invertible in the limit. On the other hand, for any vector $v\in\mathbb{R}^{d}$, we have $$\displaystyle v^{\top}X^{\top}S^{\top}SXv$$ $$\displaystyle=\sum_{i\in A_{0}}w_{i}v^{\top}x_{i}x_{i}^{\top}v$$ $$\displaystyle=\sum_{i\in A_{0}}w_{i}(x_{i}^{\top}v)^{2}.$$ Therefore, when $|A_{0}|<d$, there exists a non-zero vector $v$ such that $v^{\top}X^{\top}S^{\top}SXv=0$. The cardinality of $A_{0}$ can be expressed as: $$|A_{0}|=\sum_{i=1}^{n}1\left({w_{i}\neq 0}\right).$$ Using Assumption 3, we have $$|A_{0}|/n=\frac{1}{n}\sum_{i=1}^{n}1\left({w_{i}\neq 0}\right)\rightarrow\mathbb{E}_{\mu_{w}}1\left({w\neq 0}\right)=\theta~{}{\quad\rm a.s.}$$ (S.1.5) as $n,d\rightarrow+\infty$. This finishes the proof. ∎ Lemma S.1.6. Assume Assumptions 1-3. There exists a positive constant $c$ such that, as $n,d\rightarrow+\infty$, $$\lim_{n,d\rightarrow+\infty}\lambda^{+}_{\min}(X^{\top}S^{\top}SX/n)\geq c\left(1-\sqrt{\gamma/\theta}\right)^{2}{\quad\rm a.s.}$$ where $\lambda^{+}_{\min}(A)$ is the smallest non-zero eigenvalue of $A$. Proof of Lemma S.1.6. Given Assumption 2, we have $x_{i}=\Sigma^{1/2}_{d}z_{i}$. Let $A_{0}:=\{i~{}|~{}S_{i,i}\neq 0\}$, and $X_{A_{0}}^{\top}X_{A_{0}}=\sum_{i\in A_{0}}x_{i}x_{i}^{\top}$ and $Z_{A_{0}}^{\top}Z_{A_{0}}=\sum_{i\in A_{0}}z_{i}z_{i}^{\top}$.
Let $v$ be the unit vector corresponding to the smallest non-zero eigenvalue of $X^{\top}S^{\top}SX/n$. Then we have $$\displaystyle\lim_{n,d\rightarrow+\infty}\lambda^{+}_{\min}(X^{\top}S^{\top}SX/n)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}{v^{\top}X^{\top}S^{\top}SXv/n}$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}{\sum_{i\in A_{0}}w_{i}(x_{i}^{\top}v)^{2}/n}$$ $$\displaystyle\geq\lim_{n,d\rightarrow+\infty}{c_{w}}{\sum_{i\in A_{0}}(x_{i}^{\top}v)^{2}/n}$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}{c_{w}}{v^{\top}X_{A_{0}}^{\top}X_{A_{0}}v/n}$$ $$\displaystyle\geq\lim_{n,d\rightarrow+\infty}{c_{w}}\lambda_{\min}(\Sigma)\lambda^{+}_{\min}(Z_{A_{0}}^{\top}Z_{A_{0}}/n)$$ $$\displaystyle\geq\lim_{n,d\rightarrow+\infty}c_{\lambda}{c_{w}}\left(1-\sqrt{\gamma/\theta}\right)^{2},$$ where the first inequality follows from Assumption 3, and the second inequality uses the fact that $v^{\top}X^{\top}S^{\top}SXv\neq 0$, which implies $v^{\top}X_{A_{0}}^{\top}X_{A_{0}}v/n\geq\lambda^{+}_{\min}(X_{A_{0}}^{\top}X_{A_{0}}/n)$. Finally, the last line uses Lemma S.6.3, along with Assumption 1 and equation (S.1.5). ∎ Lemma S.1.7. Consider two subsamples $X_{k}:=S_{k}X$ and $X_{\ell}:=S_{\ell}X$. Let $z<0$. Then the following holds $$\displaystyle\left|\textrm{tr}\left((\widehat{\Sigma}_{h,-i}-zI)^{-1}-(\widehat{\Sigma}_{h}-zI)^{-1}\right)\right|\leq-\frac{1}{z}~{}~{}~{}\textrm{and}$$ $$\displaystyle\left|\textrm{tr}((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1})-\textrm{tr}((\widehat{\Sigma}_{k,-i}-zI)^{-1}(\widehat{\Sigma}_{\ell,-i}-zI)^{-1})\right|\leq\frac{2}{z^{2}},$$ where $\widehat{\Sigma}_{h,-i}:=\widehat{\Sigma}_{h}-\frac{1}{n}w_{h,i}x_{i}x_{i}^{\top}$ for $h=k,\ell$. 
Furthermore, when $\gamma<\theta$, there exists a positive constant $c$ such that, under Assumptions 1-3, as $n,d\rightarrow+\infty$, we have $$\left|\textrm{tr}((\widehat{\Sigma}_{k})^{-1}(\widehat{\Sigma}_{\ell})^{-1})-\textrm{tr}((\widehat{\Sigma}_{k,-i})^{-1}(\widehat{\Sigma}_{\ell,-i})^{-1})\right|\leq c\left(1-\sqrt{\gamma/\theta}\right)^{-4}~{}{\quad\rm a.s.}$$ Proof of Lemma S.1.7. We first prove the first two results. For any real positive semi-definite matrix $A\in\mathbb{R}^{d\times d}$ and any $z<0$, we have $\big{\|}(A-zI)^{-1}\big{\|}_{2}\leq-1/z$. Using the Sherman-Morrison formula, we obtain that $$\displaystyle(\widehat{\Sigma}_{h,-i}-zI)^{-1}-(\widehat{\Sigma}_{h}-zI)^{-1}$$ $$\displaystyle=\frac{(\widehat{\Sigma}_{h,-i}-zI)^{-1}w_{h,i}x_{i}x_{i}^{\mathrm{\scriptscriptstyle T}}(\widehat{\Sigma}_{h,-i}-zI)^{-1}/n}{1+w_{h,i}x_{i}^{\top}(\widehat{\Sigma}_{h,-i}-zI)^{-1}x_{i}/n}$$ is positive semi-definite. Thus, applying Lemma S.6.4 yields $$\displaystyle 0\leq\textrm{tr}\left((\widehat{\Sigma}_{h,-i}-zI)^{-1}-(\widehat{\Sigma}_{h}-zI)^{-1}\right)\leq-\frac{1}{z}.$$ This finishes the proof for the first result. Let $\lambda_{1}(A),\ldots,\lambda_{d}(A)$ be the eigenvalues of $A$ in decreasing order.
Then $$\displaystyle\left|\textrm{tr}((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1})-\textrm{tr}((\widehat{\Sigma}_{k,-i}-zI)^{-1}(\widehat{\Sigma}_{\ell,-i}-zI)^{-1})\right|$$ $$\displaystyle=\left|\textrm{tr}((\widehat{\Sigma}_{k}-zI)^{-1}((\widehat{\Sigma}_{\ell}-zI)^{-1}-(\widehat{\Sigma}_{\ell,-i}-zI)^{-1}))+\textrm{tr}((\widehat{\Sigma}_{\ell,-i}-zI)^{-1}((\widehat{\Sigma}_{k}-zI)^{-1}-(\widehat{\Sigma}_{k,-i}-zI)^{-1}))\right|$$ $$\displaystyle\leq\sum_{j=1}^{d}\lambda_{j}\left((\widehat{\Sigma}_{k}-zI)^{-1}\right)\lambda_{j}\left((\widehat{\Sigma}_{\ell,-i}-zI)^{-1}-(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)$$ $$\displaystyle\quad\quad\quad+\sum_{j=1}^{d}\lambda_{j}\left((\widehat{\Sigma}_{\ell,-i}-zI)^{-1}\right)\lambda_{j}\left((\widehat{\Sigma}_{k,-i}-zI)^{-1}-(\widehat{\Sigma}_{k}-zI)^{-1}\right)$$ $$\displaystyle\leq-\frac{1}{z}\left(\textrm{tr}((\widehat{\Sigma}_{\ell,-i}-zI)^{-1}-(\widehat{\Sigma}_{\ell}-zI)^{-1})+\textrm{tr}((\widehat{\Sigma}_{k,-i}-zI)^{-1}-(\widehat{\Sigma}_{k}-zI)^{-1})\right)$$ $$\displaystyle\leq\frac{2}{z^{2}},$$ where the first inequality uses von Neumann’s trace inequality (Lemma S.6.5). We then prove the last result. Because $\gamma<\theta$ and by Lemma S.1.5, $\widehat{\Sigma}_{h}$ and $\widehat{\Sigma}_{h,-i}$ are invertible almost surely as $n,d\rightarrow+\infty$.
Using the Sherman-Morrison formula (Lemma S.6.6), we have, for each $h\in\{k,\ell\}$, $$\displaystyle\left|\textrm{tr}((\widehat{\Sigma}_{h})^{-1}-(\widehat{\Sigma}_{h,-i})^{-1})\right|$$ $$\displaystyle=\left|\textrm{tr}\left(\left((\widehat{\Sigma}_{h,-i})^{-1}-\frac{(\widehat{\Sigma}_{h,-i})^{-1}w_{h,i}x_{i}x_{i}^{\mathrm{\scriptscriptstyle T}}(\widehat{\Sigma}_{h,-i})^{-1}/n}{1+w_{h,i}x_{i}^{\top}(\widehat{\Sigma}_{h,-i})^{-1}x_{i}/n}\right)-(\widehat{\Sigma}_{h,-i})^{-1}\right)\right|$$ $$\displaystyle=\textrm{tr}\left(\frac{(\widehat{\Sigma}_{h,-i})^{-1}w_{h,i}x_{i}x_{i}^{\top}(\widehat{\Sigma}_{h,-i})^{-1}/n}{1+w_{h,i}x_{i}^{\top}(\widehat{\Sigma}_{h,-i})^{-1}x_{i}/n}\right)$$ $$\displaystyle=\frac{w_{h,i}\|(\widehat{\Sigma}_{h,-i})^{-1}x_{i}\|_{2}^{2}/n}{1+w_{h,i}x_{i}^{\top}(\widehat{\Sigma}_{h,-i})^{-1}x_{i}/n}$$ $$\displaystyle\leq\frac{\|(\widehat{\Sigma}_{h,-i})^{-1}x_{i}\|_{2}^{2}}{x_{i}^{\top}(\widehat{\Sigma}_{h,-i})^{-1}x_{i}}$$ $$\displaystyle\leq\|(\widehat{\Sigma}_{h,-i})^{-1}\|_{2}$$ $$\displaystyle=\frac{1}{\lambda_{\min}\left(\widehat{\Sigma}_{h,-i}\right)}$$ $$\displaystyle\leq c_{0}\left(1-\sqrt{\gamma/\theta}\right)^{-2}{\quad\rm a.s.}$$ where the last line follows from Lemma S.1.6. Then, following the same argument as in the proof of the second result, we obtain for some constant $c$ $$\displaystyle\left|\textrm{tr}((\widehat{\Sigma}_{k})^{-1}(\widehat{\Sigma}_{\ell})^{-1})-\textrm{tr}((\widehat{\Sigma}_{k,-i})^{-1}(\widehat{\Sigma}_{\ell,-i})^{-1})\right|\leq c\left(1-\sqrt{\gamma/\theta}\right)^{-4}.$$ This finishes the proof. ∎ Appendix S.2 Proofs for Section 2 S.2.1 Proof of Lemma 2.1 Proof of Lemma 2.1.
Using Definition (2.2), we obtain $$\displaystyle\widehat{\beta}^{B}$$ $$\displaystyle=\frac{1}{B}\sum_{k=1}^{B}\widehat{\beta}_{k}$$ $$\displaystyle=\frac{1}{B}\sum_{k=1}^{B}\left((X^{\top}S_{k}^{\top}S_{k}X)^{+}X^{\top}S_{k}^{\top}S_{k}X\beta+(X^{\top}S_{k}^{\top}S_{k}X)^{+}X^{\top}S_{k}^{\top}S_{k}E\right).$$ For the bias part, $$\displaystyle B_{X,\,\mathcal{W},\,\beta}\left(\widehat{\beta}^{B}\right)$$ $$\displaystyle=\left\|\mathbb{E}(\widehat{\beta}^{B}\;\middle|\;X,\,\mathcal{W},\,\beta)-\beta\right\|_{\Sigma}^{2}$$ $$\displaystyle=\left\|\frac{1}{B}\sum_{k=1}^{B}(\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}-I)\beta\right\|_{\Sigma}^{2}$$ $$\displaystyle=\frac{1}{B^{2}}\sum_{k,\ell}\beta^{\top}\Pi_{k}\Sigma\Pi_{\ell}\beta.$$ For the variance part, since the $\varepsilon_{i}$ are i.i.d., we have $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle=\textrm{tr}\left[{\mathrm{cov}}\left(\widehat{\beta}^{B}\;\middle|\;X,\,\mathcal{W},\,\beta\right)\Sigma\right]$$ $$\displaystyle=\textrm{tr}\left[{\mathrm{cov}}\left(\frac{1}{B}\sum_{k=1}^{B}(X^{\top}S_{k}^{\top}S_{k}X)^{+}X^{\top}S_{k}^{\top}S_{k}E\;\middle|\;X,\,\mathcal{W},\,\beta\right)\Sigma\right]$$ $$\displaystyle=\frac{\sigma^{2}}{B^{2}}\sum_{k,\ell}\textrm{tr}\left(\frac{1}{n^{2}}\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma\right).$$ ∎ S.2.2 Proof of Lemma 2.2 Proof of Lemma 2.2. Let $\pi_{1},\dots,\pi_{n}$ be independent and identically distributed random variables following a Poisson distribution with parameter 1. Define $\Pi_{n}=\sum_{i=1}^{n}\pi_{i}$. Using the same arguments as in the proof of Proposition 4.10 in El Karoui (2010), we can establish that $$(\pi_{1},\dots,\pi_{n})\,|\,\Pi_{n}=n\sim{\rm Multinomial}(n;1/n,\ldots,1/n).$$ Now, consider a bounded function $f$, and let $W\sim\text{Poisson}(1)$ be a random variable.
We have $$\displaystyle\Pr\left(\left|\frac{1}{n}\sum_{i=1}^{n}f(w_{i})-\mathbb{E}f(W)\right|\geq\epsilon\right)$$ $$\displaystyle=\Pr\left(\left|\frac{1}{n}\sum_{i=1}^{n}f(\pi_{i})-\mathbb{E}f(W)\right|\geq\epsilon\;\middle|\;\Pi_{n}=n\right)$$ $$\displaystyle\leq\Pr\left(\left|\frac{1}{n}\sum_{i=1}^{n}f(\pi_{i})-\mathbb{E}f(W)\right|\geq\epsilon\right)/P(\Pi_{n}=n).$$ Since $f$ is bounded and $\mathbb{E}f(\pi_{i})=\mathbb{E}f(W)$, it can be shown that $$\displaystyle\mathbb{E}\left(\frac{1}{n}\sum_{i=1}^{n}f(\pi_{i})-\mathbb{E}f(W)\right)^{4}$$ $$\displaystyle=\sum_{i=1}^{n}\frac{1}{n^{4}}\mathbb{E}\left(f(\pi_{i})-\mathbb{E}f(W)\right)^{4}+\sum_{i\neq j}\frac{1}{n^{4}}\mathbb{E}\left(f(\pi_{i})-\mathbb{E}f(W)\right)^{2}\mathbb{E}\left(f(\pi_{j})-\mathbb{E}f(W)\right)^{2}$$ $$\displaystyle\leq O(n^{-2}).$$ Considering that $\Pi_{n}$ follows a Poisson distribution with parameter $n$, and $\Pr(\Pi_{n}=n)\sim 1/\sqrt{2\pi n}$, we can deduce that $\Pr\left(\left|\frac{1}{n}\sum_{i=1}^{n}f(w_{i})-\mathbb{E}f(W)\right|\geq\epsilon\right)\leq O(n^{-3/2})$. By applying the Borel–Cantelli lemma, we have $$\int f(w)d{\widetilde{\mu}_{w}}(w)\rightarrow\mathbb{E}f(W){\quad\rm a.s.}$$ as $n,d\rightarrow+\infty$. ∎ S.2.3 Proof of Lemma 2.3 Proof of Lemma 2.3. By Lemma 2.1, we have $$B_{X,\,\mathcal{W},\,\beta}\left(\widehat{\beta}^{B}\right)=\frac{1}{B^{2}}\sum_{k,\ell}\beta^{\top}\Pi_{k}\Sigma\Pi_{\ell}\beta.$$ Since the eigenvalues of the projection matrices $\Pi_{k}=I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}$ are either zero or one, and $\|\Sigma\|_{2}\leq C$ for all $d$, the norms of $\Pi_{k}\Sigma\Pi_{\ell}$ are uniformly bounded. The desired result then follows from Lemma S.6.8. ∎ Appendix S.3 Proofs for Section 3 This section proves the results in Section 3. S.3.1 Proof of Lemma 3.1 Lemma 3.1 follows from the following stronger result. Lemma S.3.1. When $\gamma<\theta$, $\gamma m_{1}(0)$ is the unique positive solution to equation (3.1). Proof of Lemma S.3.1. 
We first prove that $\gamma m_{1}(0)$ is a solution to (3.1). Because $\gamma<\theta$, we apply Lemma S.1.2 and obtain $$\displaystyle 1$$ $$\displaystyle=m_{1}(0)\cdot\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right],$$ which is equivalent to $$\displaystyle 1$$ $$\displaystyle=\frac{1}{\gamma}\left(1-\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(0)}\right]\right).$$ Comparing the equality above with equation (3.1), we conclude that $\gamma m_{1}(0)$ is a solution to equation (3.1). We then prove the uniqueness of the solution on the positive half-line. Let $$\displaystyle f(x)=\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+wx}\right].$$ Then $f(x)$ is a continuous and decreasing function on the interval $[0,+\infty)$. Additionally, we have $f(0)=1>1-\gamma$ and $\lim_{x\rightarrow+\infty}f(x)=\mu_{w}(\{0\})=1-\theta<1-\gamma$. Therefore, $\gamma m_{1}(0)$ is the unique positive solution to equation (3.1). ∎ S.3.2 Proof of Theorem 3.2 Proof of Theorem 3.2. Theorem 3.2 is a special case of Theorem 4.2. In the underparameterized regime, the limiting risk in Theorem 4.2 is independent of the covariance matrix and thus is the same in the isotropic case. It suffices to derive the limiting risk of the sketched estimator in the overparameterized regime. Recall that $\delta_{x}$ is the Dirac measure at $x$.
By setting $\Sigma=I$ and $H=\delta_{1}$, the self-consistent equation (4.1) reduces to $$\displaystyle v(z)=\left(-z+\frac{\gamma}{\theta}\frac{1}{1+v(z)}\right)^{-1},$$ which further gives that $v(0)=\theta/(\gamma-\theta)$ and $$\displaystyle v(z)=\frac{\gamma/\theta-1-z-\sqrt{(z-\gamma/\theta+1)^{2}-4z}}{2z},\quad z<0.$$ Furthermore, by taking the derivative of $v(z)$, we have $$\displaystyle v^{\prime}(z)=\frac{\left(-1-\frac{z-\gamma/\theta-1}{\sqrt{(z-\gamma/\theta+1)^{2}-4z}}\right)2z-2\left(\gamma/\theta-1-z-\sqrt{(z-\gamma/\theta+1)^{2}-4z}\right)}{4z^{2}}.$$ By applying L’Hôpital’s rule, we get $$\displaystyle v^{\prime}(0)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{2\left(-1-\frac{z-\gamma/\theta-1}{\sqrt{(z-\gamma/\theta+1)^{2}-4z}}\right)+2z\left(-\frac{1}{\sqrt{(z-\gamma/\theta+1)^{2}-4z}}+\frac{\left(z-\gamma/\theta-1\right)^{2}}{\left((z-\gamma/\theta+1)^{2}-4z\right)^{3/2}}\right)+2+2\frac{z-\gamma/\theta-1}{\sqrt{(z-\gamma/\theta+1)^{2}-4z}}}{8z}$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{1}{4}\left(-\frac{1}{\sqrt{(z-\gamma/\theta+1)^{2}-4z}}+\frac{\left(z-\gamma/\theta-1\right)^{2}}{\left((z-\gamma/\theta+1)^{2}-4z\right)^{3/2}}\right)$$ $$\displaystyle=\frac{\theta}{4(\gamma-\theta)}\left(-1+\frac{\theta^{2}}{(\gamma-\theta)^{2}}\frac{(\gamma+\theta)^{2}}{\theta^{2}}\right)$$ $$\displaystyle=\frac{\theta^{2}\gamma}{(\gamma-\theta)^{3}}.$$ Therefore, the limiting risk satisfies almost surely that $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle\rightarrow\frac{r^{2}\theta}{\gamma v(0)}+\sigma^{2}\left(\frac{v^{\prime}(0)}{v(0)^{2}}-1\right)$$ $$\displaystyle=\frac{r^{2}\theta}{\gamma}\frac{\gamma-\theta}{\theta}+\sigma^{2}\left(\frac{\theta^{2}\gamma}{(\gamma-\theta)^{3}}\frac{(\gamma-\theta)^{2}}{\theta^{2}}-1\right)$$ $$\displaystyle=r^{2}\frac{\gamma-\theta}{\gamma}+\sigma^{2}\frac{\theta}{\gamma-\theta}.$$ ∎ S.3.3 Proof of Corollary 3.3 Proof 
of Corollary 3.3. Recall that $\theta=1-\mu_{w}(\{0\})$. We rewrite equation (3.1) as: $$\displaystyle 1-\gamma$$ $$\displaystyle=\int\frac{1}{1+ct}\mu_{w}(dt)$$ $$\displaystyle=1-\theta+\int\frac{1}{1+ct}1\left({t\neq 0}\right)\mu_{w}(dt).$$ Then, we obtain $$\lim_{\gamma/\theta\nearrow 1}\int\frac{1}{1+ct}1\left({t\neq 0}\right)\mu_{w}(dt)=0.$$ Since $\mu_{w}$ satisfies Assumption 3 and $c$ is positive, we have $$0\leq\lim_{\gamma/\theta\nearrow 1}\int\frac{1}{(1+ct)^{2}}1\left({t\neq 0}\right)\mu_{w}(dt)\leq\lim_{\gamma/\theta\nearrow 1}\int\frac{1}{1+ct}1\left({t\neq 0}\right)\mu_{w}(dt)=0.$$ Therefore, $$\displaystyle\lim_{\gamma/\theta\nearrow 1}f(\gamma)$$ $$\displaystyle=\lim_{\gamma/\theta\nearrow 1}\int\frac{1}{(1+ct)^{2}}\mu_{w}(dt)$$ $$\displaystyle=\lim_{\gamma/\theta\nearrow 1}\left(1-\theta+\int\frac{1}{(1+ct)^{2}}1\left({t\neq 0}\right)\mu_{w}(dt)\right)$$ $$\displaystyle=1-\gamma.$$ ∎ S.3.4 Proof of Corollary 3.4 Proof of Corollary 3.4. Following Theorem 3.2, it suffices to calculate $f(\gamma)=\int\frac{1}{(1+ct)^{2}}d\mu_{w}(t)$ for each case. (i) Full-sample. It is obvious that $\mu_{w}(\{1\})=1$. From equation (3.1), we have $$\displaystyle 1-\gamma=\int\frac{1}{1+ct}\mu_{w}(dt)=\frac{1}{1+c}.$$ Therefore, $$\displaystyle f(\gamma)$$ $$\displaystyle=\int\frac{1}{(1+ct)^{2}}\mu_{w}(dt)$$ $$\displaystyle=\frac{1}{(1+c)^{2}}$$ $$\displaystyle=(1-\gamma)^{2}.$$ (ii) Bernoulli multipliers. Since the multipliers $w_{i,j}$ are i.i.d, by the Glivenko–Cantelli theorem, $\mu_{w}\sim\text{Bernoulli}(\theta)$. Then, from equation (S.1.1), $$1-\gamma=1-\theta+\frac{\theta}{1+c}.$$ Therefore, $$\displaystyle f(\gamma)$$ $$\displaystyle=\int\frac{1}{(1+ct)^{2}}\mu_{w}(dt)$$ $$\displaystyle=1-\theta+\theta\frac{1}{(1+c)^{2}}$$ $$\displaystyle=1-\theta+\theta(1-\gamma/\theta)^{2}.$$ (iii) Multinomial multipliers. By Lemma 2.2, we have $\mu_{w}=\text{Poisson}(1)$. 
Then, from equation (3.1), $$1-\gamma=1-(1-1/e)+\int\frac{1}{1+ct}1\left({t\neq 0}\right)\mu_{w}(dt).$$ Therefore, $$\displaystyle f(\gamma)$$ $$\displaystyle=\int\frac{1}{(1+ct)^{2}}\mu_{w}(dt)$$ $$\displaystyle=1-(1-1/e)+\int\frac{1}{(1+ct)^{2}}1\left({t\neq 0}\right)\mu_{w}(dt)$$ $$\displaystyle\geq 1-(1-1/e)+\left(\int\frac{1}{1+ct}1\left({t\neq 0}\right)\mu_{w}(dt)\right)^{2}/\int 1\left({t\neq 0}\right)\mu_{w}(dt)$$ $$\displaystyle=1-(1-1/e)+(1-1/e)\left(1-\gamma/(1-1/e)\right)^{2},$$ where the third line follows from Hölder’s inequality. (iv) Jackknife sketching. This is the same as item (i). ∎ S.3.5 Proof of Corollary 3.5 Proof of Corollary 3.5. For any probability measure $\mu_{w}$ that satisfies Assumption 3 with a sampling rate of $\theta$, from equation (3.1), $$1-\gamma=1-\theta+\int\frac{1}{1+ct}1\left({t\neq 0}\right)\mu_{w}(dt).$$ Then, $$\displaystyle f(\gamma)$$ $$\displaystyle=\int\frac{1}{(1+ct)^{2}}\mu_{w}(dt)$$ $$\displaystyle=1-\theta+\int\frac{1}{(1+ct)^{2}}1\left({t\neq 0}\right)\mu_{w}(dt)$$ $$\displaystyle\geq 1-\theta+\left(\int\frac{1}{1+ct}1\left({t\neq 0}\right)\mu_{w}(dt)\right)^{2}/\int 1\left({t\neq 0}\right)\mu_{w}(dt)$$ $$\displaystyle=1-\theta+\theta\left(1-\gamma/\theta\right)^{2},$$ where the third line follows from Hölder’s inequality. The inequality becomes an equality if and only if there exist real numbers $\alpha,\beta\geq 0$, not both zero, such that $$\alpha\frac{1}{(1+ct)^{2}}1\left({t\neq 0}\right)=\beta 1\left({t\neq 0}\right)\quad\mu_{w}-\rm{almost~{}everywhere.}$$ Therefore $\mu_{w}=(1-\theta)\delta_{0}+\theta\delta_{1}$ satisfies the above equation and minimizes the limiting risk of $\widehat{\beta}^{1}$ in Theorem 3.2 among all choices of multipliers satisfying Assumption 3 with $B=1$ and a sampling rate of $\theta$. ∎ S.3.6 Proof of Theorem 3.6 We first prove Theorem 3.6 directly and then prove it as a special case of Theorem 4.3. S.3.6.1 A direct proof Proof of Theorem 3.6.
For notational simplicity, we drop the explicit dependence on $\{X,\mathcal{W},\widehat{\beta}^{B}\}$, and define $$\displaystyle B_{k,\ell}$$ $$\displaystyle=\frac{r^{2}}{d}\textrm{tr}\left(\left(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\right)\left(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\right)\right),$$ $$\displaystyle V_{k,\ell}$$ $$\displaystyle=\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\right).$$ Applying Lemma 2.1 and Lemma 2.3, we can rewrite the out-of-sample prediction risk as $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}\frac{1}{B^{2}}\sum_{k,\ell}^{B}\left(B_{k,\ell}+V_{k,\ell}\right).$$ The terms with $k=\ell$ correspond to the bias and variance terms for the sketched least square estimators. According to Theorem 3.2, these terms converge almost surely to constants for any $\gamma/\theta\neq 1$ as $n,d\rightarrow+\infty$. Therefore, under Assumption 3, taking the limit $B\rightarrow\infty$, we obtain $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\lim_{n,d\rightarrow+\infty}\left(B_{k,\ell}+V_{k,\ell}\right){\quad{\rm a.s.}}$$ (S.3.1) for $\gamma/\theta\neq 1$ and $k\neq\ell$. In what follows, we will compute the limits $\lim_{n\to\infty}V_{k,\ell}$ using Lemma S.3.2 for the underparameterized regime and Lemma S.3.3 for the overparameterized regime for $k\neq\ell$. Additionally, we will determine the limit $\lim_{n\to\infty}B_{k,\ell}$ in Lemma S.3.4 for $k\neq\ell$. Lemma S.3.2 (Underparameterized variance under isotropic features). Assume Assumptions 1-3, $\Sigma=I$, and $\gamma<\theta$. 
For $k\neq\ell$, the variance term $V_{k,\ell}$ of the estimator $\widehat{\beta}^{B}$ satisfies $$\lim_{n,d\rightarrow+\infty}V_{k,\ell}=\sigma^{2}\frac{\gamma}{1-\gamma}{\quad{\rm a.s.}}$$ Lemma S.3.3 (Overparameterized variance under isotropic features). Assume Assumptions 1-3, $\Sigma=I$, and $\gamma>\theta$. For $k\neq\ell$, the variance term $V_{k,\ell}$ of the estimator $\widehat{\beta}^{B}$ satisfies $$\lim_{n,d\rightarrow+\infty}V_{k,\ell}=\sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}}{\quad{\rm a.s.}}$$ Lemma S.3.4 (Bias under isotropic features). Assume Assumptions 1-4 and $\Sigma=I$. For $k\neq\ell$, the bias term $B_{k,\ell}$ of the estimator $\widehat{\beta}^{B}$ satisfies $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}B_{k,\ell}(\widehat{\beta}^{B})=\begin{cases}0,&\gamma<\theta\\ r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)},&\gamma>\theta\end{cases}{\quad{\rm a.s.}}$$ Putting the above lemmas together finishes the proof. ∎ S.3.6.2 As the isotropic case of Theorem 4.3 Proof of Theorem 3.6. In this subsection, we derive the results for isotropic features as a special case of Theorem 4.3. In the underparameterized regime $\gamma<\theta$, the risk is independent of the covariance matrix and thus is the same as that for the correlated case. Therefore, we focus on the overparameterized case, where $\gamma>\theta$. Since $\Sigma=I$, we obtain $H=\delta_{1}$. Equation (4.1) at $z=0$ simplifies to $$v(0)=\left(\frac{\gamma}{\theta}\frac{1}{1+v(0)}\right)^{-1}.$$ This equation has a unique positive solution given by $v(0)=\theta/(\gamma-\theta)>0$. Consequently, we obtain $k(0)=(1-\theta)v(0)=\theta(1-\theta)/(\gamma-\theta)$. Similarly, since $\Sigma=I$, equation (4.2) simplifies to $$\tilde{v}(z)=\left(-z+\frac{\gamma}{\theta^{2}}\frac{t}{1+\tilde{v}(z)t}\right)^{-1},$$ where $t=(1+k(0))^{-1}=(\gamma-\theta)/(\gamma-\theta^{2})$. 
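The three closed forms just obtained, $v(0)=\theta/(\gamma-\theta)$, $k(0)=(1-\theta)v(0)$, and $t=(1+k(0))^{-1}=(\gamma-\theta)/(\gamma-\theta^{2})$, can be verified numerically (a side remark, not part of the proof); the sketch below solves the $z=0$ self-consistent equation by fixed-point iteration, with arbitrary test values $\gamma>\theta$.

```python
# Check (illustration only) that iterating v <- theta*(1+v)/gamma, i.e. the
# z = 0 self-consistent equation v = ((gamma/theta)/(1+v))^{-1}, converges
# to the claimed closed form v(0) = theta/(gamma-theta) when gamma > theta.
gamma, theta = 2.0, 0.5            # arbitrary overparameterized test values

v = 1.0                            # any positive starting point works:
for _ in range(200):               # the map is a contraction (rate theta/gamma)
    v = theta * (1.0 + v) / gamma

k0 = (1.0 - theta) * v             # k(0) = (1 - theta) v(0)
t = 1.0 / (1.0 + k0)               # t = (1 + k(0))^{-1}

assert abs(v - theta / (gamma - theta)) < 1e-12
assert abs(t - (gamma - theta) / (gamma - theta ** 2)) < 1e-12
```

Here $v(0)=1/3$, $k(0)=1/6$, and $t=6/7$, in agreement with the formulas above.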
After some calculations, we obtain that the unique positive solution to equation (4.2) is $\tilde{v}(0)=\theta^{2}/(\gamma-\theta)$ when $z=0$, and $$\tilde{v}(z)=\frac{(\gamma-\theta^{2})t/\theta^{2}-z-\sqrt{\left(z-{(\gamma-\theta^{2})}t/\theta^{2}\right)^{2}-4zt}}{2zt},~{}\text{when}~{}z<0.$$ Then, by taking the derivative of $\tilde{v}(z)$, we have $$\displaystyle\tilde{v}^{\prime}(z)=\left(\left(-1-\frac{z-\frac{\gamma-\theta^{2}}{\theta^{2}}t-2t}{\sqrt{(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt}}\right)2z-2\left(\frac{\gamma-\theta^{2}}{\theta^{2}}t-z-\sqrt{(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt}\right)\right)/(4z^{2}t).$$ By applying L’Hôpital’s rule, we obtain $$\displaystyle\tilde{v}^{\prime}(0)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{2\left(-1-\frac{z-\frac{\gamma-\theta^{2}}{\theta^{2}}t-2t}{\sqrt{(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt}}\right)+2z\left(-\frac{1}{\sqrt{(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt}}+\frac{\left(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t-2t\right)^{2}}{\left((z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt\right)^{3/2}}\right)+2+2\frac{z-\frac{\gamma-\theta^{2}}{\theta^{2}}t-2t}{\sqrt{(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt}}}{8zt}$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{1}{4t}\left(-\frac{1}{\sqrt{(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt}}+\frac{\left(z-\frac{\gamma-\theta^{2}}{\theta^{2}}t-2t\right)^{2}}{\left((z-\frac{\gamma-\theta^{2}}{\theta^{2}}t)^{2}-4zt\right)^{3/2}}\right)$$ $$\displaystyle=\frac{1}{4t}\left(-\frac{\theta^{2}}{(\gamma-\theta^{2})t}+\frac{\theta^{6}}{(\gamma-\theta^{2})^{3}t^{3}}\frac{(\gamma+\theta^{2})^{2}}{\theta^{4}}t^{2}\right)$$ $$\displaystyle=\frac{(\gamma-\theta^{2})\theta^{2}}{4(\gamma-\theta)^{2}}\left(-1+\frac{(\gamma+\theta^{2})^{2}}{(\gamma-\theta^{2})^{2}}\right)$$ $$\displaystyle=\frac{\gamma\theta^{4}}{(\gamma-\theta)^{2}(\gamma-\theta^{2})}.$$ Therefore, the prediction risk satisfies, almost surely,
$$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$$ $$\displaystyle=\frac{r^{2}\theta}{\gamma v(0)}-\frac{r^{2}(1-\theta)}{\gamma v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right)+\sigma^{2}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right)$$ $$\displaystyle=\frac{r^{2}\theta}{\gamma}\frac{\gamma-\theta}{\theta}-\frac{r^{2}(1-\theta)}{\gamma}\frac{\gamma-\theta}{\theta}\left(\frac{\gamma\theta^{4}}{(\gamma-\theta)^{2}(\gamma-\theta^{2})}\frac{(\gamma-\theta)^{2}}{\theta^{4}}-1\right)$$ $$\displaystyle\qquad+\sigma^{2}\left(\frac{\gamma\theta^{4}}{(\gamma-\theta)^{2}(\gamma-\theta^{2})}\frac{(\gamma-\theta)^{2}}{\theta^{4}}-1\right)$$ $$\displaystyle=r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)}+\sigma^{2}\frac{\theta^{2}}{\left(\gamma-\theta^{2}\right)}.$$ Thus Theorem 4.3 reduces to Theorem 3.6 when $\Sigma=I$. ∎ S.3.7 Proof of Lemma 3.7 Proof of Lemma 3.7. For $\gamma<\theta$, we obviously have $$\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\sigma^{2}\frac{\gamma}{1-\gamma}\leq\sigma^{2}\frac{\theta}{1-\theta}.$$ In the case where $\gamma>\theta$, we first calculate the derivative of the limiting risk with respect to $\gamma$: $$\displaystyle\frac{d}{d\gamma}\left(r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)}+\sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}}\right)=r^{2}\frac{(\gamma-\theta)((2\theta-\theta^{2})\gamma-\theta^{3})}{\gamma^{2}(\gamma-\theta^{2})^{2}}-\sigma^{2}\frac{\gamma^{2}\theta^{2}}{\gamma^{2}(\gamma-\theta^{2})^{2}}=:\frac{f(\gamma)}{\gamma^{2}(\gamma-\theta^{2})^{2}}.$$ It can be shown that the function $$\displaystyle f(\gamma)=r^{2}(\gamma-\theta)((2\theta-\theta^{2})\gamma-\theta^{3})-\sigma^{2}\gamma^{2}\theta^{2}$$ satisfies $f(\theta)=-\sigma^{2}\theta^{4}<0$ and has at most one root on $[\theta,+\infty)$.
Thus, $\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})$ is bounded from above by the larger of its values at $\gamma=\theta$ and $\gamma=+\infty$. The result then follows since $\lim_{\gamma\rightarrow+\infty}\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=r^{2}$. ∎ S.3.8 Proof of Lemma 3.8 Proof of Lemma 3.8. First note that $\Sigma=I$. We begin by writing $$\displaystyle\mathbb{E}\left[\|\widehat{\beta}^{B}-\widehat{\beta}_{\lambda}\|_{2}^{2}\;\middle|\;X,\,\mathcal{W},\,\beta\right]$$ $$\displaystyle=\mathbb{E}\left[\|\widehat{\beta}^{B}-\beta+\beta-\widehat{\beta}_{\lambda}\|_{2}^{2}\;\middle|\;X,\,\mathcal{W},\,\beta\right]$$ $$\displaystyle=\mathbb{E}\left[\|\widehat{\beta}^{B}-\beta\|_{2}^{2}\;\middle|\;X,\,\mathcal{W},\,\beta\right]-2\mathbb{E}\left[(\widehat{\beta}^{B}-\beta)^{\top}(\widehat{\beta}_{\lambda}-\beta)\;\middle|\;X,\,\mathcal{W},\,\beta\right]$$ $$\displaystyle\qquad+\mathbb{E}\left[\|\widehat{\beta}_{\lambda}-\beta\|_{2}^{2}\;\middle|\;X,\beta\right],$$ where the first and the third terms represent the risks of the bagged least square estimator and the ridge regression estimator, respectively. We decompose the second term of the above equation as $$\displaystyle\mathbb{E}\left[(\widehat{\beta}^{B}-\beta)^{\top}(\widehat{\beta}_{\lambda}-\beta)\;\middle|\;X,\,\mathcal{W},\,\beta\right]$$ $$\displaystyle=\frac{r^{2}}{B}\sum_{k=1}^{B}\beta^{\top}\left(X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X)^{+}-I\right)\left((X^{\top}X+n\lambda I)^{-1}X^{\top}X-I\right)\beta$$ $$\displaystyle\qquad+\frac{\sigma^{2}}{B}\sum_{k=1}^{B}\textrm{tr}\left((X^{\top}X+n\lambda I)^{-1}X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X)^{+}\right)$$ $$\displaystyle=:\frac{1}{B}\sum_{k=1}^{B}\left(B_{k,\lambda}+V_{k,\lambda}\right),$$ (S.3.2) which follows from the same argument as in the proof of Lemma 2.1 and is thus omitted.
Under Assumption 4, the bias term $B_{k,\lambda}$ satisfies $$\lim_{n,d\rightarrow+\infty}B_{k,\lambda}=\lim_{n,d\rightarrow+\infty}\textrm{tr}\left(\left(X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X)^{+}-I\right)\left((X^{\top}X+n\lambda I)^{-1}X^{\top}X-I\right)\right)/d{\quad{\rm a.s.}}$$ which follows from the same argument as in the proof of Lemma 2.3. Let $\lambda=(1-\theta)(\gamma/\theta-1)\vee 0$. Then we can prove this lemma by showing that the ridge regression estimator and the bagged estimator share the same limiting risk (Lemma S.3.5), that $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{k,\lambda}=\lim_{B\rightarrow\infty}\lim_{n,d\rightarrow+\infty}V_{X,\,\mathcal{W},\,\beta}{\quad{\rm a.s.}}$$ (Lemma S.3.6), and that $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{k,\lambda}=\lim_{B\rightarrow\infty}\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}{\quad{\rm a.s.}}$$ (Lemma S.3.7). We collect these lemmas below, with their proofs deferred until later. Lemma S.3.5. Let $\lambda=(1-\theta)(\gamma/\theta-1)\vee 0$. Assume Assumptions 1-2, Assumption 4, and $\Sigma=I$. Then, almost surely, $$\lim_{n,d\rightarrow+\infty}\mathbb{E}\left[\|\widehat{\beta}_{\lambda}-\beta\|_{2}^{2}\;\middle|\;X,\beta\right]=\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<\theta,\\ r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)}+\sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}},&\gamma>\theta.\end{cases}$$ Lemma S.3.6. Let $\lambda=(1-\theta)(\gamma/\theta-1)\vee 0$. Assume Assumptions 1-3 and $\Sigma=I$. Then, almost surely $$\lim_{n,d\rightarrow+\infty}V_{k,\lambda}=\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<\theta,\\ \sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}},&\gamma>\theta.\end{cases}$$ Lemma S.3.7. Let $\lambda=(1-\theta)(\gamma/\theta-1)\vee 0$. Assume Assumptions 1-4 and $\Sigma=I$.
Then, almost surely $$\lim_{n,d\rightarrow+\infty}B_{k,\lambda}=\begin{cases}0,&\gamma<\theta,\\ r^{2}\frac{(\gamma-\theta)^{2}}{\gamma\left(\gamma-\theta^{2}\right)},&\gamma>\theta.\end{cases}$$ ∎ S.3.9 Supporting lemmas This subsection proves the supporting lemmas used in previous subsections, namely Lemmas S.3.2-S.3.7. S.3.9.1 Proof of Lemma S.3.2 Proof of Lemma S.3.2. We assume $\sigma^{2}=1$ without loss of generality. Let $\Omega_{n}:=\{\omega\in\Omega:\lambda_{\min}(\widehat{\Sigma})(\omega)>0\}$. Using Lemma S.1.5 and Lemma S.1.6, we obtain $$\mathbb{P}\left(\lim_{n,d\rightarrow+\infty}\Omega_{n}\right)=1,$$ when $\gamma<\theta$. Thus, without loss of generality, we shall assume $\lambda_{\min}(\widehat{\Sigma}_{k})>0$ and $\lambda_{\min}(\widehat{\Sigma}_{\ell})>0$. We begin by rewriting the variance term as $$\displaystyle V_{k,\ell}\left(\widehat{\beta}^{B}\right)=$$ $$\displaystyle\textrm{tr}\left((X^{\top}S_{k}S_{k}X)^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X(X^{\top}S_{\ell}S_{\ell}X)^{+}\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{n^{2}}\sum_{i=1}^{n}w_{k,i}w_{\ell,i}x_{i}^{\top}(X^{\top}S_{k}S_{k}X/n)^{-1}(X^{\top}S_{\ell}S_{\ell}X/n)^{-1}x_{i}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{n^{2}}\sum_{i=1}^{n}w_{k,i}w_{\ell,i}\frac{x_{i}^{\top}\widehat{\Sigma}_{k,-i}^{-1}\widehat{\Sigma}_{\ell,-i}^{-1}x_{i}}{(1+w_{k,i}x_{i}^{\top}\widehat{\Sigma}_{k,-i}^{-1}x_{i}/n)(1+w_{\ell,i}x_{i}^{\top}\widehat{\Sigma}_{\ell,-i}^{-1}x_{i}/n)},$$ where the last line uses the Sherman–Morrison formula twice.
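The double application of the Sherman–Morrison formula above is an exact finite-sample identity and can be checked numerically. The following minimal sketch (sizes and the zero-one multipliers are illustrative choices, not the paper's general setting) verifies one summand of the rewritten variance term:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 8
theta = 0.7
X = rng.standard_normal((n, d))
# zero-one multipliers w_{k,i}, w_{l,i}, kept with probability theta
wk = rng.binomial(1, theta, size=n).astype(float)
wl = rng.binomial(1, theta, size=n).astype(float)

Sig_k = X.T @ (wk[:, None] * X) / n   # \widehat{\Sigma}_k = X^T S_k S_k X / n
Sig_l = X.T @ (wl[:, None] * X) / n

i = int(np.flatnonzero(wk * wl)[0])   # an index retained by both sketches
xi = X[i]
# leave-one-out matrices \widehat{\Sigma}_{k,-i}, \widehat{\Sigma}_{l,-i}
Sig_k_mi = Sig_k - wk[i] * np.outer(xi, xi) / n
Sig_l_mi = Sig_l - wl[i] * np.outer(xi, xi) / n

inv = np.linalg.inv
lhs = xi @ inv(Sig_k) @ inv(Sig_l) @ xi
rhs = (xi @ inv(Sig_k_mi) @ inv(Sig_l_mi) @ xi
       / ((1 + wk[i] * xi @ inv(Sig_k_mi) @ xi / n)
          * (1 + wl[i] * xi @ inv(Sig_l_mi) @ xi / n)))
assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```

The check exploits that $\widehat{\Sigma}_{k}^{-1}x_{i}=\widehat{\Sigma}_{k,-i}^{-1}x_{i}/(1+w_{k,i}x_{i}^{\top}\widehat{\Sigma}_{k,-i}^{-1}x_{i}/n)$, applied once for each of the two resolvents.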
Recall that $m_{1}(0)$ and $m_{2}(0)$, from Lemmas S.1.2 and S.1.3, are solutions to the equations $$\displaystyle m_{1}(0)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right]=1,$$ $$\displaystyle m_{2}(0)\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(0)}\right]\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right]=m_{1}(0).$$ Using a similar argument as in the proof of Lemma S.4.1, and applying Lemma S.1.2, Lemma S.1.7, Lemma S.6.9, and Lemma S.1.3, we obtain $$\lim_{n,d\rightarrow+\infty}V_{k,\ell}\left(\widehat{\beta}^{B}\right)=\lim_{n,d\rightarrow+\infty}\frac{\gamma}{n}\sum_{i=1}^{n}w_{k,i}w_{\ell,i}\frac{m_{2}(0)}{(1+\gamma w_{k,i}m_{1}(0))(1+\gamma w_{\ell,i}m_{1}(0))}{\quad\rm a.s.}$$ Because $m_{1}(0)$ is positive by Lemma S.3.1, we have $$\frac{w_{k,i}w_{\ell,i}}{(1+\gamma w_{k,i}m_{1}(0))(1+\gamma w_{\ell,i}m_{1}(0))}\leq\frac{1}{(\gamma m_{1}(0))^{2}}<\infty.$$ Then, using Assumption 3 and the strong law of large numbers, we have $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{k,\ell}\left(\widehat{\beta}^{B}\right)$$ $$\displaystyle\overset{{\rm a.s.}}{=}\gamma m_{2}(0)\left(\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right]\right)^{2}$$ $$\displaystyle=\gamma m_{1}(0)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right]/\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(0)}\right]$$ $$\displaystyle=\gamma/\left(1-\gamma m_{1}(0)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right]\right)$$ $$\displaystyle=\frac{\gamma}{1-\gamma}.$$ ∎ S.3.9.2 Proof of Lemma S.3.3 Proof of Lemma S.3.3. We begin by expressing $V_{k,\ell}(\widehat{\beta}^{B})$ in terms of sketching matrices $D_{k}$ and $D_{\ell}$ as defined in equation (S.3.11) for the corresponding sketch matrices $S_{k}$ and $S_{\ell}$. 
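The fixed-point computation concluding the proof of Lemma S.3.2 above can be made fully explicit for Bernoulli($\theta$) zero-one multipliers (an illustrative specialization, not the paper's general multiplier law): then $\mathbb{E}_{\mu_{w}}[w/(1+\gamma wm)]=\theta/(1+\gamma m)$, the fixed point is $m_{1}(0)=1/(\theta-\gamma)$, and the limit $\gamma/(1-\gamma)$ can be recovered numerically:

```python
theta, g = 0.8, 0.5   # g plays the role of gamma; g < theta (underparameterized)

def Ew(m):            # E_{mu_w}[ w / (1 + g*w*m) ] for Bernoulli(theta) weights
    return theta / (1 + g * m)

def E1(m):            # E_{mu_w}[ 1 / (1 + g*w*m) ]
    return (1 - theta) + theta / (1 + g * m)

# solve m * Ew(m) = 1 by bisection; the left-hand side is increasing in m
lo, hi = 0.0, 1e8
for _ in range(200):
    mid = (lo + hi) / 2
    if mid * Ew(mid) < 1.0:
        lo = mid
    else:
        hi = mid
m1 = (lo + hi) / 2
assert abs(m1 - 1 / (theta - g)) < 1e-6   # closed form m1(0) = 1/(theta - gamma)

m2 = m1 / (E1(m1) * Ew(m1))               # from m2 * E1 * Ew = m1
limit = g * m2 * Ew(m1) ** 2              # the variance limit computed above
assert abs(limit - g / (1 - g)) < 1e-6    # equals gamma / (1 - gamma)
```

With these choices $m_{1}(0)=10/3$, $\mathbb{E}_{\mu_{w}}[w/(1+\gamma wm_{1}(0))]=\theta-\gamma$, and $\mathbb{E}_{\mu_{w}}[1/(1+\gamma wm_{1}(0))]=1-\gamma$, so the chain of equalities in the display above can be traced step by step.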
Applying Lemma S.3.8, we obtain $$\displaystyle V_{k,\ell}\left(\widehat{\beta}^{B}\right)=$$ $$\displaystyle\sigma^{2}\textrm{tr}\left((X^{\top}S_{k}S_{k}X)^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X(X^{\top}S_{\ell}S_{\ell}X)^{+}\right)$$ $$\displaystyle=$$ $$\displaystyle\sigma^{2}\textrm{tr}\left((X^{\top}D_{k}^{\top}D_{k}X)^{+}X^{\top}(D_{k})^{2}(D_{\ell})^{2}X(X^{\top}D_{\ell}^{\top}D_{\ell}X)^{+}\right).$$ Using the above equality, we shall further assume that the multipliers $w_{k,i}$ and $w_{\ell,i}$, $1\leq i\leq n$, are either one or zero and $\sigma^{2}=1$. Applying the identity (S.4.4), we can rewrite $V_{k,\ell}(\widehat{\beta}^{B})$ as $$\displaystyle V_{k,\ell}\left(\widehat{\beta}^{B}\right)=$$ $$\displaystyle\lim_{z\rightarrow 0^{-}}\frac{1}{n^{2}}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)$$ $$\displaystyle=$$ $$\displaystyle\lim_{z\rightarrow 0^{-}}\frac{1}{n^{2}}\sum_{i=1}^{n}w_{k,i}w_{\ell,i}x_{i}^{\top}(X^{\top}S_{k}S_{k}X/n-zI)^{-1}(X^{\top}S_{\ell}S_{\ell}X/n-zI)^{-1}x_{i}$$ $$\displaystyle=$$ $$\displaystyle\lim_{z\rightarrow 0^{-}}\frac{1}{n^{2}}\sum_{i=1}^{n}w_{k,i}w_{\ell,i}\frac{x_{i}^{\top}(\widehat{\Sigma}_{k,-i}-zI)^{-1}(\widehat{\Sigma}_{\ell,-i}-zI)^{-1}x_{i}}{(1+w_{k,i}x_{i}^{\top}(\widehat{\Sigma}_{k,-i}-zI)^{-1}x_{i}/n)(1+w_{\ell,i}x_{i}^{\top}(\widehat{\Sigma}_{\ell,-i}-zI)^{-1}x_{i}/n)},$$ where, in the last line, we applied the Sherman–Morrison formula twice. By using similar arguments as in the proof of Lemma S.3.2, and applying Lemma S.1.2, Lemma S.1.7, Lemma S.6.9, and Lemma S.1.3, we obtain $$\lim_{n,d\rightarrow+\infty}\frac{1}{n^{2}}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)=\gamma m_{2}(z)\left(\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]\right)^{2}$$ almost surely.
According to Lemma S.1.3, for any $z<0$, $m_{2}(z)$ satisfies $$m_{2}(z)\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(z)}\right]\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-zm_{2}(z)=m_{1}(z),$$ (S.3.3) where, according to Lemma S.1.2, $m_{1}(z)$ is the unique positive solution to $$m_{1}(z)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma m_{1}(z)w}\right]-zm_{1}(z)=1$$ (S.3.4) on $z<0$. When $\gamma>\theta$, Lemma S.1.5 implies that the sketched covariance matrix $X^{\top}S_{k}S_{k}X$ is almost surely singular, leading to $\lim_{z\rightarrow 0^{-}}m_{1}(z)=+\infty$. Taking the limit as $z\rightarrow 0^{-}$ in equation (S.3.4), we obtain $$\displaystyle\lim_{z\rightarrow 0^{-}}-zm_{1}(z)$$ $$\displaystyle=1-\lim_{z\rightarrow 0^{-}}m_{1}(z)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]$$ $$\displaystyle=1-\lim_{z\rightarrow 0^{-}}\frac{1}{\gamma}\left(1-\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(z)}\right]\right)$$ $$\displaystyle=\frac{\gamma-\theta}{\gamma},$$ and $$\displaystyle\lim_{z\rightarrow 0^{-}}\frac{1}{z}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]=\lim_{z\rightarrow 0^{-}}\frac{1}{zm_{1}(z)}+1=-\frac{\theta}{\gamma-\theta}.$$ (S.3.5) Combining the above calculations with equation (S.3.3) and multiplying both sides by $z$, we obtain $$\displaystyle\lim_{z\rightarrow 0^{-}}z^{2}m_{2}(z)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}zm_{2}(z)\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(z)}\right]\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-zm_{1}(z)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}z^{2}m_{2}(z)\left(1-\theta+o(1)\right)\frac{1}{z}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]+\frac{\gamma-\theta}{\gamma}$$ $$\displaystyle=-\frac{\theta(1-\theta)}{\gamma-\theta}\,\lim_{z\rightarrow 0^{-}}z^{2}m_{2}(z)+\frac{\gamma-\theta}{\gamma},$$ (S.3.6) which then gives $$\displaystyle\lim_{z\rightarrow
0^{-}}z^{2}m_{2}(z)=\frac{(\gamma-\theta)^{2}}{\gamma(\gamma-\theta^{2})}.$$ (S.3.7) Therefore, we obtain $$\displaystyle\lim_{z\rightarrow 0^{-}}\gamma m_{2}(z)\left(\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]\right)^{2}$$ $$\displaystyle=$$ $$\displaystyle\gamma\frac{(\gamma-\theta)^{2}}{\gamma(\gamma-\theta^{2})}\left(-\frac{\theta}{\gamma-\theta}\right)^{2}$$ $$\displaystyle=$$ $$\displaystyle\frac{\theta^{2}}{\gamma-\theta^{2}}.$$ Finally, we prove that we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$ by applying the Arzela-Ascoli theorem and the Moore-Osgood theorem. To accomplish this, we establish that $\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)/n^{2}$ and its derivative are uniformly bounded for $z<0$. We assume that the diagonal elements of $S_{k}$ and $S_{\ell}$ are either zero or one. For $z<0$, we have the following inequality $$\frac{1}{n^{2}}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)\leq\frac{d}{n}\cdot\frac{\lambda_{\max}\big{(}X^{\top}S_{k}^{2}S_{\ell}^{2}X/n\big{)}}{\lambda^{+}_{\min}\big{(}\widehat{\Sigma}_{k}\big{)}\lambda^{+}_{\min}\big{(}\widehat{\Sigma}_{\ell}\big{)}},$$ which is almost surely uniformly bounded by applying Lemma S.1.6 and Theorem 2 in (Bai and Yin, 1993). Similarly, we can show that the derivative $$\displaystyle\frac{1}{n^{2}}\frac{\partial\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)}{\partial z}$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{n^{2}}\left[\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-2}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)+\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-2}\right)\right]$$ is also almost surely uniformly bounded.
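The limit (S.3.7) derived above can be sanity-checked by solving the fixed-point equations (S.3.3)-(S.3.4) numerically at a small $z<0$. The sketch below again specializes to Bernoulli($\theta$) zero-one multipliers, which is consistent with the zero-one reduction used in this proof but is only an illustrative choice:

```python
theta, g = 0.5, 1.0   # g stands for gamma; g > theta (overparameterized)
z = -1e-6             # small negative argument

def Ew(m):            # E[ w/(1+g*w*m) ] for Bernoulli(theta) multipliers
    return theta / (1 + g * m)

def E1(m):            # E[ 1/(1+g*w*m) ]
    return (1 - theta) + theta / (1 + g * m)

# equation (S.3.4): m*Ew(m) - z*m = 1; the left-hand side is increasing in m for z < 0
lo, hi = 0.0, 10.0 / abs(z)
for _ in range(200):
    mid = (lo + hi) / 2
    if mid * Ew(mid) - z * mid < 1.0:
        lo = mid
    else:
        hi = mid
m1 = (lo + hi) / 2

# equation (S.3.3): m2*(E1*Ew) - z*m2 = m1
m2 = m1 / (E1(m1) * Ew(m1) - z)

target = (g - theta) ** 2 / (g * (g - theta ** 2))  # the limit in (S.3.7)
assert abs(z ** 2 * m2 - target) < 1e-3
```

With $\theta=0.5$ and $\gamma=1$ the target value is $1/3$, and the numerical value of $z^{2}m_{2}(z)$ at $z=-10^{-6}$ agrees with it to roughly the order of $|z|$.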
Then, using the Arzela-Ascoli theorem and the Moore-Osgood theorem, we obtain almost surely that $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{k,\ell}\left(\widehat{\beta}^{B}\right)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\gamma m_{2}(z)\left(\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]\right)^{2}$$ $$\displaystyle=\sigma^{2}\frac{\theta^{2}}{\gamma-\theta^{2}}.$$ ∎ S.3.9.3 Proof of Lemma S.3.4 Proof of Lemma S.3.4. According to Lemma S.1.5, the sketched sample covariance matrices are almost surely invertible when $\gamma<\theta$. Thus, in the underparameterized regime, by Lemma 2.1, the bias term converges almost surely to zero. When $\gamma>\theta$, we apply Lemma 2.3 to rewrite the limit of $B_{k,\ell}(\widehat{\beta}^{B})$ as $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{k,\ell}(\widehat{\beta}^{B})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{d}\textrm{tr}\left(\left(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\right)\left(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\right)\right)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\frac{r^{2}}{d}\textrm{tr}\left[\left(I-(\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{k}\right)\left(I-(\widehat{\Sigma}_{\ell}-zI)^{-1}\widehat{\Sigma}_{\ell}\right)\right]$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\frac{r^{2}z^{2}}{d}\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right),$$ where we used the identity (S.4.4). 
As $n,d\rightarrow+\infty$, ${d}^{-1}z^{2}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)$ and its derivative $$\displaystyle\left|\frac{1}{d}\frac{d\,z^{2}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)}{d\,z}\right|$$ $$\displaystyle=\left|\frac{1}{d}\left(z\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{k}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)+z\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\widehat{\Sigma}_{\ell}\right)\right)\right|$$ $$\displaystyle=\left|\frac{1}{d}\left(z\,\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1}+2z^{2}\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)+z\,\textrm{tr}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)\right|$$ $$\displaystyle\leq 4$$ are almost surely bounded by Lemma S.1.6. Therefore, by the Arzela-Ascoli theorem and the Moore-Osgood theorem, we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$ to obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{k,\ell}(\widehat{\beta}^{B})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\frac{r^{2}}{d}z^{2}\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{d}z^{2}\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{\ell}-zI)^{-1}\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}r^{2}z^{2}\,m_{2}(z)$$ $$\displaystyle=r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)},$$ where the last equality follows from equation (S.3.7). ∎ S.3.9.4 Proof of Lemma S.3.5 Proof of Lemma S.3.5. In the underparameterized regime, where $\gamma<\theta$, we have $\lambda=0$, which corresponds to the ridgeless regression. The result follows directly from equation (3.3).
When $\gamma>\theta$, Dobriban and Liu (2019) showed that $$\lim_{n,d\rightarrow+\infty}\mathbb{E}\left[\|\widehat{\beta}_{\lambda}-\beta\|_{2}^{2}\;\middle|\;X,\beta\right]=\frac{r^{2}}{\gamma v(-\lambda)}\left(1-\frac{\lambda v^{\prime}(-\lambda)}{v(-\lambda)}\right)+\sigma^{2}\left(\frac{v^{\prime}(-\lambda)}{v^{2}(-\lambda)}-1\right),$$ where $v(-\lambda)$ is the unique solution of equation (4.1) with $\gamma/\theta$ replaced by $\gamma$. Implicit differentiation of that equation gives $$\displaystyle v^{\prime}(-\lambda)=\frac{v^{2}(-\lambda)(1+v(-\lambda))^{2}}{(1+v(-\lambda))^{2}-\gamma v^{2}(-\lambda)}.$$ Therefore, the result holds if and only if $v(-\lambda)={\theta}/{(\gamma-\theta)}$, which is the unique positive solution of equation (4.1) (with $\gamma/\theta$ replaced by $\gamma$) at $\lambda=(1-\theta)(\gamma/\theta-1)$. ∎ S.3.9.5 Proof of Lemma S.3.6 Proof of Lemma S.3.6. We begin with the underparameterized case, that is $\gamma<\theta$. Consider the subsample $S_{k}X$. By Lemmas S.1.5–S.1.6, we can assume $\lambda_{\min}(\widehat{\Sigma}_{k})>0$ without loss of generality. We shall also assume $\sigma^{2}=1$. Following Lemma S.1.4, let $m_{2,\lambda}(z):=\lim_{n,d\rightarrow+\infty}d^{-1}\textrm{tr}((X^{\top}X/n+\lambda I)^{-1}(X^{\top}S_{k}^{\top}S_{k}X/n-zI)^{-1})$, and $m(-\lambda)$ be the Stieltjes transform of the limiting spectral distribution of the full-sample matrix $X$.
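The verification in the proof of Lemma S.3.5 above, that $\lambda=(1-\theta)(\gamma/\theta-1)$ forces $v(-\lambda)=\theta/(\gamma-\theta)$, can be checked numerically. The sketch below specializes equation (4.1), with $\gamma/\theta$ replaced by $\gamma$, to the isotropic case $H=\delta_{1}$ (the $\Sigma=I$ setting of Lemma S.3.5); the concrete parameter values and the implicit-differentiation formula used for $v^{\prime}$ are illustrative assumptions drawn from that specialization:

```python
theta, g, r2, s2 = 0.5, 1.0, 2.0, 1.5   # g > theta; r2, s2 are illustrative values
lam = (1 - theta) * (g / theta - 1)

# isotropic equation (4.1) with gamma/theta replaced by gamma and H = delta_1:
#   (z/g)*v + 1/(1+v) = 1 - 1/g, evaluated at z = -lam.
def F(v):
    return (-lam / g) * v + 1 / (1 + v) - (1 - 1 / g)

# F is decreasing on (0, inf), so bisection finds the unique positive root
lo, hi = 1e-12, 1e6
for _ in range(200):
    mid = (lo + hi) / 2
    if F(mid) > 0:
        lo = mid
    else:
        hi = mid
v = (lo + hi) / 2
assert abs(v - theta / (g - theta)) < 1e-8   # v(-lam) = theta/(gamma - theta)

# implicit differentiation of the same isotropic equation gives v'(-lam)
vp = v * (1 + v) ** 2 / (g + lam * (1 + v) ** 2)
risk = r2 / (g * v) * (1 - lam * vp / v) + s2 * (vp / v ** 2 - 1)
target = (r2 * (g - theta) ** 2 / (g * (g - theta ** 2))
          + s2 * theta ** 2 / (g - theta ** 2))
assert abs(risk - target) < 1e-8             # matches Lemma S.3.5
```

With $\theta=0.5$ and $\gamma=1$ the root is $v=1$, and the assembled risk equals the overparameterized limit $r^{2}(\gamma-\theta)^{2}/(\gamma(\gamma-\theta^{2}))+\sigma^{2}\theta^{2}/(\gamma-\theta^{2})$ stated in Lemma S.3.5.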
Then, by Lemma S.1.4, $m_{2,\lambda}(0)$ exists and satisfies $$m(-\lambda)=m_{2,\lambda}(0)\frac{1}{1+\gamma m(-\lambda)}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right].$$ (S.3.8) Using equation (S.3.2) and a similar argument as in the proof of Lemma S.3.2, we obtain almost surely that $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{k,\lambda}$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\textrm{tr}\left((X^{\top}X+n\lambda I)^{-1}X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X)^{-1}\right)$$ $$\displaystyle=\gamma m_{2,\lambda}(0)\frac{1}{1+\gamma m(-\lambda)}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right]$$ $$\displaystyle=\gamma m(-\lambda),$$ where the last line follows from equation (S.3.8). In the underparameterized case, $\lambda=(1-\theta)(\gamma/\theta-1)\vee 0=0$. Thus it suffices to show $m(0)=1/(1-\gamma)$, whose proof can be found in Chapter 6 of Serdobolskii (2007). In the overparameterized regime, using a similar argument as in the proof of Lemma S.3.3, we shall assume that the multipliers $w_{i,j}$ are either zero or one and $\sigma^{2}=1$ without loss of generality.
Using the definition of $V_{k,\lambda}$ in equation (S.3.2) and identity (S.4.4), we obtain $$V_{k,\lambda}=\lim_{z\rightarrow 0^{-}}\textrm{tr}\left((X^{\top}X+n\lambda I)^{-1}X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X-zI)^{-1}\right).$$ Using a similar argument as in the proof of Lemma S.3.3, we obtain almost surely that $$\displaystyle\lim_{n,d\rightarrow+\infty}\textrm{tr}\left((X^{\top}X+n\lambda I)^{-1}X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X-zI)^{-1}\right)$$ $$\displaystyle=\gamma m_{2,\lambda}(z)\frac{1}{1+\gamma m(-\lambda)}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]$$ where $m_{2,\lambda}(z)$ satisfies the equation $$\displaystyle m(-\lambda)=m_{2,\lambda}(z)\frac{1}{1+\gamma m(-\lambda)}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-zm_{2,\lambda}(z).$$ (S.3.9) From the proof of Lemma S.3.5, we know that, for $\lambda=(1-\theta)(\gamma/\theta-1)$, the companion Stieltjes transform is $v(-\lambda)={\theta}/({\gamma-\theta})$. Thus we have $$\gamma m(-\lambda)=v(-\lambda)-1/\lambda+\gamma/\lambda=\frac{\theta}{1-\theta}.$$ Furthermore, equation (S.3.5) implies $$\lim_{z\rightarrow 0^{-}}\frac{1}{z}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]=-\frac{\theta}{\gamma-\theta}.$$ Plugging the above two equalities into equation (S.3.9), we obtain $$\displaystyle\frac{\theta}{\gamma(1-\theta)}$$ $$\displaystyle=\frac{1}{1+\theta/(1-\theta)}\cdot\left(-\frac{\theta}{\gamma-\theta}\right)\lim_{z\rightarrow 0^{-}}zm_{2,\lambda}(z)-\lim_{z\rightarrow 0^{-}}zm_{2,\lambda}(z)$$ which yields $$\lim_{z\rightarrow 0^{-}}-zm_{2,\lambda}(z)=\frac{\theta}{\gamma(1-\theta)}\frac{\gamma-\theta}{\gamma-\theta^{2}}.$$ (S.3.10) Using a similar argument as in the proof of Lemma S.3.3, we can verify that $$\displaystyle\textrm{tr}\left((X^{\top}X+n\lambda I)^{-1}X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X-zI)^{-1}\right)$$ and its derivative are uniformly bounded on $\mathbb{R}_{\leq 0}$.
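The limit (S.3.10) can likewise be checked by solving the fixed-point system numerically at a small $z<0$, once more specializing to Bernoulli($\theta$) zero-one multipliers (consistent with the reduction used above, but an illustrative choice):

```python
theta, g = 0.5, 1.0          # g stands for gamma; g > theta
z = -1e-6

def Ew(m):                   # E[w/(1+g*w*m)] for Bernoulli(theta) weights
    return theta / (1 + g * m)

# m1(z) from equation (S.3.4), by bisection (LHS increasing in m for z < 0)
lo, hi = 0.0, 10.0 / abs(z)
for _ in range(200):
    mid = (lo + hi) / 2
    if mid * Ew(mid) - z * mid < 1.0:
        lo = mid
    else:
        hi = mid
m1 = (lo + hi) / 2

# gamma*m(-lam) = theta/(1-theta), so 1/(1+gamma*m(-lam)) = 1-theta, as derived above
m_ridge = theta / (g * (1 - theta))
m2lam = m_ridge / ((1 - theta) * Ew(m1) - z)   # rearranged equation (S.3.9)

target = theta / (g * (1 - theta)) * (g - theta) / (g - theta ** 2)  # (S.3.10)
assert abs(-z * m2lam - target) < 1e-3
```

For $\theta=0.5$, $\gamma=1$ the target is $2/3$, and $-zm_{2,\lambda}(z)$ at $z=-10^{-6}$ matches it to roughly the order of $|z|$.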
Therefore, by the Arzela-Ascoli theorem and the Moore-Osgood theorem, we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$ to obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{k,\lambda}$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\gamma m_{2,\lambda}(z)\frac{1}{1+\gamma m(-\lambda)}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]$$ $$\displaystyle=\frac{\theta^{2}}{\gamma-\theta^{2}}.$$ ∎ S.3.9.6 Proof of Lemma S.3.7 Proof of Lemma S.3.7. In the underparameterized regime, according to Lemma S.1.5, the sketched sample covariance matrix $X^{\top}S_{k}S_{k}X$ is almost surely invertible. Therefore, the result follows directly from equation (S.3.2). In the overparameterized regime, by the definition of $B_{k,\lambda}$ in equation (S.3.2), we have $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{k,\lambda}$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}r^{2}\textrm{tr}\left(\left(X^{\top}S_{k}^{\top}S_{k}X(X^{\top}S_{k}^{\top}S_{k}X)^{+}-I\right)\left((X^{\top}X+n\lambda I)^{-1}X^{\top}X-I\right)\right)/d$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}-\frac{r^{2}z\lambda}{d}\textrm{tr}\left((X^{\top}S_{k}^{\top}S_{k}X/n-zI)^{-1}(X^{\top}X/n+\lambda I)^{-1}\right).$$ Using Lemma S.1.4 and equation (S.3.10), we obtain almost surely that $$\displaystyle\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}-\frac{z\lambda}{d}\textrm{tr}\left((X^{\top}S_{k}^{\top}S_{k}X/n-zI)^{-1}(X^{\top}X/n+\lambda I)^{-1}\right)$$ $$\displaystyle=$$ $$\displaystyle\lambda\lim_{z\rightarrow 0^{-}}-zm_{2,\lambda}(z)$$ $$\displaystyle=$$ $$\displaystyle(1-\theta)\left(\frac{\gamma}{\theta}-1\right)\cdot\frac{\theta}{\gamma(1-\theta)}\frac{\gamma-\theta}{\gamma-\theta^{2}}$$ $$\displaystyle=$$ $$\displaystyle\frac{(\gamma-\theta)^{2}}{\gamma\left(\gamma-\theta^{2}\right)}.$$ Finally, as in the proof of Lemma S.3.4, one can easily check that
$$\displaystyle\frac{1}{d}\,\textrm{tr}\left((X^{\top}S_{k}^{\top}S_{k}X/n-zI)^{-1}(X^{\top}X/n+\lambda I)^{-1}\right)$$ and its derivative with respect to $z$ are uniformly bounded for $z\leq 0$. By the Arzela-Ascoli theorem and the Moore-Osgood theorem, we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$, and the desired result follows. ∎ S.3.10 Technical lemmas This subsection proves a technical lemma that is used in the proofs of the supporting lemmas in the previous subsection. Let $S\in\mathbb{R}^{n\times n}$ be a sketching matrix, and $S_{ij}$ be its $(i,j)$-th element. Define $D$ as $$\displaystyle D_{ij}=\begin{cases}1(S_{ij}\neq 0),&\quad i=j\\ 0,&\quad i\neq j\end{cases}.$$ (S.3.11) Lemma S.3.8. Assume Assumptions 1, 2, and $\gamma>\theta$. Then, as $n,d\rightarrow+\infty$, $$(X^{\top}S^{\top}SX)^{+}X^{\top}S^{\top}S=(X^{\top}D^{\top}DX)^{+}X^{\top}D^{\top}D~{}{\quad\rm a.s.}$$ Proof of Lemma S.3.8. We begin by reordering the rows of $S$ and $SX$ as $$US=\begin{bmatrix}S_{1}&0\\ 0&0\end{bmatrix}\quad\text{and}\quad USX=\begin{bmatrix}S_{1}&0\\ 0&0\end{bmatrix}\begin{bmatrix}X_{1}\\ X_{2}\end{bmatrix}=\begin{bmatrix}S_{1}X_{1}\\ 0\end{bmatrix},$$ where $S_{1}\in\mathbb{R}^{|A_{0}|\times|A_{0}|}$ is a diagonal matrix with non-zero elements on its diagonal, corresponding to the non-zero diagonal elements of $S$, $A_{0}:=\{i:~{}S_{ii}\neq 0\}$, $S_{1}X_{1}\in\mathbb{R}^{|A_{0}|\times d}$ is the subsampled data matrix, and $U\in\mathbb{R}^{n\times n}$ is an orthogonal matrix. 
Then $$\displaystyle(X^{\top}S^{\top}SX)^{+}X^{\top}S^{\top}S$$ $$\displaystyle=(X_{1}^{\top}S_{1}^{\top}S_{1}X_{1})^{+}\begin{bmatrix}X_{1}^{\top}S_{1}^{\top}S_{1}&0\end{bmatrix}$$ $$\displaystyle=(X_{1}^{\top}S_{1}^{\top}S_{1}X_{1})^{+}X_{1}^{\top}S_{1}^{\top}S_{1}\begin{bmatrix}I&0\end{bmatrix}$$ $$\displaystyle=X_{1}^{\top}S_{1}^{\top}(S_{1}X_{1}X_{1}^{\top}S_{1}^{\top})^{+}S_{1}\begin{bmatrix}I&0\end{bmatrix},$$ where the last line uses the following property of the pseudoinverse of a matrix $A$: $$\displaystyle A^{+}$$ $$\displaystyle=(A^{\top}A)^{+}A^{\top}=A^{\top}(AA^{\top})^{+}.$$ Using equation (S.1.5), we can show that when $\gamma>\theta$, almost surely $|A_{0}|/d\leq\theta/\gamma<1$ as $n,d\rightarrow+\infty$. Applying (Bai and Yin, 1993, Theorem 2), we obtain almost surely that $$\displaystyle\lambda_{\min}(S_{1}X_{1}X_{1}^{\top}S_{1}^{\top}/d)$$ $$\displaystyle\geq c_{w}^{2}c_{\lambda}\left(1-\sqrt{\theta/\gamma}\right)^{2}>0{\quad\rm a.s.}$$ $$\displaystyle\lambda_{\min}(X_{1}X_{1}^{\top}/d)$$ $$\displaystyle\geq c_{\lambda}\left(1-\sqrt{\theta/\gamma}\right)^{2}{\quad\rm a.s.}$$ as $n,d\rightarrow+\infty$. Therefore, we can assume that both $S_{1}X_{1}X_{1}^{\top}S_{1}^{\top}$ and $X_{1}X_{1}^{\top}$ are invertible. Then $$\displaystyle(X^{\top}S^{\top}SX)^{+}X^{\top}S^{\top}S$$ $$\displaystyle=X_{1}^{\top}S_{1}^{\top}(S_{1}X_{1}X_{1}^{\top}S_{1}^{\top})^{-1}S_{1}\begin{bmatrix}I&0\end{bmatrix}$$ $$\displaystyle=X_{1}^{\top}(X_{1}X_{1}^{\top})^{-1}\begin{bmatrix}I&0\end{bmatrix}$$ $$\displaystyle=(X_{1}^{\top}X_{1})^{+}X_{1}^{\top}\begin{bmatrix}I&0\end{bmatrix}$$ $$\displaystyle=(X^{\top}D^{\top}DX)^{+}X^{\top}D^{\top}D,$$ which holds almost surely as $n,d\rightarrow+\infty$. This finishes the proof. ∎ Appendix S.4 Proofs for Section 4 This section proves the results in Section 4. S.4.1 Proof of Lemma 4.1 Proof of Lemma 4.1. We only prove the results for $v(z)$. The results for $\tilde{v}(z)$ follow similarly.
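Lemma S.3.8's claim, that the weighted sketch $S$ and its zero-one indicator $D$ induce the same estimator map, holds exactly on any finite instance where the retained block has full row rank. The following sketch verifies it, together with the pseudoinverse identities $A^{+}=(A^{\top}A)^{+}A^{\top}=A^{\top}(AA^{\top})^{+}$ used in the proof (the sizes and weight distribution are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 30, 50                        # kept rows < d: overparameterized subsample
keep = rng.random(n) < 0.5           # rows retained by the sketch
w = np.where(keep, rng.uniform(0.5, 1.5, n), 0.0)  # non-zero weights on kept rows

S = np.diag(w)                       # weighted sketching matrix
D = np.diag(keep.astype(float))      # its zero-one indicator, as in (S.3.11)
X = rng.standard_normal((n, d))

pinv = np.linalg.pinv
lhs = pinv(X.T @ S.T @ S @ X) @ X.T @ S.T @ S
rhs = pinv(X.T @ D.T @ D @ X) @ X.T @ D.T @ D
assert np.allclose(lhs, rhs, atol=1e-8)   # the weights drop out of the map

# pseudoinverse identities used in the last display
A = rng.standard_normal((6, 9))
assert np.allclose(pinv(A), pinv(A.T @ A) @ A.T, atol=1e-10)
assert np.allclose(pinv(A), A.T @ pinv(A @ A.T), atol=1e-10)
```

The first assertion reflects exactly the cancellation $X_{1}^{\top}S_{1}^{\top}(S_{1}X_{1}X_{1}^{\top}S_{1}^{\top})^{-1}S_{1}=X_{1}^{\top}(X_{1}X_{1}^{\top})^{-1}$ in the proof: the non-zero weights cancel, leaving only the retained rows.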
Proving equation (4.1) has a unique positive solution. Let us define the function $$\displaystyle f(x,z):=\frac{\theta}{\gamma}zx+\int\frac{1}{1+xt}H(dt).$$ Using equation (4.1), we have $f(v(z),z)=1-{\theta}/{\gamma}>0$. Moreover, for $z\leq 0$, $f(x,z)$ is a continuous, strictly decreasing function of $x$ on $(0,+\infty)$, with $f(0,z)=1>1-\theta/\gamma$ and $\lim_{x\rightarrow+\infty}f(x,z)\leq 0<1-\theta/\gamma$. Thus, we conclude that equation (4.1) has a unique positive solution. Differentiability of $v(z)$. According to (Couillet and Liao, 2022, Theorem 2.7), $v(z)$ is the companion Stieltjes transform of $m_{1}(z)$, and is also the Stieltjes transform of the limiting empirical spectral distribution $\mu$ of $XX^{\top}/n$ with $X\in\mathbb{R}^{n\times d}$ satisfying Assumption 2 and $d/n\rightarrow\gamma/\theta$. Because the Stieltjes transform is analytic outside the support of $\mu$, we apply Lemma S.1.5 and Lemma S.1.6 to conclude that $\lambda_{\min}(XX^{\top}/n)>0$ almost surely as $n,d\rightarrow+\infty$ when $\gamma>\theta$. Therefore, when $\gamma>\theta$, the function $v(z)$ is differentiable for $z<0$. Proving that $v(0):=\lim_{z\rightarrow 0^{-}}v(z)$ and $\tilde{v}(0):=\lim_{z\rightarrow 0^{-}}\tilde{v}(z)$ exist. The proof follows from a similar argument used in the proof of Lemma S.1.2 and thus is omitted. ∎ S.4.2 Proof of Theorem 4.2 Proof of Theorem 4.2. We begin by simplifying the expressions for the bias and variance of $\widehat{\beta}^{1}$ in Lemma 2.1 and Lemma 4: $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}_{1}^{+}X^{\top}S_{1}^{2}S_{1}^{2}X\widehat{\Sigma}_{1}^{+}\Sigma\right),$$ $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}\left(\widehat{\beta}^{1}\right)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{d}\textrm{tr}\left(\Pi_{1}\Sigma\Pi_{1}\right),$$ where $\Pi_{1}=I-\widehat{\Sigma}_{1}^{+}\widehat{\Sigma}_{1}$.
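The monotonicity argument in the proof of Lemma 4.1 above translates directly into a bisection solver for $v(z)$. A minimal sketch with an illustrative two-point spectral law $H$ (a hypothetical example, not from the paper):

```python
theta, g = 0.5, 1.5    # g stands for gamma; aspect ratio gamma/theta = 3
# illustrative discrete spectral law H: eigenvalue 1 w.p. 0.6, eigenvalue 3 w.p. 0.4
ts, ps = [1.0, 3.0], [0.6, 0.4]

def f(x, z):           # f(x,z) = (theta/gamma)*z*x + int 1/(1+x*t) H(dt)
    return (theta / g) * z * x + sum(p / (1 + x * t) for t, p in zip(ts, ps))

z = -0.2
lo, hi = 0.0, 1e6
for _ in range(200):   # f(., z) decreases from f(0,z) = 1 to -inf for z < 0
    mid = (lo + hi) / 2
    if f(mid, z) > 1 - theta / g:
        lo = mid
    else:
        hi = mid
v = (lo + hi) / 2
assert v > 0
assert abs(f(v, z) - (1 - theta / g)) < 1e-9   # v solves equation (4.1)
```

Because $f(\cdot,z)$ is strictly decreasing with $f(0,z)=1$ above the target value $1-\theta/\gamma$, the bisection bracket always contains exactly one root, mirroring the uniqueness argument above.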
We discuss the underparameterized and overparameterized regimes separately. The underparameterized regime We first derive the underparameterized bias and then the underparameterized variance. When $\gamma<\theta$, by Lemma S.1.5, $\widehat{\Sigma}_{1}$ is almost surely invertible as $n,d\rightarrow+\infty$. This implies $\Pi_{1}=0$. Therefore, when $\gamma<\theta$, the bias converges to zero almost surely as $n,d\rightarrow+\infty$. The following lemma derives the underparameterized variance. Lemma S.4.1 (Underparameterized variance under correlated features). Assume Assumptions 1-3, and $\gamma<\theta$. The variance satisfies $$\lim_{n,d\rightarrow+\infty}V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=\sigma^{2}\left(\frac{\gamma}{1-\gamma-f(\gamma)}-1\right){{\quad\rm a.s.}}$$ where $$\displaystyle f(\gamma)=\int\frac{1}{(1+ct)^{2}}d\mu_{w}(t),$$ and the constant $c$ is the unique positive solution to equation (3.1). The overparameterized regime We first derive the overparameterized variance and then the overparameterized bias. Recall that, by Lemma 4.1, there is a unique positive solution to equation (4.1) for any $z\leq 0$. Lemma S.4.2. Assume Assumptions 1-3 and $\gamma>\theta$. The variance satisfies $$\lim_{n,d\rightarrow+\infty}V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=\sigma^{2}\left(\frac{v^{\prime}(0)}{v(0)^{2}}-1\right)~{}{{\quad\rm a.s.}}$$ where $v(0)$ is the unique positive solution to equation (4.1) at $z=0$. Our next lemma collects the limiting bias of $\widehat{\beta}^{1}$ in the overparameterized regime. Lemma S.4.3. Assume Assumptions 1-4, and $\gamma>\theta$. Then the bias satisfies $$\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=\frac{r^{2}\theta}{\gamma v(0)}{{\quad\rm a.s.}}$$ where $v(0)$ is the unique positive solution to equation (4.1) at $z=0$. Putting the above results together finishes the proof. ∎ S.4.3 Proof of Theorem 4.3 Proof of Theorem 4.3.
For notational simplicity, let $$\displaystyle B_{k,\ell}$$ $$\displaystyle=\frac{r^{2}}{d}\textrm{tr}\left(\left(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\right)\Sigma\left(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\right)\right),$$ $$\displaystyle V_{k,\ell}$$ $$\displaystyle=\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma\right).$$ Following the same argument as in the proof of Theorem 3.6, we have $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\lim_{n,d\rightarrow+\infty}\left(B_{k,\ell}+V_{k,\ell}\right){\quad\rm a.s.}$$ (S.4.1) In the following two lemmas, we characterize the variance and bias respectively. Recall the definitions of $v(0)$, $\tilde{v}(0)$, and $\tilde{v}^{\prime}(0)$ from Lemma 4.1. Lemma S.4.4 (Variance under correlated features). Assume Assumptions 1-3. For $k\neq\ell$, the variance term $V_{k,\ell}$ of the estimator $\widehat{\beta}^{B}$ satisfies $$\lim_{n,d\rightarrow+\infty}V_{k,\ell}=\begin{cases}\sigma^{2}\frac{\gamma}{1-\gamma},&\gamma<\theta,\\ \sigma^{2}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right),&\gamma>\theta,\end{cases}{\quad\rm a.s.}$$ Lemma S.4.5 (Bias under correlated features). Assume Assumptions 1-4. For $k\neq\ell$, the bias term $B_{k,\ell}$ of the estimator $\widehat{\beta}^{B}$ satisfies $$\lim_{n,d\rightarrow+\infty}B_{k,\ell}=\begin{cases}0,&\gamma<\theta,\\ r^{2}\frac{\theta}{\gamma v(0)}-r^{2}\frac{(1-\theta)}{\gamma v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right),&\gamma>\theta,\end{cases}{\quad{\rm a.s.}}$$ Putting the above results together finishes the proof. ∎ S.4.4 Proof of Corollary 4.4 Proof of Corollary 4.4.
It suffices to show $$\frac{({\tilde{v}^{\prime}(0)}/{\tilde{v}(0)^{2}}-1)}{\left({v^{\prime}(0)}/{v(0)^{2}}-1\right)}\leq\theta$$ when $\gamma>\theta$, and $$\lim_{\gamma/\theta\rightarrow 1}\frac{({\tilde{v}^{\prime}(0)}/{\tilde{v}(0)^{2}}-1)}{\left({v^{\prime}(0)}/{v(0)^{2}}-1\right)}=0$$ when $\theta\neq 1$. Using equation (S.4.20), we obtain $$\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1=\frac{\frac{\gamma}{\theta^{2}}\int\frac{\tilde{v}(0)^{2}t^{2}}{(1+\tilde{v}(0)t)^{2}}\tilde{H}(dt)}{1-\frac{\gamma}{\theta^{2}}\int\frac{\tilde{v}(0)^{2}t^{2}}{(1+\tilde{v}(0)t)^{2}}\tilde{H}(dt)}.$$ To proceed, we need the following two lemmas. Lemma S.4.6. Assume Assumption 2. Then $\tilde{v}(0)=\theta v(0)$. Lemma S.4.7. Assume Assumption 2 and $x\leq 0$. Let $H$ and $\tilde{H}_{x}$ be the limiting empirical spectral distribution of $\Sigma$ and $(I+k(x)\Sigma)^{-1}\Sigma$, where $k(z):=(1-\theta)v(z/\theta)$. Then, for any continuous function $f\in\mathcal{C}\left(\text{supp}(H)\cup\text{supp}(\tilde{H}_{x})\right)$, it holds that $$\int f(t)d\tilde{H}_{x}(t)=\int f\left(\frac{t}{1+k(x)t}\right)dH(t).$$ Then, using Lemma S.4.6 and Lemma S.4.7, we have $$\displaystyle\frac{\gamma}{\theta^{2}}\int\frac{\tilde{v}(0)^{2}t^{2}}{(1+\tilde{v}(0)t)^{2}}\tilde{H}(dt)$$ $$\displaystyle=\gamma\int\frac{v(0)^{2}t^{2}}{(1+\theta v(0)t)^{2}}\tilde{H}(dt)$$ $$\displaystyle=\gamma\int\frac{v(0)^{2}t^{2}}{(1+\theta v(0)t+(1-\theta)v(0)t)^{2}}H(dt)$$ $$\displaystyle=\gamma\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt).$$ Similarly, using equation (S.4.20), we have $$\frac{v^{\prime}(0)}{v(0)^{2}}-1=\frac{\frac{\gamma}{\theta}\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}{1-\frac{\gamma}{\theta}\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}.$$ Therefore, $$\displaystyle\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1$$ $$\displaystyle=\frac{\gamma\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}{1-\gamma\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}$$
$$\displaystyle\leq\theta\frac{\frac{\gamma}{\theta}\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}{1-\frac{\gamma}{\theta}\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}$$ $$\displaystyle=\theta\left(\frac{v^{\prime}(0)}{v(0)^{2}}-1\right).$$ This finishes the proof for the first result. Taking $\gamma/\theta\rightarrow 1$, we have $$\displaystyle\lim_{\gamma/\theta\rightarrow 1}\frac{({\tilde{v}^{\prime}(0)}/{\tilde{v}(0)^{2}}-1)}{\left({v^{\prime}(0)}/{v(0)^{2}}-1\right)}=\lim_{\gamma/\theta\rightarrow 1}\theta\frac{1-\gamma/\theta\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}{1-\gamma\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}.$$ (S.4.2) To proceed, we first calculate $$\displaystyle\lim_{\gamma/\theta\rightarrow 1}\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt).$$ Using equation (4.1), we have $$\int\frac{1}{1+v(0)t}dH(t)=1-\gamma/\theta,$$ which implies $\lim_{\gamma/\theta\rightarrow 1}v(0)=+\infty$. Therefore, by the dominated convergence theorem, we have $$\lim_{\gamma/\theta\rightarrow 1}\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)=\int\lim_{\gamma/\theta\rightarrow 1}\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)=1.$$ Plugging the above equality into (S.4.2), we obtain $$\displaystyle\lim_{\gamma/\theta\rightarrow 1}\frac{({\tilde{v}^{\prime}(0)}/{\tilde{v}(0)^{2}}-1)}{\left({v^{\prime}(0)}/{v(0)^{2}}-1\right)}=\lim_{\gamma/\theta\rightarrow 1}\theta\frac{1-\gamma/\theta\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}{1-\gamma\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}H(dt)}=\theta\cdot\frac{0}{1-\gamma}=0.$$ This finishes the proof. ∎ S.4.5 Proof of Corollary 4.5 Proof of Corollary 4.5. The result can be observed from equations (S.4.6) and (S.5.3). As $n,d\rightarrow+\infty$, $B_{k,\ell}$ and $V_{k,\ell}$ converge to the limiting bias and variance terms of ridge regression with the covariance matrix of $(I+k(z)\Sigma)^{-1}\Sigma$ and the coefficient vector $(I+k(z)\Sigma)^{-1/2}\beta$, respectively. Therefore, the proof is omitted. 
∎ S.4.6 Supporting lemmas This subsection proves the supporting lemmas used in the previous subsections, namely Lemmas S.4.1-S.4.3. S.4.6.1 Proof of Lemma S.4.1 Proof of Lemma S.4.1. For simplicity, we assume $\sigma^{2}=1$, and omit the subscript $1$ in $\widehat{\Sigma}_{1}$ and $S_{1}$, which become $\widehat{\Sigma}$ and $S$ respectively. Let $\Omega_{n}:=\{\omega\in\Omega:\lambda_{\min}(\widehat{\Sigma})(\omega)>0\}$. Using Lemma S.1.5 and Lemma S.1.6, we obtain $$\mathbb{P}\left(\lim_{n,d\rightarrow+\infty}\Omega_{n}\right)=1,$$ when $\gamma<\theta$. Thus, without loss of generality, we assume $\lambda_{\min}(\widehat{\Sigma})>0$. Then we can write $V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$ as $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\frac{1}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}^{+}X^{\top}S^{2}S^{2}X\widehat{\Sigma}^{+}\Sigma\right)$$ $$\displaystyle=\frac{1}{n^{2}}\textrm{tr}\left((\Sigma^{-1/2}\widehat{\Sigma}\Sigma^{-1/2})^{-1}\Sigma^{-1/2}X^{\top}S^{2}S^{2}X\Sigma^{-1/2}(\Sigma^{-1/2}\widehat{\Sigma}\Sigma^{-1/2})^{-1}\right).$$ Hence, given Assumption 2, we can assume $\Sigma=I$ without loss of generality. Assuming $\Sigma=I$, we can simplify the variance term as $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\frac{1}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}^{-1}X^{\top}S^{2}S^{2}X\widehat{\Sigma}^{-1}\right)$$ $$\displaystyle=\frac{1}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}^{-1}\sum_{i=1}^{n}w_{i}^{2}x_{i}x_{i}^{\top}\widehat{\Sigma}^{-1}\right)$$ $$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}w_{i}^{2}x_{i}^{\top}\widehat{\Sigma}^{-2}x_{i}$$ $$\displaystyle=\frac{1}{n^{2}}\sum_{i=1}^{n}w_{i}^{2}\frac{x_{i}^{\top}\widehat{\Sigma}_{-i}^{-2}x_{i}}{\left(1+w_{i}x_{i}^{\top}\widehat{\Sigma}_{-i}^{-1}x_{i}/n\right)^{2}},$$ where $\widehat{\Sigma}_{-i}:=\widehat{\Sigma}-\frac{1}{n}w_{i}x_{i}x_{i}^{\top}$, and the last line uses the Sherman–Morrison formula twice.
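The leave-one-out rewriting above can be checked numerically. Below is a hedged sketch (not from the paper; the synthetic Gaussian data, sizes, and Bernoulli multipliers are illustrative) verifying the Sherman–Morrison identity behind the last display:

```python
# Check the leave-one-out identity
#   x_i^T Sigma_hat^{-2} x_i
#     = x_i^T Sigma_hat_{-i}^{-2} x_i / (1 + w_i x_i^T Sigma_hat_{-i}^{-1} x_i / n)^2,
# where Sigma_hat_{-i} removes the i-th rank-one term.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 10
X = rng.standard_normal((n, d))
w = rng.binomial(1, 0.7, size=n).astype(float)
w[0], i = 1.0, 0                       # pick an index with w_i != 0

Sigma_hat = (X.T * w) @ X / n          # (1/n) sum_j w_j x_j x_j^T
x = X[i]
Sigma_minus = Sigma_hat - w[i] * np.outer(x, x) / n

S_inv = np.linalg.inv(Sigma_hat)
M_inv = np.linalg.inv(Sigma_minus)
lhs = x @ S_inv @ S_inv @ x
rhs = (x @ M_inv @ M_inv @ x) / (1 + w[i] * x @ M_inv @ x / n) ** 2
assert np.isclose(lhs, rhs)
```

Since the identity is exact, the two sides agree to machine precision.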
Define $m_{1,n}(z)$, the Stieltjes transform of $\widehat{\Sigma}$, and $m_{1,n}^{\prime}(z)$ as $$\displaystyle m_{1,n}(z)$$ $$\displaystyle=\int\frac{1}{t-z}dF_{\widehat{\Sigma}}(t),~{}~{}~{}m_{1,n}^{\prime}(z)=\int\frac{1}{(t-z)^{2}}dF_{\widehat{\Sigma}}(t),$$ which are well-defined for any $z\leq 0$. Applying Lemma S.1.7 and Lemma S.6.9, we obtain almost surely that $$\lim_{n,d\rightarrow+\infty}V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=\lim_{n,d\rightarrow+\infty}\frac{1}{n}\sum_{i=1}^{n}w_{i}^{2}\frac{\gamma m_{1,n}^{\prime}(0)}{\left(1+\gamma w_{i}m_{1,n}(0)\right)^{2}}.$$ Lemma S.1.2 shows that the empirical spectral distribution of $\widehat{\Sigma}$ converges weakly to a deterministic distribution $\mu$ almost surely as $n,d\rightarrow+\infty$, characterized by its Stieltjes transform $m_{1}(z)$ that satisfies the following equation: $$m_{1}(z)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-zm_{1}(z)=1,$$ (S.4.3) for any $z\leq 0$. Now since $\lambda_{\min}(\widehat{\Sigma})>0$, we have almost surely $$\displaystyle\lim_{n,d\rightarrow+\infty}m_{1,n}(0)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\int\frac{1}{t}dF_{\widehat{\Sigma}}(t)=m_{1}(0),$$ $$\displaystyle\lim_{n,d\rightarrow+\infty}m_{1,n}^{\prime}(0)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\int\frac{1}{t^{2}}dF_{\widehat{\Sigma}}(t)=m_{1}^{\prime}(0),$$ where the second line follows from the fact that $m_{1,n}(z)$ is analytic and bounded on $\mathbb{R}^{-}\cup\{0\}$, and we apply Vitali’s convergence theorem. Since $m_{1}(0)>0$ according to Lemma S.1.2, we have $$\frac{w_{i}^{2}}{\left(1+\gamma w_{i}m_{1}(0)\right)^{2}}\leq\frac{1}{\gamma^{2}(m_{1}(0))^{2}}<\infty.$$ Thus, under Assumption 3, we use the dominated convergence theorem to obtain $$\lim_{n,d\rightarrow+\infty}V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})=\gamma m_{1}^{\prime}(0)\,\mathbb{E}_{\mu_{w}}\left[\frac{w^{2}}{\left(1+\gamma wm_{1}(0)\right)^{2}}\right],$$ almost surely.
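As a quick sanity check of equation (S.4.3) (a hedged sketch, not part of the proof), take the illustrative special case of Bernoulli($\theta$) zero-one multipliers with $\gamma<\theta$: the $z=0$ equation can be solved by fixed-point iteration and has the closed-form root $m_{1}(0)=1/(\theta-\gamma)$, and the variance expression obtained at the end of this proof then reduces to $\gamma/(\theta-\gamma)$:

```python
# Solve m * E[w/(1 + gamma*w*m)] = 1 (equation (S.4.3) at z = 0) for
# Bernoulli(theta) multipliers, where E[w/(1+gamma*w*m)] = theta/(1+gamma*m).
gamma, theta = 0.2, 0.8                # underparameterized: gamma < theta

m = 1.0
for _ in range(1000):                  # contraction: m <- (1 + gamma*m)/theta
    m = (1 + gamma * m) / theta
assert abs(m - 1.0 / (theta - gamma)) < 1e-12

# Final variance formula: gamma / (1 - gamma - E[(1+gamma*w*m)^{-2}]) - 1
e_inv2 = (1 - theta) + theta / (1 + gamma * m) ** 2
variance = gamma / (1 - gamma - e_inv2) - 1
assert abs(variance - gamma / (theta - gamma)) < 1e-10
```

The value $\gamma/(\theta-\gamma)$ is consistent with the classical $\psi/(1-\psi)$ isotropic least-squares variance at aspect ratio $\psi=\gamma/\theta$.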
We can simplify this result by using equation (S.4.3). Since the Stieltjes transform $m_{1}(z)=\int\frac{1}{t-z}d\mu(t)$ is strictly increasing and positive for any $z\leq 0$, we obtain for any $z\leq 0$ $$\frac{w^{2}m_{1}^{\prime}(z)}{(1+\gamma wm_{1}(z))^{2}}\leq\frac{m_{1}^{\prime}(z)}{(\gamma m_{1}(z))^{2}}\leq\frac{1}{(\gamma\lambda_{\min}(\widehat{\Sigma})m_{1}(z))^{2}},$$ which is uniformly bounded over any compact interval $I\subset(-\infty,0]$. Applying the dominated convergence theorem, we take derivatives on both sides of equation (S.4.3) to obtain $$m_{1}^{\prime}(z)\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(z)}\right]-\gamma m_{1}^{\prime}(z)m_{1}(z)\mathbb{E}_{\mu_{w}}\left[\frac{w^{2}}{(1+\gamma wm_{1}(z))^{2}}\right]-m_{1}(z)-zm_{1}^{\prime}(z)=0.$$ Taking the limit as $z\rightarrow 0^{-}$, we obtain $$\displaystyle m_{1}^{\prime}(0)\mathbb{E}_{\mu_{w}}\left[\frac{w}{(1+\gamma wm_{1}(0))^{2}}\right]=m_{1}(0),$$ which leads to $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\gamma m_{1}^{\prime}(0)\mathbb{E}_{\mu_{w}}\left[\frac{w^{2}}{\left(1+\gamma wm_{1}(0)\right)^{2}}\right]$$ $$\displaystyle=\mathbb{E}_{\mu_{w}}\left[\frac{\gamma w^{2}m_{1}(0)}{(1+\gamma wm_{1}(0))^{2}}\right]/\mathbb{E}_{\mu_{w}}\left[\frac{w}{(1+\gamma wm_{1}(0))^{2}}\right]$$ $$\displaystyle=\left(\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\gamma wm_{1}(0)}\right]-\mathbb{E}_{\mu_{w}}\left[\frac{w}{(1+\gamma wm_{1}(0))^{2}}\right]\right)/\mathbb{E}_{\mu_{w}}\left[\frac{w}{(1+\gamma wm_{1}(0))^{2}}\right]$$ $$\displaystyle=\frac{1}{\mathbb{E}_{\mu_{w}}\left[\frac{wm_{1}(0)}{(1+\gamma wm_{1}(0))^{2}}\right]}-1$$ $$\displaystyle=\frac{\gamma}{\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\gamma wm_{1}(0)}\right]-\mathbb{E}_{\mu_{w}}\left[\frac{1}{(1+\gamma wm_{1}(0))^{2}}\right]}-1$$ $$\displaystyle=\frac{\gamma}{1-\gamma-\mathbb{E}_{\mu_{w}}\left[\frac{1}{(1+\gamma wm_{1}(0))^{2}}\right]}-1~{}~{}~{}\text{almost surely.}$$ The
fourth and the last lines use equation (S.4.3). Finally, by Lemma S.1.2, $m_{1}(0)$ is the unique positive solution of equation (3.1). This finishes the proof. ∎ S.4.6.2 Proof of Lemma S.4.2 Proof of Lemma S.4.2. Without loss of generality, we assume $\sigma^{2}=1$, and omit the subscript $1$ in $\widehat{\Sigma}_{1}$ and $S_{1}$. Let $S_{ij}$ be the $(i,j)$-th element of $S$, and $$\displaystyle D_{ij}=\begin{cases}1(S_{ij}\neq 0),&\quad i=j\\ 0,&\quad i\neq j\end{cases}.$$ Applying Lemma S.3.8, we can rewrite $V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$ as $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\textrm{tr}\left((X^{\top}SSX)^{+}X^{\top}S^{2}S^{2}X(X^{\top}SSX)^{+}\Sigma\right)$$ $$\displaystyle=\textrm{tr}\left((X^{\top}D^{\top}DX)^{+}X^{\top}(D^{\top})^{2}D^{2}X(X^{\top}D^{\top}DX)^{+}\Sigma\right).$$ We may thus assume that each $w_{i,j},\,1\leq i\leq j\leq n$, is either one or zero, without loss of generality. Applying the following identity of the pseudoinverse of a matrix $A$ $$(A^{\top}A)^{+}A^{\top}=\lim_{z\rightarrow 0^{-}}(A^{\top}A-zI)^{-1}A^{\top},$$ (S.4.4) we obtain $$\displaystyle V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{1}{n}\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\frac{1}{n}\lim_{z\rightarrow 0^{-}}\left\{\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)+z\,\textrm{tr}\left((\widehat{\Sigma}-zI)^{-2}\Sigma\right)\right\}.$$ (S.4.5) For any $z<0$, Lemma S.4.8 shows $$\lim_{n,d\rightarrow+\infty}\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)=\Theta_{1}(z)~{}{{\quad\rm a.s.}}$$ The second term in equation (S.4.5) is $z$ times the derivative of $\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)$. For any small constant $\epsilon>0$, $\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)$ is almost surely uniformly bounded for all $z<-\epsilon$.
Moreover, it is analytic with respect to $z$. Thus we can apply Vitali’s convergence theorem (Lemma S.6.1) to obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{1}{d}\left\{\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)+z\,\textrm{tr}\left((\widehat{\Sigma}-zI)^{-2}\Sigma\right)\right\}=\Theta_{1}(z)+z\Theta^{\prime}_{1}(z){{\quad\rm a.s.}}$$ for any $z<0$. Finally, we show that we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$ by applying the Arzelà–Ascoli theorem and the Moore–Osgood theorem (Lemma S.6.2). To achieve this, we first establish that $\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)/d$ and its derivative are uniformly bounded for all $z<0$. By taking the derivative, we obtain $$\displaystyle\frac{1}{d}\frac{{\rm d}\,\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)}{{\rm d}z}$$ $$\displaystyle=\frac{2}{d}\textrm{tr}\left((\widehat{\Sigma}-zI)^{-2}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\leq\frac{2\lambda_{\max}(\Sigma)}{\lambda^{+}_{\min}(\widehat{\Sigma})^{2}},$$ which is almost surely bounded by Lemma S.1.6. For $\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)/d$, a similar calculation leads to $$\displaystyle\frac{\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)}{d}\leq\frac{\lambda_{\max}(\Sigma)}{\lambda^{+}_{\min}(\widehat{\Sigma})}.$$ Then, using the Arzelà–Ascoli theorem, we obtain the uniform convergence of $V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$.
Applying the Moore–Osgood theorem, we obtain almost surely that $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\frac{1}{n}\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{1}{n}\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma}(\widehat{\Sigma}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\gamma(\Theta_{1}(z)+z\Theta_{1}^{\prime}(z))$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{v^{\prime}(z)}{v(z)^{2}}-1$$ $$\displaystyle=\frac{v^{\prime}(0)}{v(0)^{2}}-1,$$ where the existence of $v^{\prime}(z)$ and the last line follow from Lemma 4.1. The fourth line follows from Lemma S.4.8. ∎ S.4.6.3 Proof of Lemma S.4.3 Proof of Lemma S.4.3. For notational simplicity, we assume $r^{2}=1$, omit the subscript $i$, and write $\widehat{\Sigma}_{i}$ and $S_{i}$ as $\widehat{\Sigma}$ and $S$ respectively. Applying Lemma 2.1, we can rewrite the bias term as $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{1}{d}\textrm{tr}\left(\Pi\Sigma\Pi\right)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{1}{d}\textrm{tr}\left((I-\widehat{\Sigma}^{+}\widehat{\Sigma})\Sigma\right)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\frac{1}{d}\,\textrm{tr}\left((I-(\widehat{\Sigma}-zI)^{-1}\widehat{\Sigma})\Sigma\right)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}-\frac{z}{d}\,\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right),$$ where we used equation (S.4.4).
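Identity (S.4.4), used in the last step above, is straightforward to check numerically. A hedged sketch with an arbitrary rank-deficient matrix and a small negative $z$ standing in for the limit:

```python
# (A^T A)^+ A^T  vs  (A^T A - z I)^{-1} A^T  for small z < 0.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((8, 5))
A[:, 4] = A[:, 0] + A[:, 1]            # make A rank deficient (rank 4)

lhs = np.linalg.pinv(A.T @ A) @ A.T
z = -1e-8
rhs = np.linalg.inv(A.T @ A - z * np.eye(5)) @ A.T
assert np.allclose(lhs, rhs, atol=1e-5)
```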
According to Lemma S.4.8, it holds almost surely that $$\lim_{n,d\rightarrow+\infty}\frac{1}{d}\,\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)=\,\Theta_{1}(z).$$ By using similar arguments as in the proof of Lemma S.4.2, we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$. By applying Lemma S.4.8, the following holds almost surely $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{1})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}-\frac{z}{d}\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}-\frac{z}{d}\textrm{tr}\left((\widehat{\Sigma}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}-z\Theta_{1}(z)$$ $$\displaystyle=\frac{\theta}{\gamma v(0)},$$ where the last line follows from Lemma 4.1. ∎ S.4.6.4 Proof of Lemma S.4.4 Proof of Lemma S.4.4. We start with the underparameterized regime. Using a similar argument as in the proof of Lemma S.4.1, we assume $\lambda_{\min}(\widehat{\Sigma}_{k})>0$ and $\lambda_{\min}(\widehat{\Sigma}_{\ell})>0$. We first rewrite the variance term as $$\displaystyle V_{k,\ell}$$ $$\displaystyle=\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}_{k}^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{-1}\Sigma\right)$$ $$\displaystyle=\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left((\Sigma^{-1/2}X^{\top}S_{k}S_{k}X\Sigma^{-1/2})^{-1}\Sigma^{-1/2}X^{\top}S_{k}^{2}S_{\ell}^{2}X\Sigma^{-1/2}(\Sigma^{-1/2}X^{\top}S_{\ell}S_{\ell}X\Sigma^{-1/2})^{-1}\right).$$ Under Assumption 2, $X\Sigma^{-1/2}$ has isotropic features. Therefore, the limiting variance $V_{k,\ell}$ is the same as in the isotropic-features case of Lemma S.3.2. In the overparameterized regime, following the same argument as in Lemma S.3.3, we can assume that the multipliers $w_{i,j},~{}1\leq i\leq j\leq n$ are either one or zero, without loss of generality.
We further assume $\sigma^{2}=1$. Using identity (S.4.4), we obtain $$\displaystyle V_{k,\ell}=\lim_{z\rightarrow 0^{-}}\frac{1}{n^{2}}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right).$$ Let $$C:=\{i~{}|~{}w_{k,i}\neq 0,w_{\ell,i}\neq 0\},\quad\widehat{\Sigma}_{C}:=\sum_{i\in C}\frac{1}{n}x_{i}x_{i}^{\top},\quad k(z):=(1-\theta)v(z/\theta).$$ Applying Lemma S.4.9 yields $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{1}{n^{2}}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad=\lim_{n,d\rightarrow+\infty}\frac{\gamma}{d}\textrm{tr}\left((-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad=\lim_{n,d\rightarrow+\infty}\frac{\gamma}{d}\textrm{tr}\left((\tilde{\Sigma}_{C}-zI)^{-1}\tilde{\Sigma}_{C}(\tilde{\Sigma}_{C}-zI)^{-1}\Sigma(I+k(z)\Sigma)^{-1}\right){\quad{\rm a.s.}}$$ (S.4.6) where $\tilde{\Sigma}_{C}(z):=(I+k(z)\Sigma)^{-1/2}\widehat{\Sigma}_{C}(I+k(z)\Sigma)^{-1/2}$. Furthermore, under Assumption 3, the cardinality of $C$ satisfies $$|C|/n=\frac{1}{n}\sum_{i=1}^{n}1\left({w_{k,i}\neq 0,w_{\ell,i}\neq 0}\right)\rightarrow\theta^{2}{\quad{\rm a.s.}}$$ (S.4.7) Thus, $\tilde{\Sigma}_{C}(z)$ can be viewed as a sample covariance matrix with sample size $\theta^{2}n$ and a population covariance matrix $(I+k(z)\Sigma)^{-1}\Sigma$. Define $\tilde{H}_{x}$ as the limiting empirical spectral distribution of $(I+k(x)\Sigma)^{-1}\Sigma$, which exists under Assumption 2. Let $\tilde{v}(z,x)$ be the unique positive solution of the following equation $$\tilde{v}(z,x)=\left(-z+\frac{\gamma}{\theta^{2}}\int\frac{t\tilde{H}_{x}(dt)}{1+\tilde{v}(z,x)t}\right)^{-1}.$$ (S.4.8) The existence and uniqueness of the positive solution to equation (S.4.8) follows from the same argument as in the proof of Lemma 4.1.
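The fixed-point equations can also be explored numerically. The following hedged sketch (the two-atom $H$ and the bisection parameters are illustrative) solves the $z=0$ analogues of equations (4.1) and (S.4.8) and also illustrates the identity $\tilde{v}(0)=\theta v(0)$ of Lemma S.4.6:

```python
# Bisection for the z = 0 fixed points: v(0) solves
#   (gamma/theta) * E_H[v t / (1 + v t)] = 1,
# and v~(0) solves the analogous equation with gamma/theta^2 and H~,
# where H~ has atoms t/(1 + k t), k = (1 - theta) v(0)  (Lemma S.4.7).

def root_increasing(f, lo=1e-9, hi=1e9):
    """Bisection for an increasing function f with f(lo) < 0 < f(hi)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

gamma, theta = 2.0, 0.5                    # overparameterized: gamma > theta
atoms, probs = [1.0, 3.0], [0.5, 0.5]      # an illustrative two-atom H

v0 = root_increasing(lambda v: (gamma / theta) *
     sum(p * v * t / (1 + v * t) for p, t in zip(probs, atoms)) - 1)

k = (1 - theta) * v0
t_atoms = [t / (1 + k * t) for t in atoms]
v0_tilde = root_increasing(lambda v: (gamma / theta ** 2) *
           sum(p * v * s / (1 + v * s) for p, s in zip(probs, t_atoms)) - 1)

assert abs(v0_tilde - theta * v0) < 1e-8   # Lemma S.4.6
```

Since both maps are strictly increasing in $v$ and change sign exactly once on $(0,\infty)$, bisection also illustrates uniqueness of the positive root.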
Note that the term in equation (S.4.6) can be seen as the variance of the sketched estimator with aspect ratio $\gamma/\theta^{2}$ and covariance matrix $(I+k(z)\Sigma)^{-1}\Sigma$. Then, it has been proved in Lemma S.4.2 that $$\lim_{n,d\rightarrow+\infty}\frac{\gamma}{d}\textrm{tr}\left((\tilde{\Sigma}_{C}-zI)^{-1}\tilde{\Sigma}_{C}(\tilde{\Sigma}_{C}-zI)^{-1}\Sigma(I+k(z)\Sigma)^{-1}\right)=\frac{\tilde{v}^{\prime}(z,z)}{\tilde{v}(z,z)^{2}}-1{\quad{\rm a.s.}}$$ Following the same argument as in the proof of Lemma S.3.3, we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$, and obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}V_{k,\ell}$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\frac{\gamma}{d}\textrm{tr}\left((\tilde{\Sigma}_{C}-zI)^{-1}\tilde{\Sigma}_{C}(\tilde{\Sigma}_{C}-zI)^{-1}\Sigma(I+k(z)\Sigma)^{-1}\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{\gamma}{d}\textrm{tr}\left((\tilde{\Sigma}_{C}-zI)^{-1}\tilde{\Sigma}_{C}(\tilde{\Sigma}_{C}-zI)^{-1}\Sigma(I+k(z)\Sigma)^{-1}\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{\tilde{v}^{\prime}(z,z)}{\tilde{v}(z,z)^{2}}-1{\quad{\rm a.s.}}$$ $$\displaystyle=\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1,$$ where the last line uses Lemma S.4.10. ∎ S.4.6.5 Proof of Lemma S.4.5 Proof of Lemma S.4.5. Using a similar argument as in the proof of Lemma S.3.4, in the underparameterized regime, the bias term converges almost surely to zero. When $\gamma>\theta$, using the same argument as in the proof of Lemma S.3.3, we can assume that the multipliers $w_{i,j}$ are either zero or one and $\sigma^{2}=1$ without loss of generality.
We rewrite the bias term as $$\displaystyle B_{k,\ell}$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{r^{2}}{d}\textrm{tr}\left((I-(\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{k})\Sigma(I-(\widehat{\Sigma}_{\ell}-zI)^{-1}\widehat{\Sigma}_{\ell})\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\left(-\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\Sigma\right)+\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{\ell}(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right)\right).$$ Assuming we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$, we obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}B_{k,\ell}$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{z\rightarrow 0^{-}}\left(-\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\Sigma\right)+\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{\ell}(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right)\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}-\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad\quad+\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{\ell}(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\text{I}+\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\text{II}.$$ The first term I in the above equation has already appeared in Lemma S.4.3 and satisfies $$\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\text{I}=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}-\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\Sigma\right)=r^{2}\frac{\theta}{\gamma v(0)}{\quad{\rm a.s.}}$$ Recall the definitions of $\widehat{\Sigma}_{A}$, $\widehat{\Sigma}_{B}$, and $\widehat{\Sigma}_{C}$ defined in Lemma S.4.9 and its proof. 
We rewrite the second term II as $$\displaystyle\text{II}=\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{\ell}(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C})(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{B}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\qquad+\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=:\text{II}_{1}+\text{II}_{2}.$$ We derive the limits of $\text{II}_{1}$ and $\text{II}_{2}$ respectively. We start with $\text{II}_{2}$. Following the proof of Lemma S.4.4, we obtain $$\displaystyle\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\text{II}_{2}$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{r^{2}z}{\gamma}V_{k,\ell}=0{\quad{\rm a.s.}}$$ For term $\text{II}_{1}$, using Lemma S.4.11, we obtain $$\displaystyle\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\text{II}_{1}$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{r^{2}z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{B}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow
0^{-}}\lim_{n,d\rightarrow+\infty}-\frac{r^{2}(1-\theta)}{v(z/\theta)}\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}-\frac{r^{2}(1-\theta)}{\gamma v(z/\theta)}V_{k,\ell}$$ $$\displaystyle=-\frac{r^{2}(1-\theta)}{\gamma v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right){\quad{\rm a.s.}}$$ Lastly, the validity of exchanging the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$ follows from the same argument as in the proof of Lemma S.3.4. This finishes the proof. ∎ S.4.6.6 Proof of Lemma S.4.6 Proof of Lemma S.4.6. By Lemma S.4.7, we have $$\displaystyle\frac{\gamma}{\theta^{2}}\int\frac{\theta v(0)t\tilde{H}(dt)}{1+\theta v(0)t}$$ $$\displaystyle=\frac{\gamma}{\theta^{2}}\int\frac{\theta v(0)tH(dt)}{1+(\theta v(0)+(1-\theta)v(0))t}$$ $$\displaystyle=\frac{\gamma}{\theta}\int\frac{v(0)tH(dt)}{1+v(0)t}$$ $$\displaystyle=1,$$ where the last line follows from the fact that $v(0)$ is a solution to equation (4.1) at $z=0$. The desired result follows from the fact that $\tilde{v}(0)$ is the unique positive solution of equation (4.2). ∎ S.4.6.7 Proof of Lemma S.4.7 Proof of Lemma S.4.7. Using Assumption 2, we have $$c_{\lambda}\leq\lambda_{\min}(\Sigma)\leq\lambda_{\max}(\Sigma)\leq C_{\lambda},\quad 0\leq\lambda_{\min}((I+k(x)\Sigma)^{-1}\Sigma)\leq\lambda_{\max}((I+k(x)\Sigma)^{-1}\Sigma)\leq 1/k(x).$$ Therefore, for any $x\leq 0$ and continuous function $f$, $f$ is bounded on $\text{supp}(H)\cup\text{supp}(\tilde{H}_{x})$. By the definitions of $H$ and $\tilde{H}_{x}$, we have $$\displaystyle\int f(t)d\tilde{H}_{x}(t)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{1}{d}\sum_{i=1}^{d}f\left(\frac{\lambda_{i}(\Sigma)}{1+k(x)\lambda_{i}(\Sigma)}\right)$$ $$\displaystyle=\int f\left(\frac{t}{(1+k(x)t)}\right)dH(t).$$ This completes the proof.
∎ S.4.7 Technical lemmas This subsection proves technical lemmas that are used in the proofs of the supporting lemmas in the previous subsection. Our first lemma provides an extension of (Ledoit and Péché, 2011, Lemma 2.1) to the sketched covariance matrix $X^{\top}S^{\top}SX/n$. Lemma S.4.8. Assume Assumptions 1-3, and $\gamma>\theta$. Let $X_{k}:=S_{k}X$ be a subsample with multipliers of either zero or one. For any $z<0$, it holds that $$\lim_{n,d\rightarrow+\infty}\frac{1}{d}\textrm{tr}((\widehat{\Sigma}_{k}-zI)^{-1}\Sigma)=\Theta_{1}(z){\quad\rm a.s.}$$ where $$\displaystyle\Theta_{1}(z)$$ $$\displaystyle=\frac{\theta/\gamma^{2}}{\theta/\gamma-1-zm_{1}(z)}-1/\gamma=\frac{1}{\gamma}\left(\frac{\theta}{-z\,v(z/\theta)}-1\right).$$ Proof of Lemma S.4.8. Let $z<0$. We begin with the following identity $$(\widehat{\Sigma}_{k}-zI)^{-1}(\widehat{\Sigma}_{k}-zI)=I.$$ Taking the trace and then multiplying both sides by $1/d$, we obtain for any $z<0$ that $$\displaystyle 1$$ $$\displaystyle=\frac{1}{d}\,\textrm{tr}((\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{k})-\frac{z}{d}\,\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1}$$ $$\displaystyle=\frac{1}{d}\,\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}\frac{1}{n}\sum_{i=1}^{n}w_{k,i}x_{i}x_{i}^{\top}\right)-\frac{z}{d}\,\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1}$$ $$\displaystyle=\frac{1}{d}\,\textrm{tr}\left(\sum_{i=1}^{n}\frac{(\widehat{\Sigma}_{k,-i}-zI)^{-1}w_{k,i}x_{i}x_{i}^{\top}/n}{1+w_{k,i}x_{i}^{\top}(\widehat{\Sigma}_{k,-i}-zI)^{-1}x_{i}/{n}}\right)-\frac{z}{d}\,\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1},$$ where the last line uses the Sherman–Morrison formula.
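The proof below also uses the companion-spectrum relation: $X^{\top}S_{k}^{\top}S_{k}X/n$ and $S_{k}XX^{\top}S_{k}^{\top}/n$ share their non-zero eigenvalues, so their finite-$n$ Stieltjes transforms differ only through the zero eigenvalues. A hedged finite-dimensional check (the sizes and the fixed zero-one pattern are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 12, 7
X = rng.standard_normal((n, d))
S = np.diag(np.array([0.0] * 3 + [1.0] * 9))   # a fixed zero-one sketch

M_d = X.T @ S.T @ S @ X / n                    # d x d
M_n = S @ X @ X.T @ S.T / n                    # n x n
e_d = np.sort(np.linalg.eigvalsh(M_d))
e_n = np.sort(np.linalg.eigvalsh(M_n))
assert np.allclose(e_d[e_d > 1e-8], e_n[e_n > 1e-8])

# At z < 0, the finite-n Stieltjes transforms differ only via the zeros,
# which is the counting behind relation (S.4.10):
z = -0.5
m_d = np.mean(1.0 / (e_d - z))
m_n = np.mean(1.0 / (e_n - z))
assert np.isclose(d * m_d - n * m_n, (d - n) * (-1.0 / z))
```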
Using a similar argument as in the proof of Lemma S.1.3, we obtain $$\displaystyle 1$$ $$\displaystyle=\frac{1}{d}\,\textrm{tr}\left(\sum_{i=1}^{n}\frac{(\widehat{\Sigma}_{k,-i}-zI)^{-1}w_{k,i}x_{i}x_{i}^{\top}/n}{1+w_{k,i}x_{i}^{\top}(\widehat{\Sigma}_{k,-i}-zI)^{-1}x_{i}/{n}}\right)-\frac{z}{d}\,\textrm{tr}(\widehat{\Sigma}_{k}-zI)^{-1}$$ $$\displaystyle\overset{{\rm a.s.}}{=}\frac{\theta\Theta_{1}(z)}{1+\gamma\Theta_{1}(z)}-zm_{1}(z)$$ $$\displaystyle=\frac{-\theta/\gamma}{1+\gamma\Theta_{1}(z)}+\theta/\gamma-zm_{1}(z).$$ This leads to $$\Theta_{1}(z)=\frac{\theta/\gamma^{2}}{\theta/\gamma-1-zm_{1}(z)}-1/\gamma.$$ (S.4.9) Let $v_{1}(z)$ be the Stieltjes transform of the limiting empirical spectral distribution of the matrix $S_{k}XX^{\top}S_{k}^{\top}/n$. Since the matrices $X^{\top}S_{k}S_{k}X/n$ and $S_{k}XX^{\top}S_{k}^{\top}/n$ share the same non-zero eigenvalues, we can establish $$v_{1}(z)+\frac{1-\theta}{z}=\gamma\left(m_{1}(z)+\frac{\gamma-\theta}{\gamma}\frac{1}{z}\right).$$ (S.4.10) According to (Couillet and Liao, 2022, Theorem 2.7) and since the multipliers are either zero or one, we obtain for $z<0$ that $$v_{1}(z)=-\frac{1}{z}\mathbb{E}_{\mu_{w}}\left[\frac{1}{1+\delta(z)w}\right]=-\frac{1-\theta}{z}+\frac{\theta}{-z-z\delta(z)},$$ where $\delta(z)$ and $\tilde{\delta}(z)$ are the unique positive solutions of the following equations for any $z<0$ $$\displaystyle\delta(z)$$ $$\displaystyle=-\frac{\gamma}{z}\int\frac{tH(dt)}{1+\tilde{\delta}(z)t},$$ $$\displaystyle\tilde{\delta}(z)$$ $$\displaystyle=-\frac{1}{z}\mathbb{E}_{\mu_{w}}\left[\frac{w}{1+\delta(z)w}\right]=\frac{\theta}{-z-z\delta(z)}.$$ Using these facts, we obtain $$\displaystyle\frac{1}{v_{1}(z)+\frac{1-\theta}{z}}$$ $$\displaystyle=-\frac{z}{\theta}(1+\delta(z))$$ $$\displaystyle=-\frac{z}{\theta}+\frac{\gamma}{\theta}\int\frac{tH(dt)}{1+\tilde{\delta}(z)t}$$ $$\displaystyle=-\frac{z}{\theta}+\frac{\gamma}{\theta}\int\frac{tH(dt)}{1+\left(v_{1}(z)+\frac{1-\theta}{z}\right)t}.$$ Comparing the above
results with equation (4.1), we conclude $$v\left(\frac{z}{\theta}\right)=v_{1}(z)+\frac{1-\theta}{z}.$$ (S.4.11) The result follows from equations (S.4.9), (S.4.10), and (S.4.11). ∎ Lemma S.4.9. Let $C:=\{i~{}|~{}w_{k,i}\neq 0,w_{\ell,i}\neq 0\}$ and $\widehat{\Sigma}_{C}:=\sum_{i\in C}x_{i}x_{i}^{\top}/n.$ Assume Assumptions 1-3, and that the multipliers are either zero or one. Then, for any $z<0$, we have $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{1}{nd}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad=\lim_{n,d\rightarrow+\infty}\frac{1}{d}\textrm{tr}\left((-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right){\quad\rm a.s.}$$ (S.4.12) where $k(z):=(1-\theta)v(z/\theta).$ Proof of Lemma S.4.9. In addition to $C$ and $\widehat{\Sigma}_{C}$, we define $$\displaystyle A:=\{i~{}|~{}w_{k,i}\neq 0,w_{\ell,i}=0\},\quad B:=\{i~{}|~{}w_{k,i}=0,w_{\ell,i}\neq 0\},$$ $$\displaystyle\widehat{\Sigma}_{A}:=\sum_{i\in A}\frac{1}{n}x_{i}x_{i}^{\top},\quad\widehat{\Sigma}_{B}:=\sum_{i\in B}\frac{1}{n}x_{i}x_{i}^{\top}.$$ It is straightforward to see that $A,B$, and $C$ are disjoint sets.
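The asymptotic sizes of the index sets $A$, $B$, and $C$ can be illustrated by simulation; the sketch below assumes, for illustration only, independent Bernoulli($\theta$) zero-one multipliers across the two subsamples, which reproduces the limits used in (S.4.7) and (S.4.15):

```python
import random

random.seed(0)
theta, n = 0.6, 200_000
wk = [random.random() < theta for _ in range(n)]
wl = [random.random() < theta for _ in range(n)]

frac_C = sum(a and b for a, b in zip(wk, wl)) / n       # kept by both
frac_A = sum(a and not b for a, b in zip(wk, wl)) / n   # kept by k only
assert abs(frac_C - theta ** 2) < 0.01                  # cf. (S.4.7)
assert abs(frac_A - theta * (1 - theta)) < 0.01         # cf. (S.4.15)
```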
Since the multipliers are either zero or one, we have $$\displaystyle\widehat{\Sigma}_{k}$$ $$\displaystyle=\sum_{i:w_{k,i}\neq 0}\frac{1}{n}x_{i}x_{i}^{\top}=\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C},$$ $$\displaystyle\widehat{\Sigma}_{\ell}$$ $$\displaystyle=\sum_{i:w_{\ell,i}\neq 0}\frac{1}{n}x_{i}x_{i}^{\top}=\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C},$$ $$\displaystyle\frac{1}{n}X^{\top}S_{k}^{2}S_{\ell}^{2}X$$ $$\displaystyle=\sum_{i:w_{k,i}\neq 0,w_{\ell,i}\neq 0}\frac{1}{n}x_{i}x_{i}^{\top}=\widehat{\Sigma}_{C}.$$ Therefore, we have $$\frac{1}{nd}\textrm{tr}\left((\widehat{\Sigma}_{k}-zI)^{-1}X^{\top}S_{k}^{2}S_{\ell}^{2}X(\widehat{\Sigma}_{\ell}-zI)^{-1}\Sigma\right)=\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right).$$ Let $$\displaystyle T_{1}:$$ $$\displaystyle=\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad-\frac{1}{d}\textrm{tr}\left((-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right),$$ $$\displaystyle T_{2}:$$ $$\displaystyle=\frac{1}{d}\textrm{tr}\left((-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad-\frac{1}{d}\textrm{tr}\left((-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right).$$ To prove (S.4.12), we compare both sides by writing their difference as $$\displaystyle\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ 
$$\displaystyle\quad-\frac{1}{d}\textrm{tr}\left((-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=T_{1}+T_{2}.$$ Therefore, it suffices to show $$\displaystyle\lim_{n,d\rightarrow+\infty}T_{1}=0,\qquad\lim_{n,d\rightarrow+\infty}T_{2}=0{\quad\rm a.s.}$$ For $T_{1}$, we have $$\displaystyle T_{1}$$ $$\displaystyle=\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}(-zk(z)\Sigma-\widehat{\Sigma}_{A})(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=-\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{A}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\qquad-\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}zk(z)\Sigma(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=-T_{11}-T_{12},$$ where the first equality uses the identity $A^{-1}-B^{-1}=A^{-1}(B-A)B^{-1}$ for any invertible matrices $A$ and $B$. 
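The resolvent identity invoked in the first equality can be checked directly; a hedged numerical sketch:

```python
# A^{-1} - B^{-1} = A^{-1} (B - A) B^{-1} for invertible A, B.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # shift to ensure invertibility
B = rng.standard_normal((5, 5)) + 5 * np.eye(5)
Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
assert np.allclose(Ai - Bi, Ai @ (B - A) @ Bi)
```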
For $T_{11}$, we have $$\displaystyle T_{11}$$ $$\displaystyle=\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{A}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\sum_{i\in A}\frac{1}{n}x_{i}x_{i}^{\top}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{d}\textrm{tr}\Bigg{(}\sum_{i\in A}\left((\widehat{\Sigma}_{A,-i}+\widehat{\Sigma}_{C}-zI)^{-1}\frac{1}{n}\frac{x_{i}x_{i}^{\top}}{1+\frac{1}{n}x_{i}^{\top}(\widehat{\Sigma}_{A,-i}+\widehat{\Sigma}_{C}-zI)^{-1}x_{i}}\right)$$ $$\displaystyle\qquad\qquad\qquad\cdot(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\Bigg{)},$$ where $\widehat{\Sigma}_{A,-i}:=\widehat{\Sigma}_{A}-\frac{1}{n}x_{i}x_{i}^{\top}$, and the last line uses the Sherman–Morrison formula. 
Applying Lemma S.6.9 and Lemma S.6.4, we obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{A}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad\quad\quad-\frac{1}{d}\textrm{tr}\Bigg{(}(\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\sum_{i\in A}\frac{1}{n}\frac{\Sigma}{1+\gamma\frac{1}{d}\textrm{tr}\big{(}(\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\big{)}}$$ $$\displaystyle\qquad\qquad\qquad\qquad\cdot(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\Bigg{)}$$ $$\displaystyle=0{\quad\rm a.s.}$$ (S.4.13) Furthermore, by Lemma S.4.8, we obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{1}{1+\gamma\frac{1}{d}\textrm{tr}\big{(}(\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\big{)}}$$ $$\displaystyle=\frac{1}{1+\gamma\Theta_{1}(z)}{\quad\rm a.s.}$$ $$\displaystyle=\frac{1}{1+\left(\frac{\theta}{-zv(z/\theta)}-1\right)}$$ $$\displaystyle=-\frac{zv(z/\theta)}{\theta}$$ $$\displaystyle=-\frac{zk(z)}{\theta(1-\theta)}.$$ (S.4.14) Moreover, under Assumption 3, the cardinality of $A$ satisfies $$|A|/n=\frac{1}{n}\sum_{i=1}^{n}1\left({w_{k,i}\neq 0,w_{\ell,i}=0}\right)\rightarrow\theta(1-\theta){\quad\rm a.s.}$$ (S.4.15) Therefore, by combining (S.4.7)-(S.4.15), we obtain $$\lim_{n,d\rightarrow+\infty}T_{11}+T_{12}=0{\quad\rm a.s.},$$ and thus $\lim_{n,d\rightarrow+\infty}T_{1}=0$ almost surely. The above argument can be applied to $T_{2}$ to obtain $$\lim_{n,d\rightarrow+\infty}T_{2}=0{\quad\rm a.s.}$$ This completes the proof. ∎ Lemma S.4.10. Assume Assumption 2, $\gamma/\theta>1$ and $x\leq 0$. Let $\tilde{v}(z,x)$ be the unique positive solution of equation (S.4.8) for $z\leq 0$. 
Then, $$\lim_{z\rightarrow 0^{-}}\tilde{v}(z,z)=\tilde{v}(0),\quad\lim_{z\rightarrow 0^{-}}\tilde{v}^{\prime}(z,z)=\tilde{v}^{\prime}(0),$$ where the derivative is taken with respect to the first variable, $\tilde{v}(0)$ is the unique positive solution of equation (4.2), and $\tilde{v}^{\prime}(0)=\lim_{z\rightarrow 0^{-}}\tilde{v}^{\prime}(z)$. Proof of Lemma S.4.10. Recall from the proof of Lemma S.4.4 that $k(x)=(1-\theta)v(x/\theta)$ for $x\leq 0$, $\tilde{H}_{x}$ is the limiting empirical spectral distribution of $(I+k(x)\Sigma)^{-1}\Sigma$, and recall equation (S.4.8) $$\displaystyle\tilde{v}(z,x)=\left(-z+\frac{\gamma}{\theta^{2}}\int\frac{t\tilde{H}_{x}(dt)}{1+\tilde{v}(z,x)t}\right)^{-1}.$$ (S.4.16) We start by rewriting the above equation as $$\displaystyle 1-\frac{\theta^{2}}{\gamma}$$ $$\displaystyle=\frac{\theta^{2}}{\gamma}z\tilde{v}(z,x)+\int\frac{1}{1+\tilde{v}(z,x)t}d\tilde{H}_{x}(t)$$ $$\displaystyle=\frac{\theta^{2}}{\gamma}z\tilde{v}(z,x)+\int\frac{1+k(x)t}{1+k(x)t+\tilde{v}(z,x)t}dH(t),$$ where the last line follows from Lemma S.4.7. Let $$f(c,x)=\frac{\theta^{2}}{\gamma}zc+\int\frac{1+k(x)t}{1+k(x)t+ct}dH(t),$$ and thus $f(\tilde{v}(z,x),x)=1-\theta^{2}/\gamma$. In what follows, we upper and lower bound $\tilde{v}(z,x)$ in terms of $\tilde{v}(z,0)$. Upper bound Recall that $v(x/\theta)$ is the limiting Stieltjes transform of $SXX^{\top}S/n$, which is an increasing function of $x$ on $(-\infty,0]$. Then $k(x)=(1-\theta)v(x/\theta)$ is also an increasing function of $x$ on $(-\infty,0]$. Therefore, for any fixed $c$, $f(c,x)$ is an increasing function of $x$ on $(-\infty,0]$. Furthermore, for $z\leq 0$ and any fixed $x$, $f(c,x)$ is a decreasing function of $c$ on $[0,+\infty)$. 
Then, fixing some $z\leq 0$ and for any $x\leq 0$, we have $$\displaystyle f(\tilde{v}(z,0),x)=\frac{\theta^{2}}{\gamma}z\tilde{v}(z,0)+\int\frac{1+k(x)t}{1+k(x)t+\tilde{v}(z,0)t}dH(t)\leq 1-\frac{\theta^{2}}{\gamma}.$$ Thus we obtain $$\displaystyle\tilde{v}(z,x)\leq\tilde{v}(z,0).$$ (S.4.17) Lower bound For $z\leq 0$ and $x\leq 0$, we have $$\displaystyle f\left(\frac{k(x)}{k(0)}\tilde{v}(z,0),x\right)$$ $$\displaystyle=\frac{\theta^{2}}{\gamma}z\frac{k(x)}{k(0)}\tilde{v}(z,0)+\int\frac{1+k(x)t}{1+k(x)t+\frac{k(x)}{k(0)}\tilde{v}(z,0)t}dH(t)$$ $$\displaystyle\geq\frac{\theta^{2}}{\gamma}z\tilde{v}(z,0)+\int\frac{\frac{k(0)}{k(x)}+k(0)t}{\frac{k(0)}{k(x)}+k(0)t+\tilde{v}(z,0)t}dH(t)$$ $$\displaystyle\geq 1-\frac{\theta^{2}}{\gamma}$$ $$\displaystyle=f(\tilde{v}(z,x),x),$$ which implies $$\displaystyle\tilde{v}(z,0)\frac{k(x)}{k(0)}\leq\tilde{v}(z,x).$$ (S.4.18) Combining the lower and upper bounds for $\tilde{v}(z,x)$ and plugging $x=z$ into $\tilde{v}(z,x)$, we obtain $$\displaystyle\tilde{v}(z,0)\frac{k(z)}{k(0)}\leq\tilde{v}(z,z)\leq\tilde{v}(z,0).$$ Taking $z\rightarrow 0^{-}$, we obtain $$\lim_{z\rightarrow 0^{-}}\tilde{v}(z,z)=\lim_{z\rightarrow 0^{-}}\tilde{v}(z,0)=\tilde{v}(0).$$ (S.4.19) For the derivative of $\tilde{v}(z,x)$, using equation (S.4.16) and Lemma S.4.7, we have $$\displaystyle\tilde{v}^{\prime}(z,x)$$ $$\displaystyle=\frac{\tilde{v}(z,x)^{2}}{1-\frac{\gamma}{\theta^{2}}\int\frac{\tilde{v}(z,x)^{2}t^{2}}{(1+\tilde{v}(z,x)t)^{2}}d\tilde{H}_{x}(t)}$$ $$\displaystyle=\frac{\tilde{v}(z,x)^{2}}{1-\frac{\gamma}{\theta^{2}}\int\frac{\tilde{v}(z,x)^{2}t^{2}}{(1+k(x)t+\tilde{v}(z,x)t)^{2}}dH(t)}.$$ (S.4.20) The result follows from equation (S.4.19), the fact that $\tilde{v}^{\prime}(0)=\lim_{z\rightarrow 0^{-}}\tilde{v}^{\prime}(z)$, and the dominated convergence theorem. ∎ Let $\widehat{\Sigma}_{A},\widehat{\Sigma}_{B},$ and $\widehat{\Sigma}_{C}$ be the same as in Lemma S.4.9 and its proof. Lemma S.4.11. 
Assume Assumptions 1- 3. For any $z<0$, we have $$\displaystyle\lim_{n,d\rightarrow+\infty}\Bigg{(}\frac{z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{B}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad\qquad+\frac{(1-\theta)}{v(z/\theta)}\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)\Bigg{)}=0{\quad\rm a.s.}$$ where $v(0)$ is the unique positive solution to equation (4.1). Proof of Lemma S.4.11. We start by writing $$\displaystyle\frac{z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{B}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\frac{z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\frac{1}{n}\sum_{i\in B}x_{i}x_{i}^{\top}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=\frac{z}{d}\textrm{tr}\left(\frac{1}{n}\sum_{i\in B}(\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\frac{x_{i}x_{i}^{\top}}{1+\frac{1}{n}x_{i}^{\top}(\widehat{\Sigma}_{B,-i}+\widehat{\Sigma}_{C}-zI)^{-1}x_{i}}(\widehat{\Sigma}_{B,-i}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right),$$ where the last equality uses the Sherman–Morrison formula. 
Following the same argument as in the proof of Lemma S.4.9, we obtain $$\displaystyle\lim_{n,d\rightarrow+\infty}\Bigg{(}\frac{z}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{B}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad\qquad-\frac{z}{d}\frac{\theta(1-\theta)}{1+\gamma\Theta_{1}(z)}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)\Bigg{)}=0{\quad\rm a.s.}$$ (S.4.21) Similarly, we have $$\displaystyle\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\frac{1}{n}\sum_{i\in C}x_{i}x_{i}^{\top}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{d}\textrm{tr}\left(\frac{1}{n}\sum_{i\in C}\frac{(\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C,-i}-zI)^{-1}x_{i}x_{i}^{\top}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C,-i}-zI)^{-1}\Sigma}{(1+\frac{1}{n}x_{i}^{\top}(\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C,-i}-zI)^{-1}x_{i})(1+\frac{1}{n}x_{i}^{\top}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C,-i}-zI)^{-1}x_{i})}\right),$$ and thus $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{1}{d}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\widehat{\Sigma}_{C}(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)$$ $$\displaystyle\quad\quad-\frac{1}{d}\frac{\theta^{2}}{(1+\gamma\Theta_{1}(z))^{2}}\textrm{tr}\left((\widehat{\Sigma}_{A}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma(\widehat{\Sigma}_{B}+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma\right)=0{\quad\rm a.s.}$$ (S.4.22) Combining (S.4.21) and (S.4.22) and using Lemma S.4.8, that is, $$\Theta_{1}(z)=\frac{1}{\gamma}\left(\frac{\theta}{-z\,v(z/\theta)}-1\right),$$ we 
complete the proof. ∎ Appendix S.5 Proofs for Section 5 S.5.1 Proof of Theorem 5.1 Proof of Theorem 5.1. Let $$\displaystyle B_{k,\ell}$$ $$\displaystyle=\frac{1}{d}\beta^{\top}\left(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\right)\Sigma\left(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\right)\beta,$$ (S.5.1) $$\displaystyle V_{k,\ell}$$ $$\displaystyle=\frac{\sigma^{2}}{n^{2}}\textrm{tr}\left(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma\right).$$ (S.5.2) Applying Lemma 2.1, we can rewrite the out-of-sample prediction risk as $$R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\frac{1}{B^{2}}\sum_{k,\ell}^{B}\left(B_{k,\ell}+V_{k,\ell}\right).$$ Note that the eigenvalues of $I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}$ are either zero or one. For $k\neq\ell$, under Assumption 2, we have $$\displaystyle B_{k,\ell}$$ $$\displaystyle=\frac{1}{d}\beta^{\top}\left(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\right)\Sigma\left(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\right)\beta\leq C_{\lambda}\|\beta\|_{2}^{2}/d.$$ Furthermore, $V_{k,k}$ corresponds to the variance of the sketched least square estimator. Therefore, $$\displaystyle\lim_{B\rightarrow+\infty}\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\lim_{n,d\rightarrow+\infty}\left(B_{k,\ell}+V_{k,\ell}\right){\quad{\rm a.s.}}$$ for $k\neq\ell$. By equation (S.5.2), the variance term does not depend on $\beta$, and thus is the same as in the random signal case in Theorem 4.3. Moreover, using a similar argument as in the proof of Lemma S.3.4, the bias term in the underparameterized regime converges almost surely to zero. 
In what follows, we will prove the almost sure convergence of the bias term $B_{k,\ell}$ with $k\neq\ell$ under the overparameterized regime, that is, $$\lim_{n,d\rightarrow+\infty}B_{k,\ell}=\tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}d\tilde{G}(s){\quad{\rm a.s.}}$$ When $\gamma/\theta>1$ and $k\neq\ell$, we rewrite $B_{k,\ell}$ in (S.5.1) as $$\displaystyle B_{k,\ell}$$ $$\displaystyle=\frac{1}{d}\beta^{\top}\left(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\right)\Sigma\left(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\right)\beta$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{1}{d}\beta^{\top}(I-(\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{k})\Sigma(I-(\widehat{\Sigma}_{\ell}-zI)^{-1}\widehat{\Sigma}_{\ell})\beta$$ $$\displaystyle=\lim_{z\rightarrow 0^{-}}\frac{z^{2}}{d}\beta^{\top}(\widehat{\Sigma}_{k}-zI)^{-1}\Sigma(\widehat{\Sigma}_{\ell}-zI)^{-1}\beta,$$ where the second equality uses the identity (S.4.4), and the third uses $I-(\widehat{\Sigma}_{k}-zI)^{-1}\widehat{\Sigma}_{k}=-z(\widehat{\Sigma}_{k}-zI)^{-1}$. Furthermore, using Lemma S.3.8, we can assume that the multipliers $w_{i,j}$ are either zero or one and $\sigma^{2}=1$ without loss of generality. Let $$\displaystyle k(z):=(1-\theta)v(z/\theta),\quad\widehat{\Sigma}_{C}:=\sum_{i:w_{k,i}\neq 0,w_{\ell,i}\neq 0}\frac{1}{n}x_{i}x_{i}^{\top},\quad\tilde{\Sigma}_{C}(z):=(I+k(z)\Sigma)^{-1/2}\widehat{\Sigma}_{C}(I+k(z)\Sigma)^{-1/2},$$ $$\displaystyle\tilde{\Sigma}:=(I+k(z)\Sigma)^{-1/2}\Sigma(I+k(z)\Sigma)^{-1/2},\quad\tilde{\beta}:=(I+k(z)\Sigma)^{-1/2}\beta,$$ where $v(z)$ is the unique positive solution of equation (4.1). 
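Self-consistent equations such as (S.4.8) or (4.1) rarely admit closed forms for general $H$, but for $z<0$ their unique positive solution can be computed by fixed-point iteration, since the defining map is monotone. A hedged sketch for a discrete spectral distribution, with the aspect-ratio factor ($\gamma$, or $\gamma/\theta^{2}$ as appropriate) passed as a single constant; the function name and parameter values below are illustrative, not from the paper:

```python
import numpy as np

def solve_fixed_point(z, gamma, H_atoms, H_weights, tol=1e-12, max_iter=10_000):
    """Iterate v <- ( -z + gamma * sum_j w_j t_j / (1 + v t_j) )^{-1}.

    Illustrative sketch (not the paper's code): for z < 0 the map is
    monotone and typically contractive on v > 0, so plain iteration
    converges to the unique positive solution.
    """
    v = 1.0
    for _ in range(max_iter):
        v_new = 1.0 / (-z + gamma * np.sum(H_weights * H_atoms / (1.0 + v * H_atoms)))
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    return v

# Sigma = I, i.e. H = delta_1, with illustrative parameters z = -0.5, gamma = 2
v = solve_fixed_point(z=-0.5, gamma=2.0,
                      H_atoms=np.array([1.0]), H_weights=np.array([1.0]))
residual = v - 1.0 / (0.5 + 2.0 / (1.0 + v))   # self-consistency check
assert v > 0 and abs(residual) < 1e-10
```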
Using a similar argument as in the proof of Lemma S.4.9, we have $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{z^{2}}{d}\beta^{\top}(\widehat{\Sigma}_{k}-zI)^{-1}\Sigma(\widehat{\Sigma}_{\ell}-zI)^{-1}\beta$$ $$\displaystyle=$$ $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{z^{2}}{d}\beta^{\top}(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\Sigma(-zk(z)\Sigma+\widehat{\Sigma}_{C}-zI)^{-1}\beta$$ $$\displaystyle=$$ $$\displaystyle\lim_{n,d\rightarrow+\infty}\frac{z^{2}}{d}\tilde{\beta}^{\top}(\tilde{\Sigma}_{C}(z)-zI)^{-1}\tilde{\Sigma}(\tilde{\Sigma}_{C}(z)-zI)^{-1}\tilde{\beta}{\quad{\rm a.s.}}$$ (S.5.3) The above is equivalent to the bias term of ridge regression with covariance matrix $(I+k(z)\Sigma)^{-1}\Sigma$, coefficient vector $(I+k(z)\Sigma)^{-1/2}\beta$, and ridge regularization parameter $-z$. Using a similar argument as in (Hastie et al., 2022, Theorem 5), we obtain $$\lim_{n,d\rightarrow+\infty}\frac{z^{2}}{d}\beta^{\top}(\widehat{\Sigma}_{k}-zI)^{-1}\Sigma(\widehat{\Sigma}_{\ell}-zI)^{-1}\beta=\tilde{r_{z}}^{2}\frac{\tilde{v}^{\prime}(z,z)}{\tilde{v}(z,z)^{2}}\int\frac{s}{(1+\tilde{v}(z,z)s)^{2}}d\tilde{G}_{z}(s){\quad{\rm a.s.}}$$ where $\tilde{v}(z,x)$ is the unique positive solution of equation (S.4.8), $\tilde{r_{x}}^{2}=\beta^{\top}(I+k(x)\Sigma)^{-1}\beta$, and $\tilde{G}_{x}$ is the weak convergence limit of $$\tilde{G}_{x,d}(s)=\frac{1}{\tilde{r_{x}}^{2}}\sum_{i=1}^{d}\frac{1}{1+k(x)\lambda_{i}}\langle\beta,u_{i}\rangle^{2}\,1\left\{s\geq\frac{\lambda_{i}}{1+k(x)\lambda_{i}}\right\},$$ (S.5.4) which exists under Assumption 5. 
By Lemma S.5.1, we obtain $$\displaystyle\lim_{z\rightarrow 0^{-}}\tilde{r_{z}}^{2}\frac{\tilde{v}^{\prime}(z,z)}{\tilde{v}(z,z)^{2}}\int\frac{s}{(1+\tilde{v}(z,z)s)^{2}}d\tilde{G}_{z}(s)$$ $$\displaystyle=\tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\lim_{z\rightarrow 0^{-}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}d\tilde{G}_{z}(s)$$ $$\displaystyle=\tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}d\tilde{G}(s),$$ where in the second line we used the following inequality: $$\displaystyle\lim_{z\rightarrow 0^{-}}\left|\int\frac{s}{(1+\tilde{v}(z,z)s)^{2}}d\tilde{G}_{z}(s)-\int\frac{s}{(1+\tilde{v}(0)s)^{2}}d\tilde{G}_{z}(s)\right|\leq\lim_{z\rightarrow 0^{-}}\sup_{s\in[0,C_{\lambda}]}\left|\frac{s}{(1+\tilde{v}(z,z)s)^{2}}-\frac{s}{(1+\tilde{v}(0)s)^{2}}\right|=0.$$ Using a similar argument as in the proof of Lemma S.3.4, we can exchange the limits between $n,d\rightarrow+\infty$ and $z\rightarrow 0^{-}$. This completes the proof. ∎ S.5.2 Proof of Corollary 5.2 Proof of Corollary 5.2. 
We have $$\displaystyle\tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}\,d{G}(s)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\tilde{r}^{2}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{s}{(1+\tilde{v}(0)s)^{2}}\,d{G}_{n}(s)$$ (Assumption 5) $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\sum_{i=1}^{d}\frac{\lambda_{i}}{(1+k(0)\lambda_{i}+\tilde{v}(0)\lambda_{i})^{2}}\beta^{\top}u_{i}u_{i}^{\top}\beta$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\beta^{\top}\sum_{i=1}^{d}\frac{\lambda_{i}}{(1+v(0)\lambda_{i})^{2}}u_{i}u_{i}^{\top}\beta$$ (Lemma S.4.6) $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{d}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\textrm{tr}\left(\sum_{i=1}^{d}\frac{\lambda_{i}}{(1+v(0)\lambda_{i})^{2}}u_{i}u_{i}^{\top}\right){\quad\rm a.s.}$$ (Lemma S.6.9) $$\displaystyle={r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{t}{(1+v(0)t)^{2}}\,dH(t)$$ (Assumption 2) $$\displaystyle={r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{t(1+v(0)t)}{(1+v(0)t)^{2}}\,dH(t)-{r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{v(0)t^{2}}{(1+v(0)t)^{2}}\,dH(t)$$ $$\displaystyle={r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\frac{\theta}{\gamma}\frac{1}{v(0)}-{r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\int\frac{v(0)t^{2}}{(1+v(0)t)^{2}}\,dH(t)$$ (equation (4.1)) $$\displaystyle={r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\frac{1}{\gamma v(0)}\left(1-\gamma\int\frac{v(0)^{2}t^{2}}{(1+v(0)t)^{2}}\,dH(t)\right)-{r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\frac{1-\theta}{\gamma v(0)}$$ $$\displaystyle={r^{2}}\frac{1}{\gamma v(0)}-{r^{2}}\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}\frac{1-\theta}{\gamma v(0)}$$ (equation (S.4.20)) $$\displaystyle=r^{2}\frac{\theta}{\gamma v(0)}-r^{2}\frac{(1-\theta)}{\gamma 
v(0)}\left(\frac{\tilde{v}^{\prime}(0)}{\tilde{v}(0)^{2}}-1\right).$$ This finishes the proof. ∎ S.5.3 Proof of Theorem 5.3 Proof of Theorem 5.3. By Lemma S.1.5, we can assume, without loss of generality, that $X^{\top}S_{k}S_{k}X$ are singular. Consequently, the sketched ridgeless least square estimators $\widehat{\beta}_{k}$ interpolate the sketched data $(S_{k}X,S_{k}Y)$. This enables us to express the training error $L(\widehat{\beta}^{B};X,Y)$ as follows: $$\displaystyle L(\widehat{\beta}^{B};X,Y)$$ $$\displaystyle=\frac{1}{n}\|X(\beta-\widehat{\beta}^{B})+E\|^{2}_{2}$$ $$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}(x_{i}^{\top}(\beta-\widehat{\beta}^{B})+\varepsilon_{i})^{2}$$ $$\displaystyle=\frac{1}{n}\sum_{i=1}^{n}\frac{1}{B^{2}}\sum_{k,\ell}(x_{i}^{\top}(\beta-\widehat{\beta}_{k})+\varepsilon_{i})(x_{i}^{\top}(\beta-\widehat{\beta_{\ell}})+\varepsilon_{i})$$ $$\displaystyle=\frac{1}{B^{2}}\frac{1}{n}\sum_{k,\ell}\sum_{i:w_{k,i}=w_{\ell,i}=0}(x_{i}^{\top}(\beta-\widehat{\beta}_{k})+\varepsilon_{i})(x_{i}^{\top}(\beta-\widehat{\beta_{\ell}})+\varepsilon_{i}),$$ where the last equality uses the fact that the sketched estimators are interpolators. 
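The interpolation property used in the last step (each sketched ridgeless estimator achieves zero error on its own retained samples once $d$ exceeds the number of retained samples) can be illustrated numerically; the dimensions and the retention probability below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 40, 100                     # overparameterized: d > n
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)

w = rng.binomial(1, 0.7, size=n)   # zero-one multipliers (as in the Lemma S.3.8 reduction)
S = np.diag(w.astype(float))

# ridgeless (minimum-norm) sketched estimator: beta_k = (X^T S^2 X)^+ X^T S^2 y
beta_k = np.linalg.pinv(X.T @ S @ S @ X) @ X.T @ S @ S @ y

# zero training error on the retained samples (those with w_i != 0)
residuals = (X @ beta_k - y)[w != 0]
assert np.allclose(residuals, 0)
```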
Applying arguments similar to those in Lemma 2.1, we obtain the following decomposition: $$\displaystyle E\left[(x_{i}^{\top}(\beta-\widehat{\beta}_{k})+\varepsilon_{i})(x_{i}^{\top}(\beta-\widehat{\beta}_{\ell})+\varepsilon_{i})\;\middle|\;X,\,\mathcal{W},\,\beta\right]$$ $$\displaystyle=\beta^{\top}\Pi_{k}x_{i}x_{i}^{\top}\Pi_{\ell}\beta+\sigma^{2}(1+x_{i}^{\top}\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}x_{i}).$$ Furthermore, by Lemma S.6.9, we have $$\displaystyle\lim_{n,d\rightarrow+\infty}E\left[L(\widehat{\beta}^{B};X,Y)\;\middle|\;X,\,\mathcal{W},\,\beta\right]$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{1}{B^{2}}\frac{1}{n}\sum_{k,\ell}\sum_{i:w_{k,i}=w_{\ell,i}=0}\beta^{\top}\Pi_{k}\Sigma\Pi_{\ell}\beta+\sigma^{2}(1+\textrm{tr}(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma)){\quad\rm a.s.}$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{1}{B^{2}}\sum_{k,\ell}(1-\theta)^{2}[\beta^{\top}\Pi_{k}\Sigma\Pi_{\ell}\beta+\sigma^{2}(1+\textrm{tr}(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma))].$$ The last line follows from the same argument as in equation (S.4.15). The result then follows from equation (S.4.1). ∎ S.5.4 Proof of Lemma 5.4 Proof of Lemma 5.4. 
Following the proof of Lemma 2.1, we can decompose the risk $R_{X}(\widehat{\beta}^{B})$ as: $$R_{X}(\widehat{\beta}^{B})=\frac{1}{B^{2}}\sum_{k,\ell}\beta^{\top}(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k})\Sigma(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell})\beta+\beta^{\top}(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k})\Sigma\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}S_{\ell}E+E^{\top}S_{k}^{2}X\widehat{\Sigma}_{k}^{+}\Sigma\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}^{2}E.$$ For any $k$ and $\ell$, we define $$\displaystyle T_{1,k,\ell}$$ $$\displaystyle:=\beta^{\top}(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k})\Sigma(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell})\beta,$$ $$\displaystyle T_{2,k,\ell}$$ $$\displaystyle:=\beta^{\top}(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k})\Sigma\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}S_{\ell}E$$ $$\displaystyle T_{3,k,\ell}$$ $$\displaystyle:=E^{\top}S_{k}^{2}X\widehat{\Sigma}_{k}^{+}\Sigma\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}^{2}E.$$ Suppose the following equations hold: $$\displaystyle\lim_{n,d\rightarrow+\infty}T_{1,k,\ell}$$ $$\displaystyle\overset{{\rm a.s.}}{=}\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{d}\textrm{tr}((I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k})\Sigma(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell})),$$ (S.5.5) $$\displaystyle\lim_{n,d\rightarrow+\infty}T_{2,k,\ell}$$ $$\displaystyle\overset{{\rm a.s.}}{=}0,$$ (S.5.6) $$\displaystyle\lim_{n,d\rightarrow+\infty}T_{3,k,\ell}$$ $$\displaystyle\overset{{\rm a.s.}}{=}\lim_{n,d\rightarrow+\infty}\frac{\sigma^{2}}{n^{2}}\textrm{tr}(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma).$$ (S.5.7) From equation (S.3.1), we have $$\lim_{n,d\rightarrow+\infty}R_{X,\,\mathcal{W},\,\beta}(\widehat{\beta}^{B})=\lim_{n,d\rightarrow+\infty}T_{1,k,\ell}+\lim_{n,d\rightarrow+\infty}T_{3,k,\ell}{\quad\rm a.s.}$$ Therefore, the desired result follows. It suffices to show equations (S.5.5)-(S.5.7). 
We begin with $T_{3,k,\ell}$. By Lemma S.6.8, it suffices to show that $\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}\Sigma/n$ has bounded spectral norm. By Lemma S.1.5, when $\gamma<1$, we can, without loss of generality, assume that $(X^{\top}X)^{-1}$ exists. Then, we have $$\displaystyle\|\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}/n\|_{2}$$ $$\displaystyle=\|\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}X(X^{\top}X)^{-1}X^{\top}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}/n\|_{2}$$ $$\displaystyle\leq\|(X^{\top}X/n)^{-1}\|_{2}$$ $$\displaystyle\leq\left(1-\sqrt{\gamma}\right)^{-2}{\quad\rm a.s.}$$ where the last line follows from Lemma S.6.3. When $\gamma\geq 1$, by Lemma S.3.8, we can, without loss of generality, assume that the multipliers $w_{i,j}$ are either zero or one. Then, $$\displaystyle\|\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+}/n\|_{2}$$ $$\displaystyle=\|(X^{\top}S_{k}S_{k}X)^{+}X^{\top}S_{k}S_{k}S_{\ell}S_{\ell}X(X^{\top}S_{\ell}S_{\ell}X)^{+}\Sigma/n\|_{2}$$ $$\displaystyle\leq\frac{\|S_{k}S_{\ell}\|_{2}\|\Sigma\|_{2}}{\sqrt{\lambda^{+}_{\min}(X^{\top}S_{k}^{\top}S_{k}X/n)}\sqrt{\lambda^{+}_{\min}(X^{\top}S_{\ell}^{\top}S_{\ell}X/n)}}$$ $$\displaystyle\leq C\left(1-\sqrt{\gamma/\theta}\right)^{-2}{\quad\rm a.s.}$$ where the last line follows from Lemma S.1.6 and Assumption 2. This argument can be directly applied to $T_{1,k,\ell}$, and equation (S.5.5) holds. Finally, equation (S.5.6) for $T_{2,k,\ell}$ follows from Lemma S.6.10. ∎ S.5.5 Proof of Lemma 5.5 Proof of Lemma 5.5. 
Following the proof of Lemma 2.1, we can decompose the norm of the bagged least square estimator $\|\widehat{\beta}^{B}\|_{2}^{2}$ as: $$\displaystyle\|\widehat{\beta}^{B}\|_{2}^{2}$$ $$\displaystyle=\frac{1}{B^{2}}\sum_{k,\ell}\beta^{\top}\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\beta+\beta^{\top}\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}S_{\ell}E+E^{\top}S_{k}^{2}X\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}^{2}E.$$ Using a similar argument as in the proof of Lemma 5.4, we obtain that $\lim_{n,d\rightarrow+\infty}\beta^{\top}\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}S_{\ell}E\overset{{\rm a.s.}}{=}0$, $\lim_{n,d\rightarrow+\infty}E^{\top}S_{k}^{2}X\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{\ell}^{+}X^{\top}S_{\ell}^{2}E\overset{{\rm a.s.}}{=}\lim_{n,d\rightarrow+\infty}\frac{\sigma^{2}}{n^{2}}\textrm{tr}(\widehat{\Sigma}_{k}^{+}X^{\top}S_{k}^{2}S_{\ell}^{2}X\widehat{\Sigma}_{\ell}^{+})$ equals the limiting variance of the bagged least square estimator, and $$\displaystyle\lim_{n,d\rightarrow+\infty}\beta^{\top}\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}\beta$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{d}\textrm{tr}(\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k}\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell})$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\frac{r^{2}}{d}\textrm{tr}((I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k})(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell}))-\frac{r^{2}}{d}\textrm{tr}(I-\widehat{\Sigma}_{k}^{+}\widehat{\Sigma}_{k})-\frac{r^{2}}{d}\textrm{tr}(I-\widehat{\Sigma}_{\ell}^{+}\widehat{\Sigma}_{\ell})+r^{2}$$ $$\displaystyle=\begin{cases}r^{2},&\gamma/\theta<1\\ 
r^{2}\frac{\left(\gamma-\theta\right)^{2}}{\gamma\left(\gamma-\theta^{2}\right)}-2r^{2}\frac{\gamma/\theta-1}{\gamma/\theta}+r^{2},&\gamma/\theta>1\end{cases}$$ $$\displaystyle=\begin{cases}r^{2},&\gamma/\theta<1\\ r^{2}\frac{\theta^{2}(\gamma+1-2\theta)}{\gamma(\gamma-\theta^{2})},&\gamma/\theta>1\end{cases}{\quad\rm a.s.}$$ where the third line follows from the proof of Lemma S.3.4 and Lemma S.4.3. Therefore, the desired result follows. ∎ S.5.6 Technical lemmas Lemma S.5.1. Assume Assumption 2 and Assumption 5. Let $\tilde{G}_{x}$ be the weak convergence limit of $\tilde{G}_{x,d}$ defined in equation (S.5.4). Then, as $x\rightarrow 0^{-}$, $$\tilde{G}_{x}\rightsquigarrow\tilde{G},$$ where $\rightsquigarrow$ denotes weak convergence. Proof of Lemma S.5.1. Recall $\tilde{r_{x}}^{2}=\beta^{\top}(I+k(x)\Sigma)^{-1}\beta$. To prove the weak convergence, it suffices to show, for any bounded continuous function $f\in\mathcal{C}_{b}(\mathbb{R})$, that $$\displaystyle\lim_{x\rightarrow 0^{-}}\int f(t)d\tilde{G}_{x}(t)$$ $$\displaystyle=\lim_{x\rightarrow 0^{-}}\lim_{n,d\rightarrow+\infty}\frac{1}{\tilde{r_{x}}^{2}}\sum_{i=1}^{d}\frac{1}{1+k(x)\lambda_{i}}\langle\beta,u_{i}\rangle^{2}\,f\left(\frac{\lambda_{i}}{1+k(x)\lambda_{i}}\right)$$ $$\displaystyle=\lim_{n,d\rightarrow+\infty}\lim_{x\rightarrow 0^{-}}\frac{1}{\tilde{r_{x}}^{2}}\sum_{i=1}^{d}\frac{1}{1+k(x)\lambda_{i}}\langle\beta,u_{i}\rangle^{2}\,f\left(\frac{\lambda_{i}}{1+k(x)\lambda_{i}}\right)$$ $$\displaystyle=\int f(t)d\tilde{G}(t),$$ where the first and third equality follow from the definition of $\tilde{G}_{x}$ and continuity of $k(x)=(1-\theta)v(x/\theta)$, and we exchange the limits between $x\rightarrow 0^{-}$ and $n,d\rightarrow+\infty$ in the second equality. This obtains the desired result. It remains to prove that the limits in the above displayed equality are exchangeable. 
By the Moore-Osgood theorem, it suffices to show, as $n,d\rightarrow+\infty$, that $$T_{d}(x):=\frac{1}{\tilde{r_{x}}^{2}}\sum_{i=1}^{d}\frac{1}{1+k(x)\lambda_{i}}\langle\beta,u_{i}\rangle^{2}\,f\left(\frac{\lambda_{i}}{1+k(x)\lambda_{i}}\right)\rightarrow\int f(t)d\tilde{G}_{x}(t)$$ uniformly for all $x$ on $[-c,0]$, for some $c$ such that $0<c<+\infty$. By the Arzela–Ascoli theorem, we only need to prove that $T_{d}(x)$ is uniformly bounded and equicontinuous on $[-c,0]$. Uniform boundedness Since $f$ is bounded, there exists a constant $M$ such that $|f|\leq M$. Then, $$\displaystyle T_{d}(x)$$ $$\displaystyle=\frac{1}{\tilde{r_{x}}^{2}}\sum_{i=1}^{d}\frac{1}{1+k(x)\lambda_{i}}\langle\beta,u_{i}\rangle^{2}\,f\left(\frac{\lambda_{i}}{1+k(x)\lambda_{i}}\right)$$ $$\displaystyle\leq\frac{1}{\tilde{r_{x}}^{2}}\sum_{i=1}^{d}\frac{1}{1+k(x)\lambda_{i}}\langle\beta,u_{i}\rangle^{2}\,M$$ $$\displaystyle\leq M$$ where the last line follows from the definition of $\tilde{r_{x}}^{2}$. Uniform equicontinuity First, since $k(x)=(1-\theta)v(x/\theta)$ is a continuous function on the compact interval $[-c,0]$, it is uniformly continuous there. Furthermore, since $T_{d}(x)$ can be written as a function of $k(x)$, it suffices to prove that $T_{d}(x)$ is equicontinuous with respect to $k(x)$ on $[-c,0]$. For any $-c\leq x\leq 0$, we have $$\displaystyle\frac{\lambda_{i}}{1+k(x)\lambda_{i}}-\frac{\lambda_{i}}{1+k(0)\lambda_{i}}\leq\frac{\lambda_{i}^{2}(k(0)-k(x))}{(1+k(x)\lambda_{i})(1+k(0)\lambda_{i})}\leq\frac{k(0)-k(x)}{k(0)k(x)}\leq C(k(0)-k(x)),$$ for some constant $C>0$, where we used the fact that the continuous function $k(x)$ is bounded on the compact set $[-c,0]$. Thus, $\lambda_{i}/(1+k(x)\lambda_{i})$ is equicontinuous with respect to $k(x)$. Similarly, we can see $1/(1+k(x)\lambda_{i})$ is equicontinuous with respect to $k(x)$. Since we assumed $f$ is continuous, we obtain $f(\lambda_{i}/(1+k(x)\lambda_{i}))$ is equicontinuous with respect to $k(x)$. 
Recall $r^{2}=\|\beta\|_{2}^{2}$. Then $$\displaystyle\frac{1}{\tilde{r_{0}}^{2}}-\frac{1}{\tilde{r_{x}}^{2}}$$ $$\displaystyle=\frac{\tilde{r_{x}}^{2}-\tilde{r_{0}}^{2}}{\tilde{r_{x}}^{2}\tilde{r_{0}}^{2}}$$ $$\displaystyle=\frac{\beta^{\top}(I+k(x)\Sigma)^{-1}(k(0)-k(x))\Sigma(I+k(0)\Sigma)^{-1}\beta}{\beta^{\top}(I+k(x)\Sigma)^{-1}\beta\beta^{\top}(I+k(0)\Sigma)^{-1}\beta}$$ $$\displaystyle\leq\frac{(1+k(x)c_{\lambda})^{-1}C_{\lambda}(1+k(0)c_{\lambda})^{-1}}{r^{2}}(k(0)-k(x))$$ $$\displaystyle\leq C(k(0)-k(x))$$ for some constant $C>0$, where the second equality uses the identity $A^{-1}-B^{-1}=A^{-1}(B-A)B^{-1}$, the first inequality uses Assumption 2, and the last uses the boundedness of $k(x)$ on a compact interval. This completes the proof. ∎ Appendix S.6 Preliminary lemmas This section collects preliminary results. Assumption S.7 (Assumption 4.4.1 in Zhang (2007)). We assume the following. (i) $\left\|T_{1n}\right\|$ and $\left\|T_{2n}\right\|$ are uniformly bounded in $n$, where $\|\cdot\|$ denotes the spectral norm of a matrix. (ii) $Ex_{ij}=0,E\left|x_{ij}\right|^{2}\leq 1,\left|x_{ij}\right|\leq\delta_{n}\sqrt{n}$, with $\delta_{n}\rightarrow 0$, $$\frac{1}{\delta_{n}^{2}nN}\sum_{ij}\left(1-E\left|x_{ij}\right|^{2}\right)\rightarrow 0,$$ as $n\rightarrow\infty$. (iii) $T_{1n}$ and $T_{2n}$ are non-random. Lemma S.6.1 (Vitali convergence theorem). Let $f_{1},f_{2},\cdots$ be analytic on the domain $D$, satisfying $\left|f_{n}(z)\right|\leq M$ for every $n$ and $z\in D$. Suppose that there is an analytic function $f$ on $D$ such that $f_{n}(z)\rightarrow f(z)$ for all $z\in D$. Then it also holds that $f_{n}^{\prime}(z)\rightarrow f^{\prime}(z)$ for all $z\in D$. Lemma S.6.2 (Moore-Osgood theorem). 
If $\lim_{x\rightarrow a}f(x,y)=g(y)$ uniformly (in $y$) on $Y\backslash\{b\}$, and $\lim_{y\rightarrow b}f(x,y)=h(x)$ for each $x$ near $a$, then both $\lim_{y\rightarrow b}g(y)$ and $\lim_{x\rightarrow a}h(x)$ exist, and $$\lim_{y\rightarrow b}\lim_{x\rightarrow a}f(x,y)=\lim_{x\rightarrow a}\lim_{y\rightarrow b}f(x,y)=\lim_{\begin{subarray}{c}x\rightarrow a\\ y\rightarrow b\end{subarray}}f(x,y).$$ The $a$ and $b$ here can possibly be infinity. By combining Theorem 2 and Remark 1 of Bai and Yin (1993), we obtain the following lemma. Lemma S.6.3. Assume that the feature vector $x$ has i.i.d. entries with zero mean, unit variance, and bounded $4$-th moment. As $n,d\rightarrow+\infty$ with $d/n\rightarrow\gamma$, $0<\gamma<\infty$, $$\displaystyle\lim_{n,d\rightarrow+\infty}\lambda_{\min}^{+}(X^{\top}X/n)$$ $$\displaystyle=(1-\sqrt{\gamma})^{2}{\quad\rm a.s.}$$ $$\displaystyle\lim_{n,d\rightarrow+\infty}\lambda_{\max}(X^{\top}X/n)$$ $$\displaystyle=(1+\sqrt{\gamma})^{2}{\quad\rm a.s.}$$ where $\lambda_{\min}^{+}$ denotes the smallest positive eigenvalue. Lemma S.6.4 (Theorem A.43 in Bai and Silverstein (2010)). Let $A$ and $B$ be two $n\times n$ Hermitian matrices with their empirical spectral distributions $F^{A}$ and $F^{B}$. Then $$\left\|F^{A}-F^{B}\right\|_{\infty}\leq\frac{1}{n}\operatorname{rank}(A-B),$$ where $\|F\|_{\infty}=\sup_{x}|F(x)|$. Lemma S.6.5 (Von Neumann’s trace inequality). If $A,B$ are complex $n\times n$ matrices with singular values $$\displaystyle\alpha_{1}\geq\cdots\geq\alpha_{n},\quad\beta_{1}\geq\cdots\geq\beta_{n},$$ respectively, then $$|\operatorname{tr}(AB)|\leq\sum_{i=1}^{n}\alpha_{i}\beta_{i}$$ with equality if and only if $A$ and $B$ share singular vectors. Lemma S.6.6 (Sherman–Morrison formula). Suppose $A\in\mathbb{R}^{n\times n}$ is an invertible square matrix and $u,v\in\mathbb{R}^{n}$ are column vectors. Then $A+uv^{\top}$ is invertible iff $1+v^{\top}A^{-1}u\neq 0$. 
In this case, $$\left(A+uv^{\top}\right)^{-1}=A^{-1}-\frac{A^{-1}uv^{\top}A^{-1}}{1+v^{\top}A^{-1}u}.$$ Additionally, we will frequently use the following form: $$\left(A+uv^{\top}\right)^{-1}u=\frac{A^{-1}u}{1+v^{\top}A^{-1}u}.$$

Lemma S.6.7 (Burkholder inequality, Lemma B.26 in Bai and Silverstein, (2010)). Let $A=\left(a_{ij}\right)$ be an $n\times n$ nonrandom matrix and $X=\left(x_{1},\cdots,x_{n}\right)^{\prime}$ be a random vector with independent entries. Assume that $\mathrm{E}x_{i}=0$, $\mathrm{E}\left|x_{i}\right|^{2}=1$, and $\mathrm{E}\left|x_{j}\right|^{\ell}\leq\nu_{\ell}$. Then, for any $p\geq 1$, $$\mathbf{E}\left|X^{*}AX-\operatorname{tr}A\right|^{p}\leq C_{p}\left(\left(\nu_{4}\operatorname{tr}\left(AA^{*}\right)\right)^{p/2}+\nu_{2p}\operatorname{tr}\left(AA^{*}\right)^{p/2}\right),$$ where $C_{p}$ is a constant depending on $p$ only.

The following two lemmas are direct consequences of Lemma S.6.7 and the Borel–Cantelli lemma.

Lemma S.6.8 (Lemma C.3 in Dobriban and Wager, (2018)). Let $x\in\mathbb{R}^{d}$ be a random vector with i.i.d. entries and $\mathbb{E}[x]=0$, for which $\mathbb{E}\left[\left(\sqrt{d}x_{i}\right)^{2}\right]=\sigma^{2}$ and $\sup_{i}\mathbb{E}\left[\left|\sqrt{d}x_{i}\right|^{4+\eta}\right]<C$ for some $\eta>0$ and $C<\infty$. Moreover, let $A_{d}$ be a sequence of random $d\times d$ symmetric matrices independent of $x$, with uniformly bounded eigenvalues. Then the quadratic forms $x^{\top}A_{d}x$ concentrate around their means: $x^{\top}A_{d}x-d^{-1}\sigma^{2}\operatorname{tr}A_{d}\overset{{\rm a.s.}}{\rightarrow}0$.

Lemma S.6.9. Assume Assumptions 1–2. Then, for any triangular array of matrices $M_{d,i}$ with bounded spectral norm and independent of $x_{i}$, $1\leq i\leq n$, it holds that $$\lim_{n,d\rightarrow+\infty}\max_{i\in\{1,\dots,n\}}\left|\frac{1}{n}x_{i}^{\top}M_{d,i}x_{i}-\frac{1}{n}\textrm{tr}(M_{d,i}\Sigma)\right|=0~{}{\quad\rm a.s.}$$

Lemma S.6.10 (Lemma C.1 in Dobriban and Wager, (2018)).
Let $x_{n}\in\mathbb{R}^{n}$ and $y_{n}\in\mathbb{R}^{n}$ be independent sequences of random vectors, such that for each $n$ the coordinates of $x_{n}$ and $y_{n}$ are independent random variables. Moreover, suppose that the coordinates of $x_{n}$ are identically distributed with mean 0, variance $C/n$ for some $C>0$, and fourth moment of order $1/n^{2}$. Suppose the same conditions hold for $y_{n}$, where the distribution of the coordinates of $y_{n}$ may differ from those of $x_{n}$. Let $A_{n}$ be a sequence of $n\times n$ random matrices such that $\left\|A_{n}\right\|$ is uniformly bounded. Then $x_{n}^{\top}A_{n}y_{n}\overset{{\rm a.s.}}{\rightarrow}0$.
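Lemma S.6.6 and the frequently used form below it can be checked numerically; a minimal numpy sketch (the matrix $A$ and vectors $u,v$ are arbitrary illustrative choices, with $A$ shifted to be well-conditioned):

```python
import numpy as np

# Numerical check of the Sherman-Morrison formula:
#   (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u)
# and the frequently used form:
#   (A + u v^T)^{-1} u = A^{-1} u / (1 + v^T A^{-1} u).
rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonal shift keeps A invertible
u = rng.standard_normal(n)
v = rng.standard_normal(n)

Ainv = np.linalg.inv(A)
denom = 1.0 + v @ Ainv @ u
# Rank-one update of the inverse, as in the lemma
sm_inverse = Ainv - np.outer(Ainv @ u, v @ Ainv) / denom

direct = np.linalg.inv(A + np.outer(u, v))
assert np.allclose(sm_inverse, direct)
assert np.allclose(direct @ u, (Ainv @ u) / denom)
```

The second form follows from the first by right-multiplying with $u$ and simplifying the scalar factor, which is why it is convenient in leave-one-out arguments.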
OpenQL : A Portable Quantum Programming Framework for Quantum Accelerators N. Khammassi§ ‡ *, I. Ashraf§ ‡, J. v. Someren§ ‡, R. Nane§ ‡, A. M. Krol§ ‡, M.A. Rol¶ ‡, L. Lao§ ‡ **, K. Bertels§, C. G. Almudever§ ‡ § Quantum & Computer Engineering Dept., Delft University of Technology Delft, The Netherlands ‡ QuTech, Delft University of Technology, The Netherlands ¶ Kavli Institute of Nanoscience, Delft University of Technology, The Netherlands * Currently Affiliated to Intel Labs, Intel Corporation, Oregon, USA** Currently Affiliated to Department of Physics and Astronomy, University College London, UK Abstract With the potential of quantum algorithms to solve intractable classical problems, quantum computing is rapidly evolving and more algorithms are being developed and optimized. Expressing these quantum algorithms using a high-level language and making them executable on a quantum processor while abstracting away hardware details is a challenging task. Firstly, a quantum programming language should provide an intuitive programming interface to describe those algorithms. Then a compiler has to transform the program into a quantum circuit, optimize it and map it to the target quantum processor respecting the hardware constraints such as the supported quantum operations, the qubit connectivity, and the control electronics limitations. In this paper, we propose a quantum programming framework named OpenQL, which includes a high-level quantum programming language and its associated quantum compiler. We present the programming interface of OpenQL, we describe the different layers of the compiler and how we can provide portability over different qubit technologies. Our experiments show that OpenQL allows the execution of the same high-level algorithm on two different qubit technologies, namely superconducting qubits and Si-Spin qubits. 
Besides the executable code, OpenQL also produces an intermediate quantum assembly code (cQASM), which is technology-independent and can be simulated using the QX simulator.

Index Terms: Quantum Compiler, Quantum Computing, Quantum Circuit, Quantum Processor.

I Introduction

Since the early formulation of the foundations of quantum computing, several quantum algorithms have been designed for solving intractable classical problems in different application domains. For instance, the introduction of Shor’s algorithm [1] outlined the significant potential of quantum computing in speeding up prime factorization. Later, Grover’s search algorithm [2] demonstrated quadratic speedup over its classical implementation counterpart. The discovery of these algorithms boosted the development of different physical qubit implementations such as superconducting qubits [3], trapped ions [4] and semiconducting qubits [5]. In the absence of a fully programmable quantum computer, the implementation of these algorithms on real quantum processors is a tedious task for the algorithm designer, especially in the absence of deep expertise in qubit control electronics. In order to make a quantum computer programmable and, as with classical computers, accessible to quantum algorithm designers, several software and hardware layers are required [6]: at the highest level, an intuitive quantum programming language is needed to allow the programmer to express the quantum algorithm without worrying about the hardware details. Then, a compiler transforms the algorithm into a quantum circuit and maps and optimizes it for a given quantum processor. Ultimately, the compiler produces executable code which can be executed on the target micro-architecture controlling the qubits. A modular quantum compiler would ideally not expose low-level hardware details and their constraints to the programmer, to allow portability of the algorithm over a wide range of quantum processors and qubit technologies.
In this paper we introduce OpenQL (documentation: https://openql.readthedocs.io), an open-source (source code: https://github.com/QE-Lab/OpenQL) high-level quantum programming framework. OpenQL is mainly composed of a quantum programming interface for implementing quantum algorithms independently from the target platform, and a compiler which can compile the algorithm into executable code for various target platforms and qubit technologies such as superconducting qubits and semiconducting qubits. The rest of the paper is organized as follows. Section II provides a brief account of the related work. The necessary background for the quantum accelerator model is given in Section III. The OpenQL architecture is detailed in Section IV, followed by the discussion of the quantum programming interface provided by OpenQL in Section V. OpenQL compilation passes are presented in Section VI, where it is shown how the quantum code is decomposed, optimized, scheduled, and mapped on the target platform. Some of the works in which we utilized OpenQL to compile quantum algorithms on different quantum processors using different qubit technologies are briefly mentioned in Section VII. Finally, Section VIII concludes the paper.

II Related Work

Some of the initial work in the field of quantum compilation has been theoretical [7, 8, 9, 10, 11, 12]. Now that quantum computers are a reality, various compilation and simulation software frameworks have been developed. A list of open-source compilation projects is available at [13], and a list of quantum simulators is available at [14]. In the following, we provide a brief list of recent active works in the field of quantum compilation in chronological order. The reader is referred to a recent overview and comparison of gate-level quantum software platforms [15]. • ScaffCC has been presented as a scalable compilation and analysis tool for quantum programs [16, 17]. It is based on the LLVM compilation framework.
ScaffCC compiles the Scaffold language [18], which is a pure quantum language embedded into the classical C language. • Microsoft proposed the domain-specific language Q# [19] and the Quantum Development Kit (QDK) to compile and simulate quantum programs. At the moment, QDK does not target a real quantum computer; however, programs can be executed on the provided software backend. • ProjectQ [20] is an open-source software framework that allows the expression of a quantum program targeting IBM backend computers as well as simulators. ProjectQ allows programmers to express their programs in a language embedded in Python. Apart from low-level gate description, meta-instructions are provided to add conditional control, compute, un-compute, and repeating sections of code a certain number of times. • IBM’s Qiskit [21] is an open-source quantum software framework that allows users to express their programs in Python and compiles them to OpenQASM targeting the IBM Q Experience [22]. Qiskit allows users to explicitly allocate quantum and classical registers. Quantum operations are performed on quantum registers, and after measurement, classical results are stored in classical registers. • Quilc [23] is an open-source quantum compiler for compiling Rigetti’s Quil language [24]. The focus of the authors is on noisy intermediate-scale quantum programs, allowing programmers to compile quantum programs to byte code, which can be interpreted by control electronics. This allows programmers to execute programs not only on a software simulator but also on a real quantum processor. OpenQL has some common characteristics with the compilers above, such as being an open-source, modular quantum compilation framework that is capable of targeting different hardware backends. However, the distinctive and, at the same time, primary motivation behind OpenQL is that it is a generic and flexible compiler framework.
These requirements translated directly into the OpenQL design to support multiple configurable backends through its platform configuration file (Section V-C). Finally, OpenQL is one of the engines behind QuTech’s Quantum Inspire [25] platform, where the user can gain access to various technologies to perform quantum experiments, enabled through the use of OpenQL’s pluggable backends and its ability to generate executable code (Section VI-E).

III Quantum Accelerator Model

Accelerators are used in classical computers to speed up specific types of computation that can take advantage of the execution capabilities of the accelerator, such as massive parallelism, vectorization or fast digital signal processing. OpenQL adopts this heterogeneous computing model while using the quantum processor as an accelerator and provides a programming interface for implementing quantum algorithms involving both classical computation and quantum computation.

III-A Heterogeneous Computing

Heterogeneous computing [26, 27] is a computing model where a program is executed jointly on a general-purpose processor or host processor and an accelerator or co-processor. The general-purpose processor is capable not only of executing general computations such as arithmetic, logic or floating point operations, but also of controlling various accelerators or co-processors. The accelerators or co-processors are specialized processors designed to accelerate specific types of computation such as graphics processing, digital signal processing and other workloads that can take advantage of vectorization or massive thread-level parallelism. The accelerator can therefore speed up part of the computation traditionally executed on a general-purpose processor. The computation is then offloaded to the accelerator to speed up the overall execution of the target program.
Examples of accelerators are the Intel Xeon Phi co-processor [28], Digital Signal Processors (DSPs) [29], and Field Programmable Gate Arrays (FPGAs) [30, 31], which can also be utilized as accelerators to parallelize computations and speed up their execution. Finally, General-Purpose Computation on Graphics Processing Units (GPGPU) uses the GPU as an accelerator [32] to speed up certain types of computations.

III-B Quantum Processors as Accelerators

The OpenQL programming framework follows a heterogeneous programming model which aims to use the quantum processor as a co-processor to accelerate the part of the computation which can benefit from the quantum speedup. A quantum algorithm is generally composed of classical and quantum computations. For instance, Shor’s algorithm is a famous quantum algorithm for prime-number factoring; as shown in Figure 1, the algorithm includes classical computations such as the Greatest Common Divisor (GCD) computation, which can be executed efficiently on a traditional processor, and a quantum part such as the Quantum Fourier Transform, which should be executed on a quantum processor. OpenQL uses traditional host languages, namely C++ and Python, to define a programming interface which allows the expression of the quantum computation and the communication with the quantum accelerator: the quantum operations are executed on the quantum processor using a dedicated micro-architecture and the measurement results are collected and sent back to the host program running on the classical processor. While non-time-critical classical operations can be executed on the host processor, time-critical classical operations that need to be executed within the coherence time of the qubits, such as in quantum error correction circuits, can be offloaded to the accelerator to provide fast reaction times and avoid communication overhead between the host PC and the accelerator.
IV OpenQL Architecture

Figure 2 depicts the OpenQL framework, which exposes a high-level programming interface to the user at the top. The compiler implements a layered architecture which is composed mainly of two parts: a set of hardware-agnostic compilation passes that operate at the quantum gate level, and a set of low-level technology-specific backends which can target different quantum processors with specific control hardware. The goal of those backends is to enable compiling the same quantum algorithm for a specific qubit technology without any change in the high-level code, making the hardware details transparent to the programmer. Moreover, this architecture allows the implementation of new backends to extend the support to other qubit technologies and new control hardware whenever needed. As qubit control hardware has been evolving constantly in recent years, this flexibility and portability over a wide range of hardware is crucial. It enhances productivity and ensures the continuity of the research efforts towards a full-stack quantum computer integration. The Quantum Assembly Language (QASM) is the intermediate layer which draws the abstraction line between the high-level hardware-agnostic layers (gate-level compilation stages) and the low-level hardware-specific layers. The low-level layers are implemented inside a set of interchangeable backends, each targeting a different microarchitecture and/or a different qubit technology. The OpenQL framework is composed mainly of the following layers: • A high-level programming interface using a standard host language, namely C++ or Python, to express the target quantum algorithm as a quantum program. • A quantum gate-level compiler that transforms the quantum program into a quantum circuit, optimizes it, schedules it and maps it to the target quantum processor to comply with the different hardware constraints such as the limited qubit connectivity.
• The last stage of the gate-level compilation produces a technology-independent Common Quantum Assembly code (cQASM) [33] which describes the final quantum circuit while abstracting away the low-level hardware details such as the target instruction set architecture, or the quantum gate implementations, which differ across the different qubit technologies. For now, our compiler targets superconducting qubits and Si-Spin qubits but can easily be extended to other qubit technologies. The produced QASM code complies with the Common QASM 1.0 syntax and can be simulated in our QX simulator [34] to debug the quantum algorithm and evaluate its performance for different quantum error rates. • At the lowest level, different eQASM [35] (executable QASM) backends can be used to compile the QASM code into instructions which can be executed on a specific micro-architecture, e.g. the QuMA micro-architecture described in [36]. At this compilation level, very detailed information about the target hardware setup, stored in a hardware configuration file, is used to generate executable code which takes into account various hardware details such as the implementation of the quantum gates, the connectivity between the qubits and the control instruments, the hardware resource dependencies, the quantum operation latencies and the operational constraints.

V Quantum Programming Interface

OpenQL provides three main interfaces to the developer, namely Quantum Kernel, Quantum Program and Quantum Platform.

V-A Quantum Kernel

A Quantum Kernel is a quantum functional block which consists of a set of quantum or classical instructions and performs a specific quantum operation. For instance, one kernel could be dedicated to creating a Bell pair while another could be dedicated to teleportation or decoding.
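A kernel-based Bell-pair program of this kind can be sketched with OpenQL's Python API. This is a hedged sketch, not the paper's own code example: the platform name and the configuration file name `hardware_config.json` are placeholders, and the exact method signatures vary between OpenQL releases.

```
# Hedged sketch of kernel/program construction in OpenQL's Python API;
# names follow the public interface, but signatures differ across versions.
import openql as ql

ql.set_option('output_dir', 'output')
platform = ql.Platform('my_platform', 'hardware_config.json')  # placeholder config
num_qubits = 2

program = ql.Program('bell', platform, num_qubits)

k_init = ql.Kernel('init', platform, num_qubits)
for q in range(num_qubits):
    k_init.prepz(q)            # initialize each qubit in |0>

k_epr = ql.Kernel('epr', platform, num_qubits)
k_epr.gate('h', [0])           # Hadamard on qubit 0
k_epr.gate('cnot', [0, 1])     # entangle qubits 0 and 1

k_measure = ql.Kernel('measure', platform, num_qubits)
for q in range(num_qubits):
    k_measure.measure(q)

for k in (k_init, k_epr, k_measure):
    program.add_kernel(k)
program.compile()              # runs the gate-level passes and the backend
```

The three kernels mirror the init/epr/measure decomposition used in this section; compiling the program emits the intermediate cQASM and, for a hardware platform, the executable code.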
In OpenQL a Quantum Kernel can be created as shown in Code Example 1, where three kernels are created: i) the "init" kernel for initializing the qubits, ii) the "epr" kernel to create a Bell pair, and iii) the "measure" kernel to measure the qubits. These kernels are then added to the main program and compiled while enabling the compiler optimizations and the As Late As Possible (ALAP) scheduling scheme. In Code Example 2, the same code is written in the C++ programming language. Note that the C++ programming API is identical to the Python API. OpenQL supports standard quantum operations as listed in Table I. To allow for further flexibility in implementing quantum algorithms, custom operations can also be defined in a hardware configuration file. These operations can either be independent physical quantum operations supported by the target hardware or a composition of a set of physical operations. Once defined in the configuration file of the platform, the new operation can be used in composing a kernel like any other predefined standard operation. This allows for more flexibility when designing a quantum algorithm or a standard experiment used for calibration or other purposes.

V-B Quantum Program

As the quantum kernels implement functional blocks of a given quantum algorithm, a "quantum_program" is the container holding those quantum kernels and implementing the complete quantum algorithm. For instance, if our target algorithm is a quantum error correction circuit which includes the encoding of the logical qubit, the error syndrome measurement, the error correction and finally the decoding, we can create four distinct kernels which implement these four blocks, and we can add these kernels to our program. The program can then be compiled and executed on the target platform.

V-C Quantum Platform

A "quantum_platform" is a specification of the target hardware setup including the quantum processor and its control electronics.
The specification includes the description of the supported quantum operations and their attributes such as the duration, the built-in latency of each operation, and the mathematical description of each supported quantum operation such as its associated unitary matrix.

VI Quantum Gate-Level Compilation

The first compilation stages of OpenQL are performed at the quantum gate level while abstracting the low-level hardware implementation on the target device as much as possible. The high-level compilation stages include the decomposition of the quantum operations, the optimization and the scheduling of the decomposed quantum circuit. The gate-level compilation layers can produce a technology-agnostic quantum assembly code called common QASM (cQASM) that can be simulated using the QX Simulator [37].

VI-A Gate Decomposition

OpenQL supports the decomposition of multi-qubit gates into one- and two-qubit gates, as well as the control decomposition of multiple gates which are controlled by one or more qubits. Gates which are expressed as unitary matrices can also be decomposed into rotation and controlled-not gates.

VI-A1 Multi-qubit Gate Decomposition

In the first step, quantum gates are decomposed into a set of elementary operations from a universal gate set. For instance, as shown in Fig. 3, the Toffoli gate can be decomposed into a set of single- and two-qubit gates using different schemes such as in [38] or [39]. The decomposition of gates with more than two qubit operands is necessary to enable the later mapping stage, which can only deal with the single- and two-qubit gates that are available on the target physical implementation. Furthermore, this decomposition allows us to perform fine-grain optimization through fusing operations and extracting parallelism using gate dependency analysis.
When a physical target platform and its supported physical operations are specified in the configuration file, this decomposition lets the compiler ensure that the remaining operations are primitive operations supported by the target platform. The hardware configuration specification is detailed in Section VI-G. We note that we can disable this decomposition stage when the QX simulator backend [34] is targeted, as QX can simulate composite gates such as the Toffoli gate or arbitrary controlled rotations that are not necessarily available on many physical devices. Multi-qubit controlled gates can also be decomposed into two-qubit controlled gates as discussed in [38], based on the scheme shown in Figure 4. OpenQL further extends the facility of control decomposition to multiple gates (a kernel). This is achieved by generating the controlled version of a kernel using the controlled() API, as depicted in Code Example 3, and then applying decomposition.

VI-A2 Unitary Gate Decomposition

It has been demonstrated that a universal quantum computer can simulate any Turing machine [40] and any local quantum system [41]. A set of gates is called universal if it can be used to constitute a quantum circuit that can approximate any unitary operation to arbitrary accuracy.
$$H=\frac{1}{\sqrt{2}}\begin{bmatrix}1&1\\ 1&-1\end{bmatrix}\quad T=\begin{bmatrix}1&0\\ 0&e^{i\pi/4}\end{bmatrix}$$ (1) $$X=\begin{bmatrix}0&1\\ 1&0\end{bmatrix}\quad Y=\begin{bmatrix}0&-i\\ i&0\end{bmatrix}\quad Z=\begin{bmatrix}1&0\\ 0&-1\end{bmatrix}$$ (2) $$CNOT=\begin{bmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{bmatrix}$$ (3) It has been proven that any unitary operation can be approximated to arbitrary accuracy by using only single-qubit gates such as those given in equations 1 and 2 and the CNOT gate, as given in equation 3 [38]. A unitary matrix is used to represent each quantum operation of our quantum circuit to enable decomposition and fusing of quantum operations. The unitary matrix representation of gates is a useful mathematical tool which allows the compiler to efficiently fuse quantum operations using simple matrix multiplications and Kronecker product computations. Combining quantum gates is particularly useful for reducing the number of quantum operations and thus the overall execution time of a quantum algorithm, so as to perform the largest possible number of quantum operations within the coherence time of the qubits. For instance, a set of single-qubit rotations can be cancelled out if their fusion is equivalent to an identity operation, which can then be removed from the quantum circuit. Any quantum gate can be fully specified using a unitary matrix, and any unitary matrix can be decomposed into a finite number of gates from some universal set. In OpenQL, this is achieved using Quantum Shannon Decomposition [42], as shown in Figure 5, which has been implemented using the C++ Eigen library [43]. The universal set of gates used consists of the arbitrary y-rotation, the arbitrary z-rotation and the controlled-not gate.
The matrices for these are shown in equations 3 and 4. $$R_{y}(\theta)=\begin{bmatrix}\cos\theta/2&\sin\theta/2\\ -\sin\theta/2&\cos\theta/2\end{bmatrix}\quad R_{z}(\theta)=\begin{bmatrix}e^{-i\theta/2}&0\\ 0&e^{i\theta/2}\end{bmatrix}$$ (4) At each level of the recursion, a unitary gate $U$ is decomposed into four unitary gates spanning one less qubit, and three uniformly controlled rotation gates. The latter are decomposed using the technique from [44], and the algorithm is called again on the smaller unitary gates. This recursion continues until the one-qubit unitary gates can be implemented using ZYZ-decomposition [45]. For an $n$-qubit unitary, the decomposition results in $U(n)=\frac{3}{2}\cdot 4^{n}-\frac{3}{2}\cdot 2^{n}$ rotation gates and $C(n)=\frac{3}{4}\cdot 4^{n}-\frac{3}{2}\cdot 2^{n}$ controlled-not gates. These gates are added to the circuit and passed on to the next stages of the compilation.

VI-B Gate-Level Optimization

VI-B1 Gate Dependency Analysis

Once the quantum operations have been decomposed into a sequence of elementary operations, the gate dependencies are analyzed and represented in the form of a Directed Acyclic Graph (DAG) where the nodes represent the quantum gates and the edges the dependencies between them. We refer to this graph as the Gate Dependency Graph (GDG). Besides extracting the parallelism from the quantum circuit, the GDG allows reordering the gates with respect to their dependencies and helps extract local gate sequences that can potentially be fused into shorter sequences of operations or even cancelled out if equivalent to an identity gate. This allows reducing the overall circuit depth and thus the algorithm execution time. The fidelity can also be greatly improved as more operations can be executed within the qubit coherence time. Figure 6 shows the gate dependency graph of a quantum circuit and a potential gate sequence optimization.
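The core of such a dependency analysis can be sketched in a few lines of Python: a gate depends on the most recent earlier gate that touches any of the same qubits. This simplified sketch serializes every shared-qubit pair; a production GDG (such as OpenQL's) additionally distinguishes commuting cases, e.g. two CZ gates sharing a control. The example circuit is illustrative, not taken from the paper.

```python
# Build gate-dependency edges: gate j depends on the last earlier gate
# acting on any qubit that j also acts on.
circuit = [('h', [0]), ('cnot', [0, 1]), ('x', [2]), ('cnot', [1, 2]), ('h', [0])]

def dependency_edges(circuit):
    last_use = {}   # qubit -> index of the last gate that touched it
    edges = set()
    for j, (_name, qubits) in enumerate(circuit):
        for q in qubits:
            if q in last_use:
                edges.add((last_use[q], j))  # gate j must follow that gate
            last_use[q] = j
    return sorted(edges)

print(dependency_edges(circuit))  # → [(0, 1), (1, 3), (1, 4), (2, 3)]
```

Gates with no path between them in this DAG (here, the `x` on qubit 2 and the first two gates) can be scheduled in parallel.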
We note that without gate dependency analysis, some optimization opportunities can be missed, as those gate sequences may be split into small scattered chunks that are not necessarily specified back-to-back in the original algorithm.

VI-B2 Gate Sequence Optimization

Gate sequence optimization uses the unitary representation of quantum gates to approximate the overall unitary operation. For instance, the equivalent unitary operation of a sequence of quantum gates operating on the same qubit can be obtained through matrix multiplication. The equivalent operation could be i) an identity that can be compiled out of the circuit, ii) an operation that can be implemented using a shorter sequence of elementary gates, or iii) an operation that can be approximated using a shorter sequence of elementary operations. In order to control the accuracy of the compilation process, the compiler computes the distance between the target sequence of operations and the new set of elementary operations. The optimization takes place only if that distance is smaller than the allowed error, which is specified as a compilation parameter that can be controlled by the user to achieve the desired accuracy. OpenQL uses a sliding window over each sequence of gates to locally fuse quantum operations whenever possible. The size of the sliding window is critical to the compilation complexity, which grows linearly with the number of gates.

VI-B3 Gate Scheduling

Gate scheduling aims to use gate-dependency analysis to extract parallelism and schedule the operations in parallel while respecting dependencies. It uses the knowledge of the duration of each gate, as specified in the platform’s configuration file, to determine the cycle at which each gate can potentially start its execution. OpenQL gate scheduling can perform three types of scheduling: ASAP (As Soon As Possible), ALAP (As Late As Possible) or Uniform ALAP.
• In an ASAP schedule, the cycle values are minimal, but many gates may be executed at the start of the circuit, leading to longer idle stretches between successive gates operating on the same qubit and thus to a lower fidelity. • At the other extreme, in an ALAP schedule the cycle values are maximal under the constraint that the total execution time of the circuit is equal to that of an ASAP schedule of the same circuit. While relatively few gates are executed per cycle at the start of the circuit, many gates will on average be executed at the end. Executing gates as late as possible is good for fidelity, but executing many gates per cycle may exceed what the control electronics of the quantum computer were designed for, potentially leading to buffer overflows and therefore to the requirement of a local feedback system to hold gates off, effectively making the execution time of a circuit longer. • The Uniform ALAP schedule aims to produce an ALAP schedule with a balanced number of gates per cycle over the whole execution of the circuit. This scheduling scheme is based on [46]. It starts by creating an ASAP schedule and then performs a backward pass over the circuit in an ALAP fashion: filling cycles with gates by moving them towards the end while respecting the dependencies. For each of these three scheduler types, dependencies and gate durations primarily determine the result. However, the scheduler may need to respect additional constraints, especially for real targets. These are mainly hardware constraints, for example those of the control electronics, that limit the parallelism [47]. Using resource descriptions of those control electronics in the hardware configuration file, the gate scheduler optionally produces an ASAP, an ALAP or a Uniform ALAP schedule which respects these resource constraints.
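The ASAP variant above can be sketched in a few lines: each gate starts at the earliest cycle at which all of its operand qubits are free, given per-gate durations in cycles. The durations and the circuit are illustrative placeholders, not values from a real platform configuration file, and resource constraints (control-electronics sharing) are omitted.

```python
# Minimal ASAP scheduling sketch: start each gate at the earliest cycle
# consistent with when its operand qubits become available.
durations = {'h': 2, 'x': 2, 'cnot': 4}   # illustrative gate durations, in cycles
circuit = [('h', [0]), ('x', [1]), ('cnot', [0, 1]), ('h', [1])]

def asap_schedule(circuit, durations):
    qubit_free = {}        # qubit -> first cycle at which it is free again
    start_cycles = []
    for name, qubits in circuit:
        start = max((qubit_free.get(q, 0) for q in qubits), default=0)
        start_cycles.append(start)
        for q in qubits:
            qubit_free[q] = start + durations[name]
    return start_cycles

print(asap_schedule(circuit, durations))  # → [0, 0, 2, 6]
```

The two single-qubit gates land in cycle 0 in parallel; the CNOT must wait for both qubits, and the final Hadamard waits for the CNOT to finish. An ALAP schedule is obtained by running the same pass backwards over the reversed circuit.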
The main, and from a hardware-design perspective crucial, property of the resulting schedules is that the hardware can execute gates in the cycles determined by the scheduler, as in a Very Long Instruction Word (VLIW) processor, without the need to track whether gates are ready; this significantly reduces the complexity and size of the hardware.

VI-C Mapping of quantum circuits

The OpenQL compiler also includes the Qmap mapper [47], which is responsible for creating a version of the circuit that respects the processor constraints. The main constraints include the elementary gate set, the qubit topology that usually limits the interaction between qubits to only nearest-neighbour (NN) ones, and the control electronics constraints, e.g. a single Arbitrary Waveform Generator (AWG) is used to operate on a group of qubits. In order to adapt the circuit to these quantum hardware characteristics, the Qmap mapper: i) performs an initial placement of the qubits in which virtual qubits (qubits in the circuit) are mapped to the hardware qubits (physical qubits in the chip); ii) moves non-neighbouring qubits to adjacent positions to perform a two-qubit gate; and iii) re-schedules the quantum operations respecting their dependencies and all hardware constraints. Note that it uses the hardware properties that are described in the configuration file. The mapper aims to find the best qubit placement. Ideally, qubits can be placed in a way that all two-qubit interactions (two-qubit gates) present in the quantum program are allowed without the need for any movement. However, this is rarely the case when the program is designed without considering the placement beforehand. Often qubit routing is required to perform two-qubit operations between non-neighbouring qubits when the optimal placement does not allow direct interaction between them. From this perspective, qubit routing can be considered a critical component of qubit mapping which allows such conflicts to be resolved.
OpenQL supports this with two algorithms, applied in sequence:
• Initial Placement: This first pass aims to find the optimal qubit placement in the target physical device so that two-qubit operations can be performed at the lowest possible cost. Currently, OpenQL can detect where constraint violations, and thus illegal two-qubit gates between non-neighbouring qubits, appear. It tries to find a mapping of the qubits that minimizes the overhead and enables the qubit interactions. The mapper does this using an Integer Linear Programming (ILP) algorithm as explained in [48]. This approach works perfectly on smaller circuits but takes too much execution time on longer circuits because of exponential scaling.
• Qubit router: The second pass guarantees that two-qubit gate operations on non-neighbouring qubits can be performed by inserting a series of gates, e.g. SWAP gates, that move qubits to neighbouring places. For each such two-qubit gate operation, it determines the distance between the qubits and, when they are too far apart, evaluates all possible ways to make them nearest neighbours. To do so, it evaluates all possible shortest paths and chooses the one that, for instance, results in the minimum increase of the circuit depth (number of cycles). Then the corresponding 'move' operations are inserted in the program. Note that after the mapping the number of gates and the circuit depth will increase, increasing the failure rate and thus reducing the algorithm's reliability.

VI-D Technology-Independent Common QASM

After gate decomposition, quantum circuit optimization and gate scheduling, a cQASM compiler is responsible for producing a technology-independent common quantum assembly code called cQASM. Currently cQASM 1.0 [33] is used to describe the circuit at the gate level and allows the user to simulate the execution of the quantum algorithm using the QX Simulator [34].
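An illustrative cQASM 1.0 fragment is shown below. It is hand-written for this discussion (the exact syntax is defined by the cQASM 1.0 specification [33]) and shows a named sub-circuit, a bundle, and a SIMD operation:

```
version 1.0
qubits 2

.grover
    { h q[0] | h q[1] }    # bundle: gates starting in the same cycle
    cz q[0], q[1]
    { h q[0] | h q[1] }
    measure q[0:1]         # SIMD notation: one gate on a range of qubits
```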
The simulation allows the programmer to verify the correctness of the quantum algorithm, or to simulate and evaluate its behaviour on noisy quantum computing devices. cQASM 1.0 aims to enable the description of quantum circuits while abstracting away the hardware details; for instance, the statement H q[1] describes a Hadamard gate on qubit q[1] without specifying the low-level implementation of that quantum operation on a specific qubit technology. Besides the description of common quantum operations, cQASM 1.0 allows the specification of parallelism in the quantum circuit in the form of 'bundles' (lists of gates starting in the same cycle) and 'SIMD operations' (a gate operating on a range of qubits). This allows the OpenQL scheduler to express in cQASM 1.0 the parallelism that it found. cQASM 1.0 also allows the naming of quantum circuit sections or 'sub-circuits'; these sub-circuits correspond to the names of the quantum kernels and allow the user to relate the produced cQASM to the high-level algorithm written in Python or C++. In cQASM code example 4, we see the scheduled code produced for the Grover search algorithm.

VI-E Technology-Dependent Compilation: eQASM

After compiling the technology-independent QASM code, the compiler generates the Executable QASM (eQASM), which targets specific control hardware. The compiler uses different eQASM compilation backends depending on the target platform specified in the hardware configuration file. The eQASM compiler can reschedule the quantum operations to exploit the available parallelism on the target micro-architecture, and map the quantum circuit based on the topology of the target qubit chip and the connectivity of the control hardware.

VI-F Quantum Computer Micro-Architecture

OpenQL currently has several backends capable of generating executable quantum assembly code (eQASM) for two different microarchitectures, discussed in [36] and [35].
The backends convert the compiled cQASM code into eQASM code specific to the target microarchitecture, respecting hardware constraints such as the available parallelism and the timing constraints.

VI-F1 Temporal Transformation: Low-level Scheduling

While the QASM-level scheduler pass extracts all the available gate-level parallelism, the target platform can have limited parallelism due to control electronics constraints. After analyzing the quantum gate dependencies, the compiler schedules the instructions either 'As Late As Possible' (ALAP) or 'As Soon As Possible' (ASAP) with respect to the gate dependencies and the cycle-accurate durations of the different gates.

VI-F2 Spatial Transformation: Connectivity-Aware Mapping

The OpenQL compiler maps the qubits with respect to the qubit plane topology, which specifies operation constraints such as nearest-neighbour interactions or limitations on operation parallelism. The current version of OpenQL relies on the two-qubit instruction specification in the hardware configuration file to extract these constraints, but the mapping task is being shifted to the mapping layer at the gate level, which will use a dedicated mapping specification in the hardware configuration file and more advanced mapping techniques.

VI-F3 eQASM Execution Monitoring

Tracing the instruction execution and the timing of the different signals controlling the qubits is critical for debugging and monitoring the hardware. The OpenQL compiler generates auxiliary outputs for tracing purposes, such as timed instructions and a graphical timing diagram as shown in Fig. 8. In this timing diagram, both the digital and analog signals are shown with their respective starting times and durations. Each signal refers back to both its originating eQASM instruction and the originating cQASM instruction, with the precise execution clock cycle. When the compiler compensates for latencies in a given channel, both the original and the compensated timing are shown.
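Latency compensation itself can be modelled as shifting each instruction's issue cycle earlier by its channel latency, so that all signals arrive at the qubits at the intended cycle. The sketch below is illustrative only; the channel names, latencies, and 5 ns cycle time are invented values, not OpenQL's API:

```python
# Illustrative latency compensation: instructions on slower channels are
# issued earlier so their effects line up at the intended cycle.

def compensate(timed_instrs, latency_ns, cycle_ns=5):
    """timed_instrs: list of (intended_cycle, channel, instr).
    Returns the instructions with issue cycles shifted earlier by the
    per-channel latency (expressed in whole cycles), sorted by cycle."""
    out = []
    for cycle, channel, instr in timed_instrs:
        shift = latency_ns[channel] // cycle_ns
        out.append((cycle - shift, channel, instr))
    return sorted(out)

program = [(10, "awg_0", "rx180 q0"), (10, "awg_1", "rx180 q1")]
latency = {"awg_0": 10, "awg_1": 20}   # ns, per control channel
print(compensate(program, latency))
```

Both gates are intended to act at cycle 10; the channel with the larger latency is simply issued earlier.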
VI-G Hardware Configuration Specification: Control Electronics

In order to compile the produced QASM instructions into executable instructions (e.g. eQASM), the compiler needs to know not only the instruction set supported by the target microarchitecture but also the specification of all the constraints related to hardware resource usage, operation timing, qubit connectivity, etc. The hardware specification file aims to provide this information in an abstract way, to allow describing different architectures and to enable the compiler to adapt to their constraints and requirements when producing the executable code. This allows extending the compiler support to many architectures without fundamental changes in its upper, technology-independent layers. The following JSON code describes the hardware setup and lists all the supported operations and their settings, such as the number of qubits, the time scale, the operation dependencies, their timing parameters, their mathematical description and the associated instruction set.
{
  "eqasm_compiler" : "qumis_compiler",

  "hardware_settings": {
    "qubit_number": 2,
    "cycle_time" : 5,
    "mw_mw_buffer": 0,
    "mw_flux_buffer": 0,
    …
  },

  "instructions": {
    "rx180 q1" : {
      "duration": 40,
      "latency": 20,
      "qubits": ["q1"],
      "matrix" : [ [0.0,0.0], [1.0,0.0],
                   [1.0,0.0], [0.0,0.0] ],
      "disable_optimization": false,
      "type" : "mw",
      "qumis_instr": "pulse",
      "qumis_instr_kw": {
        "codeword": 1,
        "awg_nr": 2
      }
    },
    "rx180 q0" : {
      "duration": 40,
      "latency": 10,
      "qubits": ["q0"],
      "matrix" : [ [0.0,0.0], [1.0,0.0],
                   [1.0,0.0], [0.0,0.0] ],
      "disable_optimization": false,
      "type" : "mw",
      "qumis_instr": "codeword_trigger",
      "qumis_instr_kw": {
        "codeword_ready_bit": 0,
        "codeword_ready_bit_duration" : 5,
        "codeword_bits": [1, 2, 3, 4],
        "codeword": 1
      }
    },
    "prepz q0" : {
      "duration": 100,
      "latency": 0,
      "qubits": ["q0"],
      "matrix" : [ [1.0,0.0], [0.0,0.0],
                   [0.0,0.0], [1.0,0.0] ],
      "disable_optimization": true,
      "type" : "mw",
      "qumis_instr": "trigger_sequence",
      "qumis_instr_kw": {
        "trigger_channel": 4,
        "trigger_width": 0
      }
    }
  },

  "gate_decomposition": {
    "x q0" : ["rx180 q0"],
    "y q0" : ["ry180 q0"],
    "z q0" : ["ry180 q0","rx180 q0"],
    "h q0" : ["ry90 q0"],
    "cnot q0,q1" : ["ry90 q1","cz q0,q1","ry90 q1"]
  },

  "resources" : {
  },

  "topology" : {
  }
  …
}
The sections of the hardware configuration file are organized as follows: •
eqasm_compiler: this section specifies the executable QASM (eQASM) compiler backend which should be used to generate the executable code. This allows the compiler to target different microarchitectures using the appropriate backend.
• instructions: in this section, the quantum operations supported by the target platform are described by their duration, their latency in the control system, their unitary matrix representation, their type (microwave, flux or readout) and finally microarchitecture-specific information that enables the compiler to generate the executable code.
– Instruction Properties
* duration (int): duration of the operation in ns
* latency (int): latency of the operation in ns
* qubits (list): list of qubits affected by this operation (this includes the qubits which are directly used or made inaccessible by this operation)
* matrix (matrix): the unitary matrix representation of the quantum operation
* disable_optimization (bool): setting this field to true prevents the compiler from compiling away or optimizing the operation
* type (str): one of 'mw' (microwave), 'flux', 'readout' or 'none'
– Microarchitecture-Specific Properties
* qumis_instr (str): one of wait, pulse, trigger, CW_trigger, dummy, measure
* qumis_instr_kw (dict): dictionary containing keyword arguments for the qumis instruction
• gate_decomposition: this section describes the decomposition of coarse-grain quantum operations into the elementary operations defined in the previous section. Each composite instruction in this section is defined by its equivalent quantum gate sequence; for instance, a CNOT gate can be decomposed into a sequence of ry90 and cz operations, as in the example above.
• resources: describes the various hardware constraints that are used by the hardware-constrained scheduling algorithm
• topology: describes the qubit grid topology, i.e.
qubits and their connections for performing two-qubit gates.

The operation duration, latency and target qubits are used by the eQASM backend to analyze the dependencies of the instructions. This information is critical for different compilation stages; for instance, the duration of an instruction and its qubit dependencies are crucial for the low-level hardware-dependent scheduling stage, which uses this information to schedule the instructions. The latency field is used by the backend compiler to compensate for the instruction latency by adjusting the instruction starting times so as to synchronize different channels with different latencies. Different latencies can exist in different control channels due to propagation delays through different cables, and control latencies in waveform generators or readout hardware.

VII OpenQL Application

OpenQL has been used to program several experiments and algorithms on various quantum computer architectures and on different qubit technologies, namely superconducting and semiconducting qubits.

VII-A Superconducting Qubit Experiments

We used OpenQL to compile quantum code and implement various experiments on several quantum chips with 2, 5 and 7 qubits, using two different microarchitectures, namely QuMA 1.0 [36] and QuMA 2.0 [49], for controlling the qubits with two different instruction sets. We implemented several standard experiments such as Clifford-based Randomized Benchmarking (RB) [50], AllXY [51] and other calibration routines, such as Rabi oscillation [51]. For each experiment, the same high-level OpenQL code has been reused on different setups and devices without changes; only the hardware configuration file has been changed, to specify each target hardware setup and its constraints and thus instruct the compiler how to generate the appropriate code for each platform. Apart from the above basic experiments, OpenQL has also been used to compile code for the following applications: 1. Net-zero two-qubit gate [52] 2.
3-qubit repeated parity checks [53] 3. Variational quantum eigensolver [54] 4. Calculating energy derivatives in quantum chemistry [55]

VII-B Semiconducting Qubit

In order to evaluate the portability of OpenQL across different qubit technologies, the AllXY experiment has been reproduced on both a superconducting qubit and a semiconducting qubit using the same code and different configuration files. We used a Si-spin qubit device [5] controlled by different control electronics; the hardware configuration file was changed to reflect the control setup and enable the compiler to automatically adapt the generated code to the target system: the compiler took into account the latencies of the different signal generators and measurement units involved in the setup, and rescheduled all the quantum operations accordingly to compensate for those latencies and provide coherent qubit control.

VIII Conclusion

In this paper we presented the OpenQL quantum programming framework, which includes a high-level quantum programming language and its compiler. A quantum program can be expressed using the C++ or Python interface, and the compiler translates this high-level program into a common QASM (cQASM) to target simulators. The program can further be compiled for a specific architecture to target a physical quantum computer. OpenQL has been used for implementing several experiments and quantum algorithms on several quantum computer architectures, targeting both superconducting and semiconducting qubit technologies.

Acknowledgment

This project is funded by Intel Corporation. The authors would like to thank Prof. L. DiCarlo and his team for giving us the opportunity to test OpenQL on various superconducting qubit systems, and Prof. L. Vandersypen and his team for allowing us to test OpenQL on a Si-spin qubit device. The authors would also like to thank all members of the Quantum Computer Architecture Lab at TU Delft for their valuable feedback and suggestions.

References

[1] P. W.
Shor, “Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer,” SIAM J. Comput., vol. 26, no. 5, pp. 1484–1509, Oct. 1997. [Online]. Available: http://dx.doi.org/10.1137/S0097539795293172 [2] L. K. Grover, “Quantum mechanics helps in searching for a needle in a haystack,” Physical Review Letters, vol. 79, no. 2, pp. 325–328, 1997. [Online]. Available: http://link.aps.org/doi/10.1103/PhysRevLett.79.325 [3] R. Versluis, S. Poletto, N. Khammassi, B. Tarasinski, N. Haider, D. J. Michalak, A. Bruno, K. Bertels, and L. DiCarlo, “Scalable quantum circuit and control for a superconducting surface code,” Phys. Rev. Applied, vol. 8, no. 3, p. 034021, 2017. [4] C. Monroe and J. Kim, “Scaling the ion trap quantum processor,” Science, vol. 339, no. 6124, pp. 1164–1169, 2013. [5] T. Watson, S. Philips, E. Kawakami, D. Ward, P. Scarlino, M. Veldhorst, D. Savage, M. Lagally, M. Friesen, S. Coppersmith, M. Eriksson, and L. Vandersypen, “A programmable two-qubit quantum processor in silicon,” Nature, vol. 555, 03 2018. [6] C. G. Almudever, L. Lao, X. Fu, N. Khammassi, I. Ashraf, D. Iorga, S. Varsamopoulos, C. Eichler, A. Wallraff, L. Geck et al., “The engineering challenges in quantum computing,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017.   IEEE, 2017, pp. 836–845. [7] S. Bettelli, T. Calarco, and L. Serafini, “Toward an architecture for quantum programming,” The European Physical Journal D - Atomic, Molecular, Optical and Plasma Physics, 2003. [Online]. Available: https://doi.org/10.1140/epjd/e2003-00242-2 [8] B. Ömer, “A procedural formalism for quantum computing,” Tech. Rep., 1998. [9] P. Selinger, “Towards a quantum programming language,” Mathematical Structures in Computer Science, vol. 14, no. 4, p. 527–586, 2004. [10] P. Selinger and B. Valiron, “A lambda calculus for quantum computation with classical control,” in P. Urzyczyn (ed.), Typed Lambda Calculi and Applications, TLCA 2005.
Lecture Notes in Computer Science, vol. 3461, 2005. [11] M. Zorzi, “On quantum lambda calculi: a foundational perspective,” Mathematical Structures in Computer Science, vol. 26, no. 7, p. 1107–1195, 2016. [12] J. Paykin, R. Rand, and S. Zdancewic, “Qwire: A core language for quantum circuits,” in Proceedings of the 44th ACM SIGPLAN Symposium on Principles of Programming Languages, ser. POPL 2017.   New York, NY, USA: Association for Computing Machinery, 2017, p. 846–858. [Online]. Available: https://doi.org/10.1145/3009837.3009894 [13] M. Fingerhuth, “Open-source quantum software projects.” [Online]. Available: https://github.com/qosf/awesome-quantum-software#quantum-compilers [14] M. Fingerhuth, “Open-source quantum software projects.” [Online]. Available: https://github.com/qosf/awesome-quantum-software#quantum-simulators [15] R. LaRose, “Overview and comparison of gate level quantum software platforms,” Quantum, vol. 3, p. 130, Mar 2019. [Online]. Available: http://dx.doi.org/10.22331/q-2019-03-25-130 [16] A. JavadiAbhari, S. Patil, D. Kudrow, J. Heckey, A. Lvov, F. T. Chong, and M. Martonosi, “Scaffcc: A framework for compilation and analysis of quantum computing programs,” in Proceedings of the 11th ACM Conference on Computing Frontiers, ser. CF ’14.   New York, NY, USA: ACM, 2014, pp. 1:1–1:10. [Online]. Available: http://doi.acm.org/10.1145/2597917.2597939 [17] A. J. Abhari, S. Patil, D. Kudrow, J. Heckey, A. Lvov, F. T. Chong, and M. Martonosi, “Scaffcc: Scalable compilation and analysis of quantum programs,” 2015. [18] A. J. Abhari, A. Faruque et al., “Scaffold: Quantum programming language,” 2012. [19] K. Svore, M. Roetteler, A. Geller, M. Troyer, J. Azariah, C. Granade, B. Heim, V. Kliuchnikov, M. Mykhailova, and A. Paz, “Q#: Enabling scalable quantum computing and development with a high-level dsl,” Proceedings of the Real World Domain Specific Languages Workshop 2018 on - RWDSL2018, 2018. [Online].
Available: http://dx.doi.org/10.1145/3183895.3183901 [20] D. S. Steiger, T. Häner, and M. Troyer, “ProjectQ: an open source software framework for quantum computing,” Quantum, vol. 2, p. 49, Jan. 2018. [Online]. Available: https://doi.org/10.22331/q-2018-01-31-49 [21] H. Abraham et al., “Qiskit: An open-source framework for quantum computing,” 2019. [22] IBM, “IBM Quantum Experience,” https://www.research.ibm.com/ibm-q/. [23] R. S. Smith, E. C. Peterson, M. G. Skilbeck, and E. J. Davis, “An open-source, industrial-strength optimizing compiler for quantum programs,” 2020. [24] R. S. Smith, M. J. Curtis, and W. J. Zeng, “A practical quantum instruction set architecture,” 2016. [25] QuTech, “Quantum Inspire: The multi hardware Quantum Technology platform,” https://www.quantum-inspire.com/. [26] M. Zahran, “Heterogeneous computing: Here to stay,” Queue, vol. 14, no. 6, p. 31–42, Dec. 2016. [Online]. Available: https://doi.org/10.1145/3028687.3038873 [27] P. Rogers, “Chapter 2 - hsa overview,” in Heterogeneous System Architecture, W. mei W. Hwu, Ed.   Boston: Morgan Kaufmann, 2016, pp. 7 – 18. [Online]. Available: http://www.sciencedirect.com/science/article/pii/B9780128003862000018 [28] J. Jeffers and J. Reinders, Intel Xeon Phi Coprocessor High Performance Programming, 1st ed.   San Francisco, CA, USA: Morgan Kaufmann Publishers Inc., 2013. [29] Texas Instruments, “OMAP3530 Application Processors,” http://www.ti.com/product/omap3530. [30] Xilinx, “Zynq-7000 All Programmable SoC,” http://www.xilinx.com/products/silicon-devices/soc/zynq-7000. [31] S. Vassiliadis, S. Wong, G. N. Gaydadjiev, K. Bertels, G. Kuzmanov, and E. Moscu Panainte, “The molen polymorphic processor,” IEEE Transactions on Computers, vol. 53, pp. 1363–1375, 2004. [32] D. Luebke, M. Harris, N. Govindaraju, A. Lefohn, M. Houston, J. Owens, M. Segal, M. Papakipos, and I. Buck, “Gpgpu: General-purpose computation on graphics hardware,” in Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, ser. 
SC ’06.   New York, NY, USA: Association for Computing Machinery, 2006, p. 208–es. [Online]. Available: https://doi.org/10.1145/1188455.1188672 [33] N. Khammassi, G. Guerreschi, I. Ashraf, J. Hogaboam, C. Almudever, and K. Bertels, “cQASM v1.0: Towards a common quantum assembly language,” arXiv preprint arXiv:1805.09607, 2018. [34] N. Khammassi, I. Ashraf, X. Fu, C. Almudever, and K. Bertels, “QX: A high-performance quantum computer simulation platform,” IEEE 2017 Design, Automation & Test in Europe Conference & Exhibition (DATE), pp. 464–469, March 2017. [35] X. Fu, L. Riesebos, M. A. Rol, J. van Straten, J. van Someren, N. Khammassi, I. Ashraf, R. F. L. Vermeulen, V. Newsum, K. K. L. Loh, J. C. de Sterke, W. J. Vlothuizen, R. N. Schouten, C. G. Almudever, L. DiCarlo, and K. Bertels, “eQASM: An executable quantum instruction set architecture,” 2018. [36] X. Fu, M. A. Rol, C. C. Bultink, J. van Someren, N. Khammassi, I. Ashraf, R. F. L. Vermeulen, J. C. de Sterke, W. J. Vlothuizen, R. N. Schouten, C. G. Almudever, L. DiCarlo, and K. Bertels, “An experimental microarchitecture for a superconducting quantum processor,” in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, ser. MICRO-50 ’17.   New York, NY, USA: ACM, 2017, pp. 813–825. [Online]. Available: http://doi.acm.org/10.1145/3123939.3123952 [37] N. Khammassi, I. Ashraf, X. Fu, C. G. Almudéver, and K. Bertels, “QX: A high-performance quantum computer simulation platform,” in Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017.   IEEE, 2017, pp. 464–469. [38] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information: 10th Anniversary Edition, 10th ed.   New York, NY, USA: Cambridge University Press, 2011. [39] M. Amy, D. Maslov, M. Mosca, and M. Roetteler, “A meet-in-the-middle algorithm for fast synthesis of depth-optimal quantum circuits,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 32, pp.
818–830, 2013. [40] D. Deutsch, “Quantum theory, the church-turing principle and the universal quantum computer,” vol. 400, pp. 97–117, 1985. [41] S. Lloyd, “Universal quantum simulators,” Science, vol. 273, no. 5278, pp. 1073–1078, 1996. [Online]. Available: https://science.sciencemag.org/content/273/5278/1073 [42] V. Shende, S. S. Bullock, and I. Markov, “Synthesis of quantum logic circuits,” Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol. 25, pp. 1000 – 1010, July 2006. [43] B. J. (founder), G. G. (guru), and many more, “The eigen documentation,” 2019, accessed on: 04-09-2019. [Online]. Available: http://eigen.tuxfamily.org/index.php?title=Main_Page [44] M. Möttönen, J. J. Vartiainen, V. Bergholm, and M. M. Salomaa, “Quantum circuits for general multiqubit gates,” Phys. Rev. Lett., vol. 93, p. 130502, Sep 2004. [45] A. Barenco, C. H. Bennett, R. Cleve, D. P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. Smolin, and H. Weinfurter, “Elementary gates for quantum computation,” Physical Review A, November 1995. [46] L. Josipović, R. Ghosal, and P. Ienne, “Dynamically scheduled high-level synthesis,” in Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ser. FPGA ’18.   New York, NY, USA: ACM, 2018, pp. 127–136. [Online]. Available: http://doi.acm.org/10.1145/3174243.3174264 [47] L. Lao, D. M. Manzano, H. van Someren, I. Ashraf, and C. G. Almudever, “Mapping of quantum circuits onto nisq superconducting processors,” arXiv preprint arXiv:1908.04226, 2019. [48] L. Lao, B. van Wee, I. Ashraf, J. van Someren, N. Khammassi, K. Bertels, and C. Almudever, “Mapping of lattice surgery-based quantum circuits on surface code architectures,” Quantum Science and Technology, vol. 4, p. 015005, 2019. [49] X. Fu, L. Riesebos, M. A. Rol, J. van Straten, J. van Someren, N. Khammassi, I. Ashraf, R. F. L. Vermeulen, V. Newsum, K. K. L. Loh, J. C. de Sterke, W. J. Vlothuizen, R. N. Schouten, C. G. 
Almudever, L. DiCarlo, and K. Bertels, “eQASM: An executable quantum instruction set architecture.” [50] E. Magesan, J. M. Gambetta, and J. Emerson, “Scalable and robust randomized benchmarking of quantum processes,” Physical Review Letters, vol. 106, p. 180504, 2011. [51] M. D. Reed, “Entanglement and quantum error correction with superconducting qubits,” Ph.D. dissertation, Yale University, 2013. [52] M. A. Rol, F. Battistel, F. K. Malinowski, C. C. Bultink, B. M. Tarasinski, R. Vollmer, N. Haider, N. Muthusubramanian, A. Bruno, B. M. Terhal, and L. DiCarlo, “A fast, low-leakage, high-fidelity two-qubit gate for a programmable superconducting quantum computer,” pp. 1–18, mar 2019. [Online]. Available: http://arxiv.org/abs/1903.02492 [53] C. C. Bultink, T. E. O’Brien, R. Vollmer, N. Muthusubramanian, M. Beekman, M. A. Rol, X. Fu, B. Tarasinski, V. P. Ostroukh, B. Varbanov, A. Bruno, and L. DiCarlo, “Protecting quantum entanglement from qubit errors and leakage via repetitive parity measurements,” arXiv:1905.12731, 2019. [Online]. Available: https://arxiv.org/abs/1905.12731 [54] R. Sagastizabal, X. Bonet-Monroig, M. Singh, M. A. Rol, C. C. Bultink, X. Fu, C. H. Price, V. P. Ostroukh, N. Muthusubramanian, A. Bruno, M. Beekman, N. Haider, T. E. O’Brien, and L. DiCarlo, “Error Mitigation by Symmetry Verification on a Variational Quantum Eigensolver,” no. 0, pp. 1–13, feb 2019. [Online]. Available: http://arxiv.org/abs/1902.11258 [55] T. E. O’Brien, B. Senjean, R. Sagastizabal, X. Bonet-Monroig, A. Dutkiewicz, F. Buda, L. DiCarlo, and L. Visscher, “Calculating energy derivatives for quantum chemistry on a quantum computer,” pp. 1–20, may 2019. [Online]. Available: http://arxiv.org/abs/1905.03742
Representation theory, Radon transform and the heat equation on a Riemannian symmetric space Gestur Ólafsson and Henrik Schlichtkrull Department of Mathematics, Louisiana State University, Baton Rouge, LA 70803, USA [email protected] Department of Mathematics, University of Copenhagen, Universitetsparken 5, DK-2100 København Ø, Denmark [email protected] Abstract. Let $X=G/K$ be a Riemannian symmetric space of the noncompact type. We give a short exposition of the representation theory related to $X$, and discuss its holomorphic extension to the complex crown, a $G$-invariant subdomain of the complexified symmetric space $X_{\mathbb{C}}=G_{\mathbb{C}}/K_{\mathbb{C}}$. Applications to the heat transform and the Radon transform for $X$ are given. Key words and phrases: Heat Equation, Radon Transform, Riemannian symmetric spaces, Hilbert spaces of holomorphic functions 1991 Mathematics Subject Classification: 33C67, 46E20, 58J35; Secondary 22E30, 43A85 Research of Ólafsson was supported by NSF grant DMS-0402068. Part of this paper was written while the second author was a visitor at the Max-Planck Institute for Mathematics, Bonn, Germany. He expresses gratitude for support and hospitality.

Introduction

In the analysis of Riemannian symmetric spaces one can follow several approaches, with the emphasis for example on differential geometry, partial differential equations, functional analysis, complex analysis, or representation theory. The representation theory associated with a Riemannian symmetric space is of course well known through the work of Harish-Chandra [25, 26], and it is based on important results in the theory of infinite dimensional representations, by Gelfand-Naimark, Mackey, Segal, just to mention a few. The powerful theory of Harish-Chandra has been widely generalized [2, 6, 28], and successfully applied to many problems related to Riemannian symmetric spaces.
The geometric point of view, with emphasis on relations to topics like classical Euclidean analysis and the Radon transform, has been represented by Helgason [29, 31, 32]. Both aspects, as well as the connection to the work of the school around Gelfand and Graev, are described in a short and clear fashion in the second half of Mackey’s famous Chicago Lectures on unitary group representations [47]. In recent years much research has been directed towards the interplay between the real analysis and the geometry of the symmetric space $G/K$ on the one side, and complex methods in analysis, geometry and representation theory on the other side. This development has a long history tracing back to Cartan’s analysis of bounded symmetric domains, and Harish-Chandra’s construction of the holomorphic discrete series [23]. The so-called Gelfand-Gindikin program [13] suggests to consider functions on $G$ through holomorphic extension to domains in the complexification $G_{\mathbb{C}}$ of $G$, and to study representations of $G$ realized on spaces of such functions, analogous to the classical Hardy spaces on tube domains over $\mathbb{R}^{n}$. Only partial results have been obtained so far. The program was carried out for the holomorphic discrete series of groups of Hermitian type in [51, 54], the holomorphic discrete series for compactly causal symmetric spaces in [38], and finally the holomorphic most continuous series for noncompactly causal symmetric spaces in [17]. Connected to this development is the study of natural domains in $G_{\mathbb{C}}$ to which the spherical functions of $G/K$ admit holomorphic extensions, initiated by Akhiezer and Gindikin [1], and continued by several people. Without claiming to be complete we would like to mention [3, 5, 7, 8, 10, 11, 16, 17, 18, 19, 44, 45, 48] representing different aspects of this important development. The most relevant articles related to the present exposition are the papers [15, 45].
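For orientation, the Akhiezer-Gindikin domain admits an explicit description (a standard formula, quoted here as background): with $\mathfrak{a}$ a maximal abelian subspace of $\mathfrak{p}$, $\Sigma$ the set of restricted roots, and $x_{0}=eK_{\mathbb{C}}$ the base point in $X_{\mathbb{C}}$,

```latex
\mathrm{Cr}(X) \;=\; G\exp(i\Omega)\cdot x_{0}\subset X_{\mathbb{C}},
\qquad
\Omega \;=\; \bigl\{H\in\mathfrak{a} : |\alpha(H)|<\tfrac{\pi}{2}\ \text{for all}\ \alpha\in\Sigma\bigr\}.
```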
One important conclusion is that there exists a maximal $G$-invariant domain, called the complex crown, to which all the spherical functions extend. A related area of recent research is the study of the heat equation on the symmetric space. There are several generalizations of the well-known Segal-Bargmann transform, which maps an $L^{2}$-function $f$ on the Euclidean space $\mathbb{R}^{n}$ to the function $H_{t}f$ on $\mathbb{C}^{n}$, which is the holomorphic continuation of the solution at time $t$ to the heat equation with initial Cauchy data $f$. The first work in this direction was by Hall [20], who replaced $\mathbb{R}^{n}$ by a connected compact semisimple Lie group $U$, and $\mathbb{C}^{n}$ by the complexification $U_{\mathbb{C}}$. This was put into a general framework using polarization of a restriction map in [50]. The results of Hall were extended to compact symmetric spaces $U/K$ by Stenzel in [56], where the complexification is $U_{\mathbb{C}}/K_{\mathbb{C}}$. It is important to note that in this compact case, all eigenfunctions of the algebra of invariant differential operators on $U/K$, and also the heat kernel itself, extend to holomorphic functions on $U_{\mathbb{C}}/K_{\mathbb{C}}$. This is related to the fact that each irreducible representation of $U$ extends to a holomorphic representation of $U_{\mathbb{C}}$. For a symmetric space of the noncompact type $G/K$, the maximal $G$-invariant domain is the complex crown and not the full complexification $X_{\mathbb{C}}=G_{\mathbb{C}}/K_{\mathbb{C}}$. It was shown in [46] that the image of the Segal-Bargmann transform on $G/K$ can be identified as a Hilbert space of holomorphic functions on the crown. The norm was defined using orbital integrals and the Faraut version of Gutzmer’s formula [8]. Some special cases have also been considered in [21, 22], but without using the Akhiezer-Gindikin domain explicitly.
In particular, in [22] Hall and Mitchell give a description of the image of the $K$-invariant functions in $L^{2}(G/K)$ in case $G$ is complex. In the present paper we have the following basic aims. Our first aim is to give a short exposition of the basic representation theory related to a Riemannian symmetric space $G/K$, connecting to work of Harish-Chandra and Helgason. Our second aim is to discuss the complex extension in $X_{\mathbb{C}}$, and to introduce a $G$-invariant Hilbert space of holomorphic functions on the crown, which carries all the representation theory of $G/K$. Thirdly, we combine the Fourier theory and the holomorphic theory in a study of the heat equation on $G/K$. The image of the Segal-Bargmann transform is described in a way similar to that of [49], where we considered the case of $K$-invariant functions on $G/K$ (in a more general setting of root systems), and derived a result containing that of Hall and Mitchell as a special case. The image is characterized in terms of a Fock space on $\mathfrak{a}_{\mathbb{C}}$. Finally, we discuss some aspects of the Radon transform on $G/K$. We introduce a $G$-invariant Hilbert space of CR-functions on a subset of the complexified horocycle space, and we use the Radon transform to construct a unitary $G$-invariant isomorphism between the Hilbert spaces on the crown and on the complex horocycle space, respectively. We shall now describe the content of the paper in some more detail. Let $X=G/K$ be a Riemannian symmetric space of the noncompact type. For the first aim, we define a Fourier transform and we state a Plancherel theorem and an inversion formula, following the beautiful formulation of Helgason. The content of the formulation is described by means of representation theory. We also indicate a representation theoretic proof of the reduction to Harish-Chandra’s theorem.
At the end of Section 1 we give a short description of the heat transform and its image in $L^{2}(X)$ as a direct integral over principal series representations. The space $X$ is naturally contained in the complexified symmetric space $X_{\mathbb{C}}=G_{\mathbb{C}}/K_{\mathbb{C}}$ as a totally real submanifold. Inside the complexification $X_{\mathbb{C}}$ is the $G$-invariant domain $\mathrm{Cr}(X)\subset X_{\mathbb{C}}$, the complex crown, which was introduced in [1]. It has the important property, shown in [45], that all joint eigenfunctions for $\mathbb{D}(X)$, the algebra of invariant differential operators on $X$, extend to holomorphic functions on it. A second fundamental fact is the convexity theorem of Gindikin, Krötz, and Otto [15, 43], which is recalled in Theorem 2.1. In Section 2 we define a $G$-invariant Hilbert space $\mathcal{H}_{X}$ of holomorphic functions on $\mathrm{Cr}(X)$, such that restriction to $X$ maps continuously into $L^{2}(X)$, and such that the representation of $G$ on $\mathcal{H}_{X}$ carries all the irreducible representations found in the decomposition of $L^{2}(X)$. The definition, which is essentially representation theoretic, is related to definitions given in [16]. The main results are stated in Theorem 2.3 where a representational description of $\mathcal{H}_{X}$ is given and the reproducing kernel of the space is determined, see also [18]. The final section deals with the holomorphic extension of $H_{t}f$ for $f\in L^{2}(X)$. It was shown in [45] that $H_{t}f$ extends to a holomorphic function on $\mathrm{Cr}(X)$, and a description of the image $H_{t}(L^{2}(X))\subset\mathcal{O}(\mathrm{Cr}(X))$ was given in [46]. In fact, we show in Theorem 3.1 that $H_{t}(L^{2}(X))\subset\mathcal{H}_{X}$, and we give an alternative description of the image by means of the Fourier transform. The Radon transform sets up a relation between functions on $X$ and functions on the space $\Xi=G/MN$ of horocycles on $X$. 
The Radon transform is used in Theorem 3.3 to give yet another description of the image of $H_{t}$. We also give an inversion formula for $H_{t}$. The space $\Xi$ sits inside a complex space, the space $\Xi_{\mathbb{C}}$ of complex horospheres on $X_{\mathbb{C}}$, and in this complex space one can define a $G$-invariant domain $\Xi(\Omega)\supset\Xi$ analogous to the crown. However, $\Xi(\Omega)$ is not a complex manifold but a CR-submanifold of $\Xi_{\mathbb{C}}$. On this domain, we define a space $\mathcal{H}_{\Xi}$ similar to $\mathcal{H}_{X}$, and we show in Theorem 3.5 that the normalized Radon transform can be used to set up a unitary isomorphism $\tilde{\Lambda}:\mathcal{H}_{X}\to\mathcal{H}_{\Xi}$. 1. Representation theory and harmonic analysis In this section we introduce the standard notation that will be used throughout this article. We will also recall some well known facts about the principal series representations $\pi_{\lambda}$ and the Fourier transform on $X$. 1.1. Notation Let $G$ be a connected noncompact semisimple Lie group with Lie algebra $\mathfrak{g}$, let $\theta:G\to G$ be a Cartan involution and set $$K=G^{\theta}=\{x\in G\mid\theta(x)=x\}\,.$$ The space $X=G/K$ is a Riemannian symmetric space of the noncompact type. It does not depend on which one of the locally isomorphic groups with Lie algebra $\mathfrak{g}$ is used, as the center of $G$ is always contained in $K$. We will therefore assume that $G$ is contained in a simply connected Lie group $G_{\mathbb{C}}$ with Lie algebra $\mathfrak{g}_{\mathbb{C}}=\mathfrak{g}\otimes_{\mathbb{R}}\mathbb{C}$. In particular, $G$ has finite center and $K$ is a maximal compact subgroup of $G$. We will denote by the same symbol $\theta$ the holomorphic extension of $\theta$ to $G_{\mathbb{C}}$, as well as the derived Lie algebra homomorphism $\theta:\mathfrak{g}\to\mathfrak{g}$ and its complex linear extension to $\mathfrak{g}_{\mathbb{C}}$.
Denote by $\sigma:\mathfrak{g}_{\mathbb{C}}\to\mathfrak{g}_{\mathbb{C}}$ the conjugation on $\mathfrak{g}_{\mathbb{C}}$ with respect to $\mathfrak{g}$. As $G_{\mathbb{C}}$ is simply connected, $\sigma$ integrates to an involution on $G_{\mathbb{C}}$ with $G=G_{\mathbb{C}}^{\sigma}$. Denote by $K_{\mathbb{C}}\subset G_{\mathbb{C}}$ the complexification of $K$ in $G_{\mathbb{C}}$ and $X_{\mathbb{C}}=G_{\mathbb{C}}/K_{\mathbb{C}}$. Then $X_{\mathbb{C}}$ has a complex structure, with respect to which $X\simeq G\cdot x_{o}\subset X_{\mathbb{C}}$ is a totally real submanifold. It can be realized as the connected component containing $x_{o}=eK_{\mathbb{C}}\in X_{\mathbb{C}}$ of the fixed point set of the conjugation $g\cdot x_{o}\mapsto\sigma(g\cdot x_{o}):=\sigma(g)\cdot x_{o}$. Let $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ be the Cartan decomposition defined by $\theta$, and let $\mathfrak{a}$ be a maximal abelian subspace of $\mathfrak{p}$, $\Delta$ the set of roots of $\mathfrak{a}$ in $\mathfrak{g}$, and $\Delta^{+}$ a fixed set of positive roots. For $\alpha\in\Delta$ let $$\mathfrak{g}^{\alpha}=\{X\in\mathfrak{g}\mid(\forall H\in\mathfrak{a})\,[H,X]=\alpha(H)X\}$$ be the joint $\alpha$-eigenspace. Let $$\mathfrak{n}=\bigoplus_{\alpha\in\Delta^{+}}\mathfrak{g}^{\alpha}\,,$$ then $\mathfrak{n}$ is a nilpotent subalgebra of $\mathfrak{g}$. Let $\mathfrak{m}=\mathfrak{z}_{\mathfrak{k}}(\mathfrak{a})$, then $\mathfrak{m}\oplus\mathfrak{a}\oplus\mathfrak{n}$ is a minimal parabolic subalgebra and $$\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{a}\oplus\mathfrak{n}\,.$$ Let $A=\exp\mathfrak{a}$, $N=\exp\mathfrak{n}$, $M=Z_{K}(A)$ and $P=MAN$. We have the Iwasawa decompositions $$G=KAN\subset K_{\mathbb{C}}A_{\mathbb{C}}N_{\mathbb{C}}\subset G_{\mathbb{C}}$$ where the subscript ${}_{\mathbb{C}}$ on a subgroup of $G$ stands for its complexification in $G_{\mathbb{C}}$.
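For orientation, all of this structure theory can be written out in the basic rank-one example $G=SL(2,\mathbb{R})$ with $\theta(g)=(g^{T})^{-1}$ and $K=SO(2)$ (a standard illustration only; the normalizations below are one common choice):

```latex
\mathfrak{a}=\mathbb{R}H,\quad H=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad
\Delta=\{\pm\alpha\},\quad \alpha(H)=2,\quad m_{\alpha}=1,\quad 2\rho=\alpha,
\]
\[
\mathfrak{n}=\mathfrak{g}^{\alpha}=\mathbb{R}\begin{pmatrix}0&1\\0&0\end{pmatrix},\qquad
A=\left\{\begin{pmatrix}e^{t}&0\\0&e^{-t}\end{pmatrix}\right\},\qquad
N=\left\{\begin{pmatrix}1&x\\0&1\end{pmatrix}\right\},\qquad
M=Z_{K}(A)=\{\pm I\}.
```

Here $X=G/K$ is the hyperbolic plane, $G=KAN$ is the classical Iwasawa decomposition obtained from Gram-Schmidt, and the boundary $B=K/M\simeq SO(2)/\{\pm I\}$ is a circle, the real projective line.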
The map $K\times A\times N\ni(k,a,n)\mapsto kan\in G$ is an analytic diffeomorphism (the analogous statement fails for the complexified Iwasawa decomposition). We shall denote its inverse by $x\mapsto(k(x),a(x),n(x))$. Let $B=K/M$; this is the so-called Furstenberg boundary of $X$. Note that since $B\simeq G/P$, it carries the action of $G$ given by $g\cdot(kM)=k(gk)M$. 1.2. Integration If $C$ is a Lie group or a homogeneous space of a Lie group, then we denote by $dc$ a left invariant measure on $C$. We normalize the invariant measure on compact groups and compact homogeneous spaces such that the total measure is one. We require that the Haar measures on $A$ and $i\mathfrak{a}^{*}$ are normalized such that if (1.1) $$\mathcal{F}_{A}(f)(\lambda)=\int_{A}f(a)a^{-\lambda}\,da$$ is the Fourier transform on the abelian group $A$, then $$f(a)=\int_{i\mathfrak{a}^{*}}\mathcal{F}_{A}(f)(\lambda)a^{\lambda}\,d\lambda\,.$$ Finally, we normalize the Haar measure on $N$ as usual by $\int_{N}a(\theta(n))^{-2\rho}\,dn=1$, where $2\rho=\sum_{\alpha\in\Delta^{+}}m_{\alpha}\alpha$ and $m_{\alpha}=\dim\mathfrak{g}^{\alpha}$. Then we can normalize the Haar measure on $G$ such that for all $f\in C_{c}(G)$ we have (1.2) $$\int_{G}f(x)\,dx=\int_{N}\int_{A}\int_{K}f(ank)\,dkdadn=\int_{K}\int_{A}\int_{N}f(kan)a^{2\rho}\,dndadk\,.$$ In this normalization the invariant measure on $X$ is given by $$\int_{X}f(x)\,dx=\int_{G}f(g\cdot x_{o})\,dg=\int_{N}\int_{A}f(an\cdot x_{o})\,dadn\,.$$ Finally, it follows from (1.2) that the $K$-invariant measure on $B$ transforms under the $G$-action according to (1.3) $$\int_{B}f(g\cdot b)\,db=\int_{B}f(b)a(g^{-1}b)^{-2\rho}\,db$$ for all $f\in L^{1}(B)$ and $g\in G$. 1.3. Spherical principal series and spherical functions Denote by $L$ the left regular representation $L_{a}f(x)=f(a^{-1}x)$ and by $R$ the right regular representation $R_{a}f(x)=f(xa)$.
If $C/D$ is a homogeneous space, then we identify functions on $C/D$ with right $D$-invariant functions on $C$. The spherical principal series of representations are defined as follows. For $\lambda\in\mathfrak{a}_{\mathbb{C}}^{*}$ denote by $H_{\lambda}$ the Hilbert space of measurable functions $f:G\to\mathbb{C}$ such that for all $man\in P$ (with the obvious notation) (1.4) $$R_{man}f=a^{-\lambda-\rho}f\quad\mathrm{and}\quad\|f\|^{2}=\int_{K}|f(k)|^{2}\,dk<\infty\,.$$ Define the representation $\pi_{\lambda}$ of $G$ on $H_{\lambda}$ by (1.5) $$\pi_{\lambda}(x)f(y)=L_{x}f(y)=a(x^{-1}y)^{-\lambda-\rho}f(k(x^{-1}y))\,.$$ A different picture (called the compact picture) of the representations $\pi_{\lambda}$ is obtained by noting that the Iwasawa decomposition implies that the restriction map $f\mapsto f|_{K}$ is a unitary $K$-isomorphism $H_{\lambda}\to L^{2}(B)$. Thus we may view $\pi_{\lambda}$ as a representation on the latter space. By (1.5) the representation is then given by (1.6) $$\pi_{\lambda}(x)f(b)=a(x^{-1}k)^{-\lambda-\rho}f(x^{-1}\cdot b)\,,\qquad b=kM\,.$$ The advantage of the compact picture is that the Hilbert space is independent of $\lambda$. The representations $\pi_{\lambda}$ are known to be unitary when $\lambda$ is purely imaginary on $\mathfrak{a}$. Furthermore, it is known that $\pi_{\lambda}$ is irreducible for almost all $\lambda\in\mathfrak{a}_{\mathbb{C}}^{*}$, and that $\pi_{w\lambda}$ and $\pi_{\lambda}$ are equivalent representations, also for almost all $\lambda\in\mathfrak{a}_{\mathbb{C}}^{*}$, see [4, 42]. Here $W$ denotes the Weyl group of the root system $\Delta$. The representations $\pi_{\lambda}$ all restrict to the same representation of $K$, the left regular representation on $L^{2}(B)$. In particular, the trivial representation of $K$ has multiplicity one and is realized on the space of constant functions on $B$.
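In the example $G=SL(2,\mathbb{R})$, identifying $\mathfrak{a}^{*}_{\mathbb{C}}$ with $\mathbb{C}$ so that $\rho\mapsto 1$, and removing one point from the circle $B=K/M$ so as to identify it with $\mathbb{R}$, the spherical principal series takes the familiar "line picture" form (a standard realization, stated here only for orientation; the precise cocycle depends on the chosen identifications):

```latex
\pi_{i\nu}(g)f(x)=|cx+d|^{-1-i\nu}\,f\!\left(\frac{ax+b}{cx+d}\right),
\qquad g^{-1}=\begin{pmatrix}a&b\\c&d\end{pmatrix},\quad \nu\in\mathbb{R},
```

which is unitary on $L^{2}(\mathbb{R},dx)$, as one checks by the change of variables $y=(ax+b)/(cx+d)$, $dy=(cx+d)^{-2}dx$. In this model the $K$-fixed vector $p_{i\nu}$ becomes, up to normalization, the function $x\mapsto(1+x^{2})^{-(1+i\nu)/2}$.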
We fix $p_{\lambda}\in H_{\lambda}^{K}$ as the constant function 1 on $B$, that is, on $G$ it is $$p_{\lambda}(g)=a(g)^{-\lambda-\rho}.$$ For $(\lambda,b)\in\mathfrak{a}_{\mathbb{C}}^{*}\times B$ we define the function $e_{\lambda,b}$ on $X$ by (1.7) $$e_{\lambda,kM}(gK)=p_{\lambda}(g^{-1}k)\,,\quad k\in K,g\in G\,.$$ Then $(x,b)\mapsto e_{\lambda,b}(x)$ is the generalized Poisson kernel on $X\times B$. The spherical functions on $X$ are the $K$-biinvariant matrix coefficients of $\pi_{\lambda}$ defined by (1.8) $$\varphi_{\lambda}(x)=(\pi_{\lambda}(g)p_{\lambda},p_{\lambda})=\int_{B}e_{\lambda,b}(x)\,db$$ where $x=gK$ and $\lambda\in\mathfrak{a}^{*}_{\mathbb{C}}$. The latter integral is exactly Harish-Chandra’s formula for the spherical functions. 1.4. The standard intertwining operators As mentioned, the representation $\pi_{w\lambda}$ is known to be equivalent to $\pi_{\lambda}$ for almost all $\lambda\in i\mathfrak{a}^{*}$. Hence for such $\lambda$ there exists a unitary intertwining operator $$\mathcal{A}(w,\lambda)\colon H_{\lambda}\to H_{w\lambda}.$$ The operator is unique, up to scalar multiples, by Schur’s lemma. The trivial $K$-type has multiplicity one and is generated by the function $p_{\lambda}\in H_{\lambda}$, which has norm 1 in $L^{2}(B)$. It follows that $\mathcal{A}(w,\lambda)p_{\lambda}$ is a unitary multiple of $p_{w\lambda}$. We normalize the intertwining operator so that $$\mathcal{A}(w,\lambda)p_{\lambda}=p_{w\lambda}.$$ The intertwining operator so defined is called the normalized standard intertwining operator. It is known that the map $\lambda\mapsto\mathcal{A}(w,\lambda)$ extends to a rational map (which we denote by the same symbol) from $\mathfrak{a}^{*}_{\mathbb{C}}$ into the bounded operators on $L^{2}(B)$. In fact, one can give a formula for the operator $\mathcal{A}(w,\lambda)$ as follows.
The (unnormalized) standard intertwining operator $A(w,\lambda)\colon H_{\lambda}\to H_{w\lambda}$ is defined by the formula $$A(w,\lambda)f(g)=\int_{\bar{N}_{w}}f(gw\bar{n})\,d\bar{n}$$ where $\bar{N}_{w}=\theta(N)\cap w^{-1}Nw$, see [41]. The integral converges when $f$ is continuous and $\lambda\in\mathfrak{a}^{*}_{\mathbb{C}}$ satisfies ${\rm Re}\langle\lambda,\alpha\rangle>0$ for all $\alpha\in\Delta^{+}$. It is defined by meromorphic continuation for other values of $\lambda$, and by continuous extension for $f\in L^{2}(B)$ (see [40], Ch. 7). From the definition of $A(w,\lambda)$ we see that $$A(w,\lambda)p_{\lambda}=c_{w}(\lambda)p_{w\lambda}$$ where $c_{w}(\lambda)=\int_{\bar{N}_{w}}p_{\lambda}(\bar{n})\,d\bar{n}$ (see [36] page 446). Hence $$\mathcal{A}(w,\lambda)=c_{w}(\lambda)^{-1}A(w,\lambda).$$ 1.5. The Fourier transform In this section we introduce the Fourier transform on $X$, following Helgason [31, 32]. While Helgason introduced it from a more geometric point of view, we shall show here that it can also be done from a representation theory point of view, resulting in alternative proofs of the inversion formula and the Plancherel theorem. From the point of view of representation theory, the Fourier transform of an integrable function on $G$ is the operator $\pi(f)$ on $\mathcal{H}$ defined by $$\pi(f)v=\int_{G}f(g)\pi(g)v\,dg,\quad v\in\mathcal{H},$$ for each unitary irreducible representation $(\pi,\mathcal{H})$. If $f$ is a function on $X$, this operator will be trivial on the orthocomplement of the space of $K$-fixed vectors. In an irreducible unitary representation this space is at most one dimensional, and hence it becomes natural to define the Fourier transform of $f$ as the vector $\pi(f)v$ in the representation space of $\pi$, where $v$ is a specified $K$-fixed vector.
For the spherical principal series, we thus arrive at the following definition of the Fourier transform: $$\hat{f}_{\lambda}:=\pi_{-\lambda}(f)p_{-\lambda}\in H_{-\lambda}$$ for $\lambda\in i\mathfrak{a}^{*}$ (the reason for the minus is just historical). In the notation of the compact picture, it is (1.9) $$\hat{f}_{\lambda}(b)=(\pi_{-\lambda}(f)p_{-\lambda})(b)=\int_{G}f(x)p_{-\lambda}(x^{-1}b)\,dx=\int_{G}f(x)\overline{e_{\lambda,b}(x)}\,dx\,,$$ where $b\in B$. Thus the Fourier transform of $f$ may be viewed as a map $$\mathfrak{a}_{\mathbb{C}}^{*}\times B\ni(\lambda,b)\mapsto\hat{f}(\lambda,b):=\hat{f}_{\lambda}(b)\in\mathbb{C}\,.$$ Apart from the replacement of $\lambda$ by $i\lambda$, this is the Fourier transform as it was introduced by Helgason in [31]. Let $$\mathfrak{a}^{+}:=\{H\in\mathfrak{a}\mid(\forall\alpha\in\Delta^{+})\,\alpha(H)>0\}$$ be the positive Weyl chamber corresponding to $\Delta^{+}$, and let $\mathfrak{a}_{+}^{*}$ denote the corresponding open chamber in $\mathfrak{a}^{*}$. Let $c(\lambda)$ be the Harish-Chandra $c$-function, which for $\mathrm{Re}\lambda\in\mathfrak{a}^{*}_{+}$ is given by (see [36] p. 447) $$c(\lambda)=c_{w^{*}}(\lambda)=\int_{\bar{N}}p_{\lambda}(\bar{n})d\bar{n},$$ where $w^{*}\in W$ is the long element and $\bar{N}=\theta(N)$. We recall that an explicit formula for $c(\lambda)$ was determined by Gindikin and Karpelevich, see [14] or [36], p. 447. Furthermore, we define a measure $\mu$ on $i\mathfrak{a}^{*}_{+}\times B$ by (1.10) $$d\mu(\lambda,b)=|c(\lambda)|^{-2}\,d\lambda\,db\,.$$ We will also denote by $d\mu$ the measure $|c(\lambda)|^{-2}d\lambda$ on $i\mathfrak{a}^{*}$. Let $L^{2}_{W}(i\mathfrak{a}^{*}\times B,\frac{d\mu}{|W|})$ be the space of all $F\in L^{2}(i\mathfrak{a}^{*}\times B,\frac{d\mu}{|W|})$ such that for all $w\in W$ we have (1.11) $$F(w\lambda,\cdot)=\mathcal{A}(w,-\lambda)F(\lambda,\cdot)$$ in $L^{2}(B)$, for almost all $\lambda\in i\mathfrak{a}^{*}$.
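It may help to keep the flat model in mind (an analogy only, in the spirit of Helgason's treatment of $\mathbb{R}^{n}$): there the role of $e_{\lambda,b}$ is played by the plane waves $x\mapsto e^{i\nu\langle x,\omega\rangle}$, $B$ is replaced by the unit sphere $S^{n-1}$, and writing the classical Fourier inversion formula in polar coordinates gives

```latex
\hat f(\nu,\omega)=\int_{\mathbb{R}^{n}}f(x)\,e^{-i\nu\langle x,\omega\rangle}\,dx,
\qquad
f(x)=\frac{1}{2\,(2\pi)^{n}}\int_{\mathbb{R}}\int_{S^{n-1}}
\hat f(\nu,\omega)\,e^{i\nu\langle x,\omega\rangle}\,|\nu|^{n-1}\,d\omega\,d\nu .
```

Here $|\nu|^{n-1}$ plays the role of $|c(\lambda)|^{-2}$ (up to normalization of the sphere measure), the factor $\tfrac12$ is $1/|W|$ for the "Weyl group" $\{\pm1\}$ acting by $\nu\mapsto-\nu$, and the constraint $\hat f(-\nu,\omega)=\hat f(\nu,-\omega)$, which holds since $(-\nu)\omega=\nu(-\omega)$, is the analogue of the symmetry condition (1.11), with the antipodal map on $S^{n-1}$ playing the role of the intertwining operator.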
Notice that, since $\mathcal{A}(w,-\lambda)$ is an intertwining operator for each $\lambda\in i\mathfrak{a}^{*}$, this is an invariant subspace for the unitary action of $G$ on $L^{2}(i\mathfrak{a}^{*}\times B,\frac{d\mu}{|W|})$, defined by $$(g\cdot F)(\lambda,\cdot)=\pi_{-\lambda}(g)F(\lambda,\cdot).$$ We recall the following theorem of [32, 33]: Theorem 1.1. The Fourier transform is an intertwining unitary isomorphism (1.12) $$L^{2}(X)\simeq L^{2}_{W}(i\mathfrak{a}^{*}\times B,\frac{d\mu}{|W|})\,.$$ Furthermore, if $f\in C_{c}^{\infty}(X)$, then (1.13) $$f(x)=\frac{1}{|W|}\int_{i\mathfrak{a}^{*}}\int_{B}\widehat{f}(\lambda,b)e_{\lambda,b}(x)\,d\mu(\lambda,b)\,.$$ For $K$-invariant functions on $X$, this is Harish-Chandra’s Plancherel theorem for $X$. The Fourier transform $\hat{f}_{\lambda}$ is then a constant function on $B$, and the constant is $$\hat{f}_{\lambda}(b)=\int_{K}\hat{f}(\lambda,kM)\,dk=\int_{X}f(x)\varphi_{-\lambda}(x)\,dx$$ (see (1.8)), the spherical Fourier transform of $f$. By definition, the normalized intertwining operator maps $p_{-\lambda}$ to $p_{-w\lambda}$, and hence in this case the intertwining relation (1.11) is just $F(w\lambda)=F(\lambda)$. The proof of the spherical Plancherel theorem is given in [26] (see also [24]), but it depends on two conjectures, see p. 611-612. One conjecture was affirmed by the formula for $c(\lambda)$ from [14] mentioned above. The second conjecture was affirmed in [27], see p. 4. A simpler proof has later been given in [52], see also [36] p. 545. As explained in [30] p. 50, Theorem 1.1 is proved by reduction to the spherical case. Because of the modified point of view invoking the representation theory, and since we have stated the relations (1.11) differently, we discuss some aspects of the proof. For more details, we refer to [32, 33, 37]. 1.6.
The intertwining relation For the Fourier transformed function $F=\hat{f}$, the intertwining relation (1.11) is a direct consequence of the definition of $\hat{f}$: (1.14) $$\mathcal{A}(w,-\lambda)\hat{f}_{\lambda}=\mathcal{A}(w,-\lambda)\pi_{-\lambda}(f)p_{-\lambda}=\pi_{-w\lambda}(f)\mathcal{A}(w,-\lambda)p_{-\lambda}=\pi_{-w\lambda}(f)p_{-w\lambda}=\hat{f}_{w\lambda}\,.$$ The equations (1.11) allow the following important reformulation (which is the original formulation of Helgason, see [33], p. 132) (1.15) $$\int_{B}e_{w\lambda,b}(x)F(w\lambda,b)\,db=\int_{B}e_{\lambda,b}(x)F(\lambda,b)\,db\,\qquad(\forall x\in X).$$ It follows that the integral over $i\mathfrak{a}^{*}$ in (1.13) is $W$-invariant, and thus can be written as an integral over the Weyl chamber $\mathfrak{a}^{*}_{+}$. The equivalence of (1.11) and (1.15) is an immediate consequence of the following lemma. Lemma 1.2. Let $f,g\in L^{2}(B)$ and $w\in W$ be given. Then the following holds for every $\lambda\in\mathfrak{a}^{*}_{\mathbb{C}}$ outside a locally finite set of complex hyperplanes. The relation (1.16) $$\int_{B}e_{w\lambda,b}(x)g(b)\,db=\int_{B}e_{\lambda,b}(x)f(b)\,db$$ holds for all $x\in X$ if and only if (1.17) $$g=\mathcal{A}(w,-\lambda)f.$$ Proof. The relation (1.16) can be written in terms of the Poisson transformation for $X$. Recall (see [32, 53]) that the Poisson transform is the operator $\mathcal{P}_{\lambda}\colon H_{-\lambda}\to C^{\infty}(X)$ which is defined by $$\mathcal{P}_{\lambda}f(x)=\int_{B}f(b)e_{\lambda,b}(x)\,db$$ or equivalently (see [53] page 80-81), $$\mathcal{P}_{\lambda}f(gK)=\int_{K}f(gk)\,dk.$$ The latter formula shows that $\mathcal{P}_{\lambda}$ is a $G$-equivariant operator for the left action. Notice that $\mathcal{P}_{\lambda}p_{-\lambda}$ is exactly the spherical function $\varphi_{\lambda}$.
It is now seen that (1.16) holds for all $x$ if and only if (1.18) $$\mathcal{P}_{w\lambda}g=\mathcal{P}_{\lambda}f.$$ On the other hand, the following can be seen to hold for all $\lambda$ for which the normalized standard intertwining operator is non-singular, (1.19) $$\mathcal{P}_{w\lambda}\circ\mathcal{A}(w,-\lambda)=\mathcal{P}_{\lambda}.$$ Indeed, it suffices to verify the identity for almost all $\lambda$, so we can assume that $\pi_{\lambda}$ is irreducible. It follows from the identity $\varphi_{w\lambda}=\varphi_{\lambda}$ that the two operators agree when applied to the element $p_{-\lambda}\in H_{-\lambda}$. Since the operators are $G$-equivariant, they must then agree everywhere. This proves (1.19). The equivalence of (1.18) and (1.17) follows immediately, for all $\lambda$ such that $\mathcal{P}_{w\lambda}$ is injective. The latter is obviously true if $\pi_{-w\lambda}$ is irreducible. In fact it is known that the Poisson transformation is injective for all $\lambda$, except on a singular set of hyperplanes (see [34] and [53] Thm 5.4.3). ∎ 1.7. The inversion formula Let $f\in C^{\infty}_{c}(X)$.
Then, at the origin of $X$, $f(eK)=f^{K}(eK)$ where $f^{K}(x)=\int_{K}f(kx)\,dk.$ By the inversion formula of Harish-Chandra one can determine $f(eK)$ through the expression (1.20) $$f(eK)=\int_{i\mathfrak{a}_{+}^{*}}\mathrm{Tr}(\pi_{-\lambda}(f))\,d\mu(\lambda).$$ Applied to the function $L_{g^{-1}}f$ it gives $$f(gK)=\int_{i\mathfrak{a}_{+}^{*}}\mathrm{Tr}(\pi_{-\lambda}(g^{-1})\pi_{-\lambda}(f))\,d\mu(\lambda).$$ Since $\pi_{-\lambda}(f)$ annihilates the orthocomplement of $H_{-\lambda}^{K}$, $$\mathrm{Tr}(\pi_{-\lambda}(g^{-1})\pi_{-\lambda}(f))=(\pi_{-\lambda}(g^{-1})\pi_{-\lambda}(f)p_{-\lambda},p_{-\lambda})_{L^{2}(B)},$$ and as the representations are unitary this equals $$(\pi_{-\lambda}(f)p_{-\lambda},\pi_{-\lambda}(g)p_{-\lambda})_{L^{2}(B)}=\int_{B}\hat{f}(\lambda,b)e_{\lambda,b}(gK)\,db.$$ It follows that $$f(x)=\int_{i\mathfrak{a}_{+}^{*}\times B}\hat{f}(\lambda,b)e_{\lambda,b}(x)\,d\mu(\lambda,b)\,.$$ 1.8. The $L^{2}$-isomorphism For $f:G\to\mathbb{C}$ let $f^{*}(g)=\overline{f(g^{-1})}$. Then, if $\pi$ is a unitary representation, we have $\pi(f^{*})=\pi(f)^{*}$ and $\pi(f*g)=\pi(f)\pi(g)$ for $f,g\in L^{1}(G)$. Thus, for $\lambda\in i\mathfrak{a}^{*}$ (so $\pi_{\lambda}$ is unitary), we have (1.21) $$\mathrm{Tr}\,(\pi_{-\lambda}(f^{*}*f))=(\pi_{-\lambda}(f)p_{-\lambda},\pi_{-\lambda}(f)p_{-\lambda})=\int_{B}|\hat{f}(\lambda,b)|^{2}\,db.$$ As $\|f\|^{2}=(f^{*}*f)(eK)$ the equations (1.20) and (1.21) imply that $$\|f\|_{L^{2}(X)}^{2}=\int_{i\mathfrak{a}^{*}_{+}\times B}|\hat{f}(\lambda,b)|^{2}\,d\mu(\lambda,b)=\int_{i\mathfrak{a}_{+}^{*}}\|\hat{f}_{\lambda}\|^{2}\,d\mu(\lambda)\,.$$ Thus $L^{2}(X)\ni f\mapsto\hat{f}_{\lambda}\in L^{2}(B)$ sets up a unitary map into $\int_{i\mathfrak{a}_{+}^{*}}^{\oplus}(H_{-\lambda},\pi_{-\lambda})\,d\mu(\lambda)$. By Harish-Chandra’s Plancherel formula this is a unitary isomorphism on the level of $K$-invariant elements. We want to show that this implies it is onto.
Let $\mathcal{K}$ be the orthogonal complement of the image in $\int_{i\mathfrak{a}_{+}^{*}}^{\oplus}H_{-\lambda}d\mu(\lambda)\,.$ Then $\mathcal{K}$ is $G$-invariant and $\mathcal{K}^{K}=\{0\}$. By [47], Thm. 2.15, there exists a measurable subset $\Lambda\subset i\mathfrak{a}_{+}^{*}$ such that, as a representation of $G$, we have $\mathcal{K}\simeq\int_{\Lambda}^{\oplus}H_{-\lambda}\,d\mu(\lambda)$. In particular, $\mathcal{K}^{K}\simeq\int_{\Lambda}^{\oplus}H_{-\lambda}^{K}\,d\mu(\lambda)$ and hence it follows from $\mathcal{K}^{K}=\{0\}$ that $\mu(\Lambda)=0$. Hence $\mathcal{K}=\{0\}$ and (1.22) $$(L^{2}(X),L)\simeq\int_{i\mathfrak{a}_{+}^{*}}^{\oplus}(H_{-\lambda},\pi_{-\lambda})\,d\mu(\lambda)\,.$$ The isomorphism statement in (1.12) follows easily. This completes the discussion of Theorem 1.1. 1.9. The heat equation We illustrate the use of the Fourier theory by applying it to the heat equation on $X$. Denote by $L_{X}$ the Laplace operator on $X$. It is known that $e_{\lambda,b}$ is an eigenfunction for $L_{X}$, for all $\lambda$, $b$, with eigenvalue $-(|\lambda|^{2}+|\rho|^{2})$ for $\lambda\in i\mathfrak{a}^{*}$. The heat equation on $X$ refers to the Cauchy problem (1.23) $$L_{X}u(x,t)=\partial_{t}u(x,t)\qquad\mathrm{and}\qquad u(x,0)=f(x)$$ where $f\in L^{2}(X)$. The problem has a unique solution, which is easily found by using the Fourier transform in Theorem 1.1. It is given by (1.24) $$u(x,t)=\frac{1}{|W|}\int_{i\mathfrak{a}^{*}\times B}e^{-t(|\lambda|^{2}+|\rho|^{2})}\widehat{f}(\lambda,b)e_{\lambda,b}(x)\,d\mu(\lambda,b)\,.$$ We denote by $h_{t}$ the heat kernel on $X$ defined by $\hat{h}_{t}(\lambda,b)=e^{-t(|\lambda|^{2}+|\rho|^{2})}$ for all $\lambda,b$, or equivalently, (1.25) $$h_{t}(x)=\frac{1}{|W|}\int_{i\mathfrak{a}^{*}}e^{-t(|\lambda|^{2}+|\rho|^{2})}\varphi_{\lambda}(x)\,d\mu(\lambda)$$ (see [12]).
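The semigroup property of the heat kernels, $h_{t}*h_{s}=h_{t+s}$, is already visible in the flat model, where $h_{t}(x)=(4\pi t)^{-1/2}e^{-x^{2}/4t}$ solves $\partial_{t}u=\partial_{x}^{2}u$ on $\mathbb{R}$. A minimal numerical check (plain midpoint quadrature; the truncation and step size are ad hoc choices):

```python
import math

def h(t, x):
    """Euclidean heat kernel on R: h_t(x) = (4*pi*t)^(-1/2) * exp(-x^2 / (4*t))."""
    return math.exp(-x * x / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def conv(t, s, x, L=30.0, n=4000):
    """Midpoint-rule approximation of the convolution (h_t * h_s)(x)."""
    dy = 2.0 * L / n
    total = 0.0
    for k in range(n):
        y = -L + (k + 0.5) * dy
        total += h(t, y) * h(s, x - y)
    return total * dy

# Semigroup property h_t * h_s = h_{t+s}, the flat analogue of H_t H_s = H_{s+t}:
for x in (0.0, 0.7, 2.0):
    assert abs(conv(0.3, 0.5, x) - h(0.8, x)) < 1e-6
print("heat kernels compose: h_t * h_s = h_{t+s}")
```

On the Fourier side this is just $\hat{h}_{t}\hat{h}_{s}=\hat{h}_{t+s}$, which is also exactly what lies behind the corresponding property of $H_{t}$ on $X$.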
The convolution product of a function $f$ on $X$ with a $K$-invariant function $h$ on $X$ (both viewed as functions on $G$) is again a function on $X$. Moreover, since $\pi(f*h)=\pi(f)\pi(h)$, it follows that $$(f*h)^{\wedge}(\lambda,b)=\hat{f}(\lambda,b)\hat{h}(\lambda)\,.$$ It follows that we can write, for all $f\in L^{2}(X)$, (1.26) $$u(\cdot,t)=f*h_{t}.$$ We define the heat transform as the map $f\mapsto H_{t}f=f*h_{t}$ that associates the solution at time $t>0$ to the initial function $f$. It follows from the Fourier analysis that the heat transform is injective. Notice that the map $H_{t}$ is a $G$-equivariant bounded operator from the space $L^{2}(X)$ to itself. We equip the image $\mathrm{Im}(H_{t})$, not with the norm of $L^{2}(X)$, but with the norm that makes the heat transform a unitary isomorphism. As a consequence we obtain the following Observation 1.3. Let $d\mu_{t}(\lambda)=e^{2t(|\lambda|^{2}+|\rho|^{2})}\,d\mu(\lambda)$. Then, as a representation of $G$, the image of the heat transform decomposes as (1.27) $$(\mathrm{Im}(H_{t}),L)\simeq\int_{i\mathfrak{a}_{+}^{*}}^{\oplus}(H_{-\lambda},\pi_{-\lambda})\,d\mu_{t}(\lambda)\,.$$ Notice also the semigroup property $H_{t}(H_{s}f)=H_{s+t}f$, which follows from the formula $\hat{h}_{t}(\lambda,b)=e^{-t(|\lambda|^{2}+|\rho|^{2})}$. 2. The complex crown of $X$ and the space $\mathcal{H}_{X}$ In this section we discuss some aspects of the interplay between complex geometry and the harmonic analysis on $X$. We introduce the crown, and we construct a $G$-invariant Hilbert space of holomorphic functions on it. The construction is motivated by the analysis in [17], where a similar construction was carried out on a subdomain $\mathrm{Cr}(X)_{j}\subseteq\mathrm{Cr}(X)$. There the purpose was to obtain a Hardy space realization of a part of the most continuous spectrum of a pseudo-Riemannian symmetric space $G/H_{j}$, which is embedded in the boundary of $\mathrm{Cr}(X)_{j}$.
In the present paper, our purpose is to obtain a holomorphic model that carries all the representations in the Plancherel decomposition of $L^{2}(X)$. The main references for Subsection 2.1 are [15, 45, 17]. For a nice overview article see [7]. 2.1. The complex convexity theorem Let $$\Omega=\{X\in\mathfrak{a}\mid(\forall\alpha\in\Delta)\,|\alpha(X)|<\pi/2\}\,.$$ For $\emptyset\not=\omega\subset\mathfrak{a}$ set $T(\omega)=\mathfrak{a}\oplus i\omega$, $A(\omega)=\exp T(\omega)$ and $X(\omega)=GA(\omega)\cdot x_{o}=G\exp i\omega\cdot x_{o}$. Then $X(\omega)$ is $G$-invariant by construction. The set $\mathrm{Cr}(X):=X(\Omega)$, introduced in [1], is called the complex crown of $X$ or the Akhiezer-Gindikin domain. Note that, by Theorem 11.2 in [35], $\exp:T(\Omega)\to A(\Omega)$ is a diffeomorphism. The crown is an open $G$-invariant complex submanifold of $X_{\mathbb{C}}$ containing $X$ as a totally real submanifold. Furthermore, the $G$-action on $\mathrm{Cr}(X)$ is proper, $\mathrm{Cr}(X)\subset N_{\mathbb{C}}A_{\mathbb{C}}\cdot x_{o}$ and $\mathrm{Cr}(X)$ is Stein. The importance of $\mathrm{Cr}(X)$ for harmonic analysis comes from the fact that it allows a holomorphic extension of every eigenfunction of the algebra of invariant differential operators. The main step in proving this, see [45] Proposition 1.3, is to show that each function $e_{\lambda,b}$ extends to a holomorphic function on $\mathrm{Cr}(X)$ and then to use the affirmative solution to the Helgason conjecture [39]. The important complex convexity theorem of Gindikin-Krötz [15] is the inclusion $\subseteq$ of the following theorem. The equality was recently established by Krötz-Otto in [43]. Theorem 2.1. Let $Y\in\Omega$, then (2.1) $$a(\exp(iY)G)=A\exp(i\,\mathrm{conv}(W\cdot Y))\,.$$ Here $\mathrm{conv}$ stands for convex hull. Note that (2.1) follows from Theorem 5.1 in [43] by $a(\exp iYg)=a(\exp iYk(g))a(g)$. 2.2.
The space $\mathcal{H}_{X}$ We define the following function $\omega:i\mathfrak{a}^{*}\times\mathfrak{a}\to\mathbb{R}^{+}$. (2.2) $$\omega(i\nu,Y):=\frac{1}{|W|}\sum_{w\in W}e^{2\nu(wY)}\in\mathbb{R}^{+}.$$ Furthermore, we define $$\omega(\lambda)=\sup_{Y\in\Omega}\omega(\lambda,Y)\,$$ for $\lambda\in i\mathfrak{a}^{*}$, and we define a measure $\mu_{\omega}$ on $i\mathfrak{a}^{*}\times B$ by $d\mu_{\omega}(\lambda,b)=\omega(\lambda)d\mu(\lambda,b)$. Lemma 2.2. Let $Y\in\Omega$. There exists $T\in\mathfrak{a}^{+}$, and for each $g\in G$ a constant $C>0$, such that (2.3) $$\left|\frac{e_{i\nu,b}(g\exp iY\cdot x_{o})}{\sqrt{\omega(i\nu)}}\right|\leq Ce^{-\nu(T)}$$ for all $\nu\in\mathfrak{a}^{*}_{+}$ and $b\in B$. The element $T$ can be chosen locally uniformly with respect to $Y$, and the constant $C$ can be chosen locally uniformly with respect to $Y$ and $g$. In particular $(\lambda,b)\mapsto e_{\lambda,b}(z)/\omega(\lambda)$ is in $L^{2}(i\mathfrak{a}_{+}^{*}\times B,d\mu_{\omega})$ for all $z\in\mathrm{Cr}(X)$. Proof. Let $b=kM\in B$. Using Theorem 2.1 we can write $$a(\exp(-iY)g^{-1}k)=a\exp iZ$$ where $Z\in-\mathrm{conv}(W\cdot Y)\subset\Omega$ and $a\in A$ are unique and depend continuously on $Y$, $g$ and $b$. In particular, if $Q\subset G\times\Omega$ is compact, then there exists a constant $C=C_{Q}>0$ such that $a^{-\rho}\leq C$ for all $(g,Y)\in Q$ and $b\in B$. Hence, for all $\lambda=i\nu\in i\mathfrak{a}^{*}_{+}$: $$|e_{\lambda,b}(g\exp iY\cdot x_{o})|=|(a\exp(iZ))^{-i\nu-\rho}|\leq Ce^{\nu(Z)}.$$ Let $U\subset\Omega$ be a compact neighborhood of $Y$. Since $-\mathrm{conv}(W\cdot U)$ is compact and contained in $\Omega$, we can find an element $T\in\mathfrak{a}^{+}$ such that $-\mathrm{conv}(W\cdot U)+T\subset\Omega$. Thus by (2.2) $$\frac{1}{|W|}e^{2\nu(Z+T)}\leq\frac{1}{|W|}\sum_{w\in W}e^{2\nu(w(Z+T))}=\omega(i\nu,Z+T)\leq\omega(\lambda)$$ for all $Z\in-\mathrm{conv}(W\cdot U)$. Notice that $T$ was chosen independently of $g$ and $b$.
Hence $$\frac{1}{\omega(\lambda)}\leq|W|e^{-2\nu(Z)}e^{-2\nu(T)}\,.$$ It follows that there exists a constant $C>0$ as claimed such that (2.3) holds. As $|c(i\nu)|^{-2}$ has polynomial growth on $\mathfrak{a}^{*}$ it follows that $|c(i\nu)|^{-2}e^{-\nu(T)}$ is bounded. The last statement follows as $\mathfrak{a}_{+}^{*}\ni\nu\mapsto e^{-\nu(T)}$ is integrable for all $T\in\mathfrak{a}^{+}$, see [9], p. 10. ∎ Denote by $\mathcal{H}_{X}$ the space of holomorphic functions $F:\mathrm{Cr}(X)\to\mathbb{C}$ such that $F|_{X}\in L^{2}(X)$ and $$\|F\|_{\mathcal{H}_{X}}^{2}:=\frac{1}{|W|}\int_{i\mathfrak{a}^{*}\times B}|\widehat{F|_{X}}(\lambda,b)|^{2}\,d\mu_{\omega}(\lambda,b)<\infty\,.$$ Theorem 2.3. The space $\mathcal{H}_{X}$ is a $G$-invariant Hilbert space. The action of $G$ is unitary and $$(\mathcal{H}_{X},L)\simeq\int_{i\mathfrak{a}_{+}^{*}}^{\oplus}(H_{-\lambda},\pi_{-\lambda})\,d\mu_{\omega}(\lambda)\,.$$ Furthermore, the following holds: (1) Let $F\in\mathcal{H}_{X}$ and $f=F|_{X}$. Then $$F(z)=\int\widehat{f}(\lambda,b)e_{\lambda,b}(z)\,d\mu(\lambda,b)$$ for all $z\in\mathrm{Cr}(X)$. (2) For each $\varphi\in L^{2}(i\mathfrak{a}_{+}^{*}\times B,d\mu_{\omega})$ the function defined by $$F(z)=\int\varphi(\lambda,b)e_{\lambda,b}(z)\,d\mu(\lambda,b)$$ belongs to $\mathcal{H}_{X}$ and has $\widehat{F|_{X}}=\varphi$. (3) The point evaluation maps $$\mathcal{H}_{X}\ni F\mapsto F(z)\in\mathbb{C}$$ are continuous for all $z\in\mathrm{Cr}(X)$. (4) The reproducing kernel of $\mathcal{H}_{X}$ is given by $$K(z,w)=\int_{i\mathfrak{a}_{+}^{*}\times B}\frac{e_{\lambda,b}(z)e_{-\lambda,b}(\sigma(w))}{\omega(\lambda)}\,d\mu(\lambda,b)=\int_{i\mathfrak{a}_{+}^{*}}\frac{\varphi_{\lambda}(\sigma(w)^{-1}z)}{\omega(\lambda)}\,d\mu(\lambda)$$ where $\sigma$ is the conjugation introduced in Subsection 1.1. Proof. We first establish the property (2).
For $z\in\mathrm{Cr}(X)$ let $f_{z}(\lambda,b)=e_{\lambda,b}(z)\omega(\lambda)^{-1}$. Assume that $\varphi\in L^{2}(i\mathfrak{a}_{+}^{*}\times B,d\mu_{\omega})$. Then by Cauchy–Schwarz and the last part of Lemma 2.2 we have $$\displaystyle\int_{i\mathfrak{a}^{*}_{+}\times B}|\varphi(\lambda,b)e_{\lambda,b}(z)|\,d\mu(\lambda,b)$$ $$\displaystyle=$$ $$\displaystyle\int_{i\mathfrak{a}^{*}_{+}\times B}|\varphi(\lambda,b)f_{z}(\lambda,b)|\,d\mu_{\omega}(\lambda,b)$$ $$\displaystyle\leq$$ $$\displaystyle\|\varphi\|_{L^{2}(d\mu_{\omega})}\|f_{z}\|_{L^{2}(d\mu_{\omega})}$$ $$\displaystyle<$$ $$\displaystyle\infty\,.$$ Recall that there is a compact neighborhood $U$ of $z$ on which $\|f_{z}\|$ is uniformly bounded. It follows that the function $$\mathrm{Cr}(X)\ni z\mapsto G_{\varphi}(z):=\int\varphi(\lambda,b)e_{\lambda,b}(z)\,d\mu(\lambda,b)\in\mathbb{C}$$ exists and is holomorphic. Furthermore, (2.4) $$|G_{\varphi}(z)|\leq C\|\varphi\|_{L^{2}(d\mu_{\omega})}$$ for all $\varphi\in L^{2}(i\mathfrak{a}_{+}^{*}\times B,d\mu_{\omega})$, with $C$ depending locally uniformly on $z$. Now part (2) follows. Let $F\in\mathcal{H}_{X}$ and $f=F|_{X}$. Then it follows that $$G_{\hat{f}}(z)=\int\widehat{f}(\lambda,b)e_{\lambda,b}(z)\,d\mu(\lambda,b)$$ is holomorphic and satisfies $G_{\hat{f}}|_{X}=F|_{X}$. Hence $G_{\hat{f}}=F$. This proves part (1), and part (3) now follows from (2.4). It follows from part (3) that there exists a reproducing kernel $K(z,w)=K_{w}(z)$ for $\mathcal{H}_{X}$. Let $F\in\mathcal{H}_{X}$. Then on the one hand $$F(w)=(F,K_{w})_{\mathcal{H}_{X}}=\int\widehat{F|_{X}}(\lambda,b)\overline{\widehat{K_{w}|_{X}}(\lambda,b)}\omega(\lambda)\,d\mu(\lambda,b),$$ and on the other, using part (1), $$F(w)=\int\widehat{F|_{X}}(\lambda,b)e_{\lambda,b}(w)\,d\mu(\lambda,b).$$ It follows from part (2) that $\widehat{F|_{X}}(\lambda,b)$ can be any function in $L^{2}(i\mathfrak{a}_{+}^{*}\times B,d\mu_{\omega})$.
Since $e_{\lambda,b}/\omega$ also belongs to this space by Lemma 2.2, it follows that $$\widehat{K_{w}|_{X}}(\lambda,b)=\overline{e_{\lambda,b}(w)}/\omega(\lambda)=e_{-\lambda,b}(\sigma(w))/\omega(\lambda).$$ Thus $$\displaystyle K(z,w)$$ $$\displaystyle=$$ $$\displaystyle(K_{w},K_{z})_{\mathcal{H}_{X}}$$ $$\displaystyle=$$ $$\displaystyle\int\frac{e_{-\lambda,b}(\sigma(w))e_{\lambda,b}(z)}{\omega(\lambda)^{2}}\,\omega(\lambda)d\mu$$ $$\displaystyle=$$ $$\displaystyle\int\frac{e_{\lambda,b}(z)e_{-\lambda,b}(\sigma(w))}{\omega(\lambda)}\,d\mu\,.$$ It suffices to establish the final formula in (4) for $z,w\in X$. Moreover, by $G$-invariance of the kernel we may assume $w=e$. Then the formula follows immediately from (1.8) (see also Theorem 1.1 in [37], p. 224). ∎ Remark 2.4. It is clear that the space $\mathcal{H}_{X}^{K}$ is also a reproducing kernel Hilbert space. The reproducing kernel is obtained by averaging over $K$, $$K_{\mathcal{H}_{X}^{K}}(z,w)=\int_{i\mathfrak{a}_{+}^{*}}\frac{\varphi_{\lambda}(z)\varphi_{-\lambda}(\sigma(w))}{\omega(\lambda)}\,d\mu(\lambda)\,,$$ as stated in [18], Proposition 8.7. 3. The image of the Segal-Bargmann Transform In this section we introduce the Segal-Bargmann transform on $X$ and give two characterizations of its image as a $G$-invariant Hilbert space of holomorphic functions on $\mathrm{Cr}(X)$, both different from the one given in [46]. The first characterization is a natural extension of Observation 1.27. The second characterization uses the normalized Radon transform $\Lambda$ from [32]. A similar result was proved in [49] for the Segal-Bargmann transform related to positive multiplicity functions, but only for the $K$-spherical case. In the last part of the section we show that the normalized Radon transform admits an analytic extension as a unitary isomorphism of $\mathcal{H}_{X}$ onto a function space over a domain in the complexified horocycle space $\Xi_{\mathbb{C}}$.
A similar construction for the Hardy space on a subdomain $\mathrm{Cr}(X)_{j}$ was given in [18]. 3.1. The Segal-Bargmann transform on $X$ The description of the heat transform in Subsection 1.9 implies that the heat kernel $h_{t}$, as well as every function $H_{t}f$, $f\in L^{2}(X)$, extends to a holomorphic function on $\mathrm{Cr}(X)$ (see also [45], Prop. 6.1). In fact, it follows from Theorem 2.3 (2) that these extensions belong to $\mathcal{H}_{X}$. We shall denote the holomorphic extensions by the same symbols. The map $$H_{t}:L^{2}(X)\to\mathcal{H}_{X}\subset\mathcal{O}(\mathrm{Cr}(X))$$ is the Segal-Bargmann transform. Note that $\|H_{t}f\|_{\mathcal{H}_{X}}\leq\|f\|_{L^{2}}$. Our first description of the image of $H_{t}$ in $\mathcal{O}(\mathrm{Cr}(X))$ is given by the following. Theorem 3.1. The image $H_{t}(L^{2}(X))$ of the Segal-Bargmann transform is the space $\mathcal{O}_{t}(\mathrm{Cr}(X))$ of holomorphic functions $F$ on $\mathrm{Cr}(X)$ such that $F|_{X}\in L^{2}(X)$ and $$\|F\|_{t}^{2}:=\int_{i\mathfrak{a}_{+}^{*}\times B}|\widehat{F|_{X}}(\lambda,b)|^{2}\,d\mu_{t}(\lambda,b)<\infty\,.$$ Here $d\mu_{t}$ is the measure defined in Observation 1.3. Furthermore, point evaluation is continuous on $\mathcal{O}_{t}(\mathrm{Cr}(X))$, and the reproducing kernel is given by $$K_{t}(z,w)=h_{2t}(\sigma(w)^{-1}z)\,.$$ Proof. The first statement follows from Observation 1.3 and Theorem 2.3. The reproducing kernel is obtained by a standard argument, using the semigroup property of the convolution with $h_{t}$. Let $F=f*h_{t}$ and $z\in\mathrm{Cr}(X)$. Note that $L_{z}h_{t}$ is well defined for $z\in\mathrm{Cr}(X)$ as $h_{t}$ is $K$-biinvariant.
We have $$\displaystyle F(w)$$ $$\displaystyle=$$ $$\displaystyle f*h_{t}(w)$$ $$\displaystyle=$$ $$\displaystyle(f,L_{\sigma(w)}h_{t})_{L^{2}(X)}$$ $$\displaystyle=$$ $$\displaystyle(H_{t}f,H_{t}(L_{\sigma(w)}h_{t}))_{\mathcal{O}_{t}}\,.$$ Hence point evaluation is continuous and $$K_{w}(z)=H_{t}(L_{\sigma(w)}h_{t})(z)=(L_{\sigma(w)}h_{t})*h_{t}(z)=h_{2t}(\sigma(w)^{-1}z)\,.$$ ∎ 3.2. The Radon transform on $X$ and the Segal-Bargmann transform Let $\Xi=G/MN$ and put $\xi_{o}=eMN\in\Xi$. By the Iwasawa decomposition it follows that (3.1) $$B\times A\simeq\Xi\,,\quad(kM,a)\mapsto ka\cdot\xi_{o}$$ is a diffeomorphism. A subset $\xi\subset X$ is said to be a horocycle if there exist $x\in X$ and $g\in G$ such that $\xi=gNx$. It is well known, see [29, 32], that the map $g\xi_{o}\mapsto gNx_{o}$ is a bijection of $\Xi$ onto the set of horocycles. Using this identification the space of horocycles becomes an analytic manifold with a transitive $G$-action. The Radon transform $\mathcal{R}(f)(g\cdot\xi_{o})=\int_{N}f(gn\cdot x_{o})\,dn$ is a $G$-intertwining operator $C_{c}^{\infty}(X)\to C^{\infty}(\Xi)$. The importance of this observation comes from the fact that the regular representation of $G$ on $L^{2}(\Xi)$ is much easier to decompose than that on $L^{2}(X)$. As induction commutes with direct integrals, induction by stages shows that $(L^{2}(\Xi),L)\simeq\int_{i\mathfrak{a}^{*}}^{\oplus}(H_{\lambda},\pi_{\lambda})\,d\lambda$, see [47], pp. 284 and 287. In fact, let $\chi_{\lambda}(man)=a^{\lambda}$. Denote by $\epsilon$ the trivial representation of $MN$ and by $L_{\Xi}$ the regular representation of $G$ on $L^{2}(\Xi)$.
As $MAN/MN\simeq A$ and $MN$ acts trivially on $L^{2}(MAN/MN)\simeq L^{2}(A)$, it follows that $$\displaystyle L_{\Xi}$$ $$\displaystyle\simeq$$ $$\displaystyle\mathrm{ind}_{MN}^{G}\epsilon$$ $$\displaystyle\simeq$$ $$\displaystyle\mathrm{ind}_{MAN}^{G}\mathrm{ind}_{MN}^{MAN}\epsilon$$ $$\displaystyle\simeq$$ $$\displaystyle\mathrm{ind}_{MAN}^{G}\int_{i\mathfrak{a}^{*}}^{\oplus}\chi_{\lambda}\,d\lambda$$ $$\displaystyle\simeq$$ $$\displaystyle\int_{i\mathfrak{a}^{*}}^{\oplus}\pi_{\lambda}\,d\lambda\,.$$ Equation (1.2) implies that (up to a constant) the $G$-invariant measure on $\Xi$ is given by $$\int_{\Xi}f(\xi)\,d\xi=\int_{B}\int_{A}f(ka\cdot\xi_{o})\,a^{2\rho}\,dadk\,.$$ It follows that (3.2) $$L^{2}(\Xi)\ni f\mapsto[(kM,a)\mapsto a^{\rho}f(ka\cdot\xi_{0})]\in L^{2}(B\times A)$$ is a unitary isomorphism. We also note that by (1.2), (3.3) $$\hat{f}(\lambda,kM)=\int_{AN}f(kan\cdot x_{o})a^{-\lambda+\rho}\,dnda=\int_{A}a^{\rho}\mathcal{R}(f)(kM,a)a^{-\lambda}\,da$$ for $f\in C_{c}(X)$. It is therefore natural to define a $\rho$-twisted Radon transform by (3.4) $$\mathcal{R}_{\rho}(f)(b,a)=a^{\rho}\mathcal{R}(f)(b,a)\,.$$ Notice that this is then an intertwining operator for the regular action of $G$ on functions over $X$ and $\Xi$, respectively, when the action on functions over $\Xi$ is transferred to a $\rho$-twisted action on functions over $B\times A$ through (3.2), that is, (3.5) $$(g\cdot\phi)(b,a):=a(g^{-1}b)^{-\rho}\phi(k(g^{-1}b),a(g^{-1}b)a).$$ Identifying $L^{2}(B\times A)$ with $L^{2}(A,L^{2}(B))$ in a natural way, equation (3.3) now reads (3.6) $$\hat{f}_{\lambda}=\mathcal{F}_{A}(\mathcal{R}_{\rho}(f))(\lambda)\,.$$ Note that if $f$ is $K$-invariant the $\rho$-twisted Radon transform reduces to the Abel transform $f\mapsto F_{f}$ introduced in [26], p. 261, and conjectured there to be injective. The proof of that conjecture was the final step towards the Plancherel formula, see [27], p. 4.
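To make the $K$-invariant specialization just mentioned explicit (this is only an unpacking of (3.4), in the notation above): if $f$ is $K$-invariant, then $\mathcal{R}_{\rho}(f)(b,a)$ does not depend on $b$, and $$\mathcal{R}_{\rho}(f)(b,a)=a^{\rho}\int_{N}f(an\cdot x_{o})\,dn=F_{f}(a)\,,$$ which is precisely the classical Abel transform of [26].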
It follows from (3.6) and Theorem 1.1 that $\mathcal{R}_{\rho}$ is injective also without the assumption of $K$-invariance. Denote by $\Psi_{\mathfrak{a}}$ the multiplication operator $F\mapsto\frac{1}{c(-\cdot)}F$ and define a pseudo-differential operator $\Psi$ on $A$ by $$\Psi=\mathcal{F}_{A}^{-1}\circ\Psi_{\mathfrak{a}}\circ\mathcal{F}_{A}\,.$$ Recall that by the Gindikin-Karpelevich product formula for the $c$-function, $\Psi$ is a differential operator if and only if all multiplicities $m_{\alpha}$ are even. Let $L^{2}_{W}(B\times i\mathfrak{a}^{*},|W|^{-1}db\,d\lambda)\simeq L^{2}_{W}(i\mathfrak{a}^{*},L^{2}(B);|W|^{-1}\,d\lambda)$ be the space of functions in $L^{2}(B\times i\mathfrak{a}^{*},|W|^{-1}db\,d\lambda)$ satisfying the intertwining relations (3.7) $$c(-w\lambda)F(\cdot,w\lambda)=c(-\lambda)\mathcal{A}(w,-\lambda)F(\cdot,\lambda)\,,$$ for all $w\in W$, cf. Theorem 1.1. Note that if $F$ is $K$-invariant, then (3.7) amounts to $c(-w\lambda)F(w\lambda)=c(-\lambda)F(\lambda)$ for all $w\in W$. Let $d\tau(b,a)=|W|^{-1}db\,da$ on $B\times A$. The preimage of $L^{2}_{W}(B\times i\mathfrak{a}^{*},|W|^{-1}db\,d\lambda)$ under the Fourier transform of $A$ is denoted by $L^{2}_{W}(B\times A,d\tau)$. It is often helpful to identify this space with $L^{2}_{W}(A,L^{2}(B);|W|^{-1}\,da)$. Several formulas, like the definition of the Hardy space $\mathcal{H}_{\Xi}$ below, are then analogous to the Euclidean case, cf. [55], Chapter III, Section 2. By Theorem 1.1 and (3.6) we have the following commutative diagram (3.8) $$\xymatrix{L^{2}(X)\ar[d]_{\mathcal{F}}&\ar[r]^{(\mathrm{id}\times\Psi)\circ\mathcal{R}_{\rho}}&&L^{2}_{W}(B\times A,d\tau)\ar[d]^{\mathrm{id}\times\mathcal{F}_{A}}\\ L^{2}_{W}(B\times i\mathfrak{a}^{*},|W|^{-1}d\mu)&\ar[r]_{\mathrm{id}\times\Psi_{\mathfrak{a}}}&&L^{2}_{W}(B\times i\mathfrak{a}^{*},|W|^{-1}db\,d\lambda)}\,$$ The vertical maps and the lower horizontal map are unitary isomorphisms.
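Since $\Psi$ is just a multiplication operator conjugated by the Fourier transform of $A$, its action is easy to simulate numerically. The following toy sketch is a discrete one-dimensional stand-in for the scheme $\Psi=\mathcal{F}_{A}^{-1}\circ\Psi_{\mathfrak{a}}\circ\mathcal{F}_{A}$; the multiplier below is an arbitrary placeholder with polynomial growth (mimicking the polynomial growth of $|c(i\nu)|^{-2}$), not the actual $1/c(-\lambda)$, which would require the Harish-Chandra $c$-function.

```python
import numpy as np

def fourier_multiplier(f, m):
    """Apply F^{-1} o (multiplication by m) o F to a sampled function f.

    f is sampled on a periodic grid; m is given on the dual (frequency)
    grid in the ordering used by np.fft."""
    return np.fft.ifft(m * np.fft.fft(f))

# toy signal on a periodic grid
n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
f = np.cos(3.0 * x)

# placeholder multiplier with polynomial growth in the frequency
freqs = 2.0 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
m = 1.0 + freqs ** 2

g = fourier_multiplier(f, m)
# the operator is inverted by the reciprocal multiplier
f_back = fourier_multiplier(g, 1.0 / m)
assert np.allclose(f_back.real, f)
```

The roundtrip at the end illustrates why the lower horizontal map in (3.8), and hence $\Psi$, is invertible: on the Fourier side it is division by a nonvanishing function.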
It follows that the linear operator (3.9) $$\Lambda:=(\mathrm{id}\times\Psi)\circ\mathcal{R}_{\rho}:L^{2}(X)\to L^{2}_{W}(B\times A,d\tau)$$ is a unitary isomorphism. By Lemma 3.3 in [32], p. 42, we know that $\Lambda$ is an intertwining operator. Here the action of $G$ on $L^{2}(B\times A,d\tau)$ is the $\rho$-twisted action defined through the identification of $B\times A$ with $\Xi$ (see (3.5)). By Theorem 5.3 in [29] it follows that for each invariant differential operator $D$ on $X$ there exists a differential operator $\widetilde{D}$ on $A$ such that $\Lambda(Df)(b,a)=\widetilde{D}_{a}\Lambda(f)(b,a)$. Here the subscript indicates that $\widetilde{D}$ acts on the $a$-variable only. In particular, this applies with $D$ equal to the Laplace operator $L_{X}$. Tracing the commutative diagram (3.8) and using that $\widehat{L_{X}f}(\cdot,\lambda)=-(|\lambda|^{2}+|\rho|^{2})\hat{f}(\cdot,\lambda)$, $\lambda\in i\mathfrak{a}^{*}$, it is easily seen that $\widetilde{L_{X}}=L_{A}-|\rho|^{2}$, i.e., for $f$ sufficiently smooth (3.10) $$\Lambda(L_{X}f)=(L_{A}-|\rho|^{2})\Lambda(f)\,.$$ Let $r=\dim A$. Lemma 3.2. Let $f\in L^{2}(X)$. Then $e^{t|\rho|^{2}}\Lambda(H_{t}f)$ solves the heat equation on $A$ with initial value $\Lambda(f)\in L^{2}(A,L^{2}(B))$. In particular, the map $\mathfrak{a}\ni X\mapsto\Lambda(H_{t}f)(\cdot,\exp X)\in L^{2}(B)$ extends to a holomorphic function on $\mathfrak{a}_{\mathbb{C}}$, again denoted by $\Lambda(H_{t}f)$, such that $$|W|^{-1}(2\pi t)^{-r/2}\int_{B\times\mathfrak{a}_{\mathbb{C}}}|e^{t|\rho|^{2}}\Lambda(H_{t}f)(b,\exp(X+iY))|^{2}\,e^{-|Y|^{2}/2t}\,dbdXdY<\infty\,.$$ Proof. See the proof of Lemma 2.5 in [49].
∎ Let $\mathcal{F}_{W,t}(B\times\mathfrak{a}_{\mathbb{C}})$ denote the space of functions $F$ on $B\times\mathfrak{a}_{\mathbb{C}}$ such that $$\mathfrak{a}_{\mathbb{C}}\ni Z\mapsto F(\cdot,Z)\in L^{2}(B)$$ is holomorphic with $F\circ(\mathrm{id},\log)\in L^{2}_{W}(B\times A)$, and satisfies $$\displaystyle\|F\|^{2}_{t}$$ $$\displaystyle=$$ $$\displaystyle|W|^{-1}(2\pi t)^{-r/2}\int_{\mathfrak{a}_{\mathbb{C}}}\int_{B}|F(b,X+iY)|^{2}\,e^{-|Y|^{2}/2t}\,db\,dXdY$$ $$\displaystyle=$$ $$\displaystyle|W|^{-1}(2\pi t)^{-r/2}\int_{\mathfrak{a}_{\mathbb{C}}}\|F(X+iY)\|^{2}_{L^{2}(B)}\,e^{-|Y|^{2}/2t}\,dXdY<\infty\,.$$ Thus $\mathcal{F}_{W,t}(B\times\mathfrak{a}_{\mathbb{C}})$ is the analogue of an $L^{2}(B)$-valued Fock space, describing the image of the Segal-Bargmann transform on the Euclidean space $\mathfrak{a}$, with the addition of the Weyl group relations derived from (3.7). For $t>0$ define $\Lambda_{t}:\mathcal{O}_{t}(\mathrm{Cr}(X))\to\mathcal{F}_{W,t}(B\times\mathfrak{a}_{\mathbb{C}})$ in the following way. Let $F\in\mathcal{O}_{t}(\mathrm{Cr}(X))$. By Theorem 3.1 there exists a unique $f\in L^{2}(X)$ such that $F|_{X}=H_{t}f$. Let $\Lambda_{t}(F)$ be the holomorphic extension of $e^{t|\rho|^{2}}\Lambda(H_{t}f)$ given by Lemma 3.2. By the same lemma the Weyl group relations are satisfied and $\|\Lambda_{t}(F)\|_{t}<\infty$. Thus $\Lambda_{t}(F)\in\mathcal{F}_{W,t}(B\times\mathfrak{a}_{\mathbb{C}})$. The following theorem gives an alternative description of the Hilbert space $\mathcal{O}_{t}(\mathrm{Cr}(X))$. Theorem 3.3. The map $\Lambda_{t}:\mathcal{O}_{t}(\mathrm{Cr}(X))\to\mathcal{F}_{W,t}(B\times\mathfrak{a}_{\mathbb{C}})$ is a unitary isomorphism. Furthermore, let $F\in\mathcal{O}_{t}(\mathrm{Cr}(X))$. Define $f\in L^{2}(X)$ by applying $\Lambda^{*}$ to the function on $B\times A$ given by $$(b,a)\mapsto(4\pi t)^{-r/2}\lim_{R\to\infty}\int_{|Y|\leq R}\Lambda_{t}(F)(b,\log a+iY)e^{-|Y|^{2}/4t}\,dY\,.$$ Then $H_{t}(f)=F$. Proof.
The proof is a simple adaptation of the standard argument for $\mathbb{R}^{r}$, as described in [21], to the $L^{2}(B)$-valued case. ∎ 3.3. Holomorphic properties of the normalized Radon transform Consider as before $X$ as a subset of $X_{\mathbb{C}}=G_{\mathbb{C}}/K_{\mathbb{C}}$. A complex horocycle in $X_{\mathbb{C}}$ is a set of the form $gN_{\mathbb{C}}x_{o}\subset X_{\mathbb{C}}$ for some $g\in G_{\mathbb{C}}$, see [18]. Let $\xi_{o}^{\mathbb{C}}=N_{\mathbb{C}}\cdot x_{o}$; then $$\Xi_{\mathbb{C}}=\{g\cdot\xi_{o}^{\mathbb{C}}\subset G_{\mathbb{C}}/K_{\mathbb{C}}\mid g\in G_{\mathbb{C}}\}\simeq G_{\mathbb{C}}/M_{\mathbb{C}}N_{\mathbb{C}}$$ is the set of complex horocycles. The map $$\Xi\ni g\cdot\xi_{o}\mapsto g\cdot\xi_{o}^{\mathbb{C}}\in\Xi_{\mathbb{C}},\quad g\in G$$ is well defined and injective. Define $$\Xi(\Omega)=G\exp i\Omega\cdot\xi_{o}=KA(\Omega)\cdot\xi_{o}\subset\Xi_{\mathbb{C}}\,.$$ Then $\Xi(\Omega)\simeq B\times A(\Omega)$ is a $G$-invariant CR-submanifold of $\Xi_{\mathbb{C}}$. Let $\mathcal{H}_{\Xi}$ be the space of functions $F:\Xi(\Omega)\to\mathbb{C}$ such that the map $A(\Omega)\ni z\mapsto F(\cdot,z)=F_{z}\in L^{2}(B)$ is holomorphic, for each fixed $Y\in\Omega$ the function $(b,a)\mapsto F(b,a\exp iY)$ is in $L^{2}_{W}(B\times A,d\tau)$, and $$\displaystyle\|F\|_{\mathcal{H}_{\Xi}}^{2}$$ $$\displaystyle:=$$ $$\displaystyle\sup_{Y\in\Omega}|W|^{-1}\int_{B\times A}|F(b,a\exp iY)|^{2}\,dbda$$ $$\displaystyle=$$ $$\displaystyle|W|^{-1}\sup_{Y\in\Omega}\int_{A}\|F_{a\exp iY}\|^{2}_{L^{2}(B)}\,da<\infty\,.$$ We define an action of $G$ on $\mathcal{H}_{\Xi}$ by holomorphic extension of the $\rho$-twisted action in (3.5), that is, (3.12) $$(g\cdot F)(ka\cdot\xi_{0}^{\mathbb{C}}):=a(g^{-1}k)^{-\rho}F(g^{-1}ka\cdot\xi^{\mathbb{C}}_{0})$$ for $g\in G$, $k\in K$ and $a\in A(\Omega)$. Notice that $(g\cdot F)|_{B\times A}=g\cdot(F|_{B\times A})$. Lemma 3.4. Let $F=H_{t}(f)\in\mathcal{H}_{X}$ where $f\in L^{2}(X)$, $t>0$.
Then $a\mapsto\Lambda(F|_{X})(\cdot,a)$ extends to a holomorphic $L^{2}(B)$-valued function on $A(\Omega)$, also denoted by $\Lambda(F|_{X})$, which belongs to $\mathcal{H}_{\Xi}$ and satisfies $$\|\Lambda(F|_{X})\|_{\mathcal{H}_{\Xi}}=\|F\|_{\mathcal{H}_{X}}\,.$$ Moreover, the map $F\mapsto\Lambda(F|_{X})$ is intertwining for the actions of $G$. Proof. Let $\varphi=F|_{X}$. It follows by Lemma 3.2 that $a\mapsto\Lambda(\varphi)(\cdot,a)$ extends to a holomorphic $L^{2}(B)$-valued function on $A(\Omega)$. In fact $$\Lambda(\varphi)(\cdot,\exp(X+iY))=e^{-t|\rho|^{2}}\Lambda(f)*_{A}h_{t}^{A}(\cdot,\exp(X+iY))$$ where $h_{t}^{A}(\exp X)=(4\pi t)^{-r/2}e^{-|X|^{2}/4t}$ is the heat kernel on $A$ and the convolution is on the abelian group $A$. For $Y\in\Omega$ the function $g_{Y}:(b,a)\mapsto\Lambda(\varphi)(b,a\exp iY)$ is in $L^{2}_{W}(B\times A,d\tau)$. By the explicit formula for $h_{t}^{A}$ there exists a constant $C>0$ such that for $a\in A$ and $Y\in\Omega$ (3.13) $$\|g_{Y}(\cdot,\exp X)\|_{L^{2}(B)}\leq Ce^{-(|X|-1)^{2}/4t}e^{|Y|^{2}/4t}\leq C_{1}e^{-(|X|-1)^{2}/4t}$$ where $$C_{1}=C\sup_{Y\in\overline{\Omega}}e^{|Y|^{2}/4t}\,.$$ Let $g_{b,Y}(a)=g_{Y}(b,a)$.
The estimate (3.13) allows us to change the path of integration to derive $$\mathcal{F}_{A}(g_{b,Y})(\lambda)=\mathcal{F}_{A}(g_{b,0})(\lambda)e^{i\lambda(Y)}\,.$$ Thus the integral over $B\times A$ in the definition of $\|\cdot\|_{\mathcal{H}_{\Xi}}$ is (3.14) $$\displaystyle\int_{B}\!\!\int_{A}|g_{Y}(b,a)|^{2}\,dadb$$ $$\displaystyle=$$ $$\displaystyle\int_{B}\!\!\int_{i\mathfrak{a}^{*}}|\mathcal{F}_{A}(g_{b,Y})(\lambda)|^{2}\,d\lambda db$$ $$\displaystyle=$$ $$\displaystyle\int_{i\mathfrak{a}^{*}}\!\!\int_{B}|\mathcal{F}_{A}(g_{b,0})(\lambda)|^{2}\,e^{2i\lambda(Y)}\,dbd\lambda$$ $$\displaystyle=$$ $$\displaystyle\int_{i\mathfrak{a}^{*}}\!\!\int_{B}|\mathcal{F}_{A}(\Lambda(\varphi))(b,\lambda)|^{2}\,db\,e^{2i\lambda(Y)}\,d\lambda$$ $$\displaystyle=$$ $$\displaystyle\int_{B\times i\mathfrak{a}^{*}}|\widehat{\varphi}(b,\lambda)|^{2}e^{2i\lambda(Y)}\,\frac{dbd\lambda}{|c(\lambda)|^{2}}$$ where the last equality in (3.14) follows from the definition of $\Lambda$ in (3.8). According to (1.14) we have $\mathcal{A}(w,-\lambda)\widehat{\varphi}(\cdot,\lambda)=\widehat{\varphi}(\cdot,w\lambda)$. Hence $\int_{B}|\widehat{\varphi}(b,\lambda)|^{2}\,db$ is $W$-invariant, as the intertwining operator $\mathcal{A}(w,-\lambda)$ is unitary. Summing over the Weyl group and using that $|c(\lambda)|^{-2}$ is $W$-invariant, we obtain $$\int_{B\times i\mathfrak{a}^{*}}|\widehat{\varphi}(b,\lambda)|^{2}e^{2i\lambda(Y)}\,\frac{dbd\lambda}{|c(\lambda)|^{2}}=\int_{B\times i\mathfrak{a}^{*}}|\widehat{\varphi}(b,\lambda)|^{2}\,\omega(\lambda,-Y)\,d\mu(b,\lambda)\,.$$ Divide by $|W|$ and take the supremum over $Y\in\Omega=-\Omega$ to get that the norms are equal. The intertwining property of the map follows from the corresponding properties of $H_{t}$ and of $\Lambda$ on $B\times A$. The latter property was noted after (3.9). ∎ Let $F\in\mathcal{H}_{X}$ and $\varphi=F|_{X}$. Let $t_{n}\to 0$, $t_{n}>0$, and view $\varphi_{n}:=H_{t_{n}}\varphi$ as an element of $\mathcal{H}_{X}$.
Then $$\|\varphi_{n}-F\|^{2}_{\mathcal{H}_{X}}=\int_{B\times i\mathfrak{a}_{+}^{*}}|e^{-t_{n}(|\lambda|^{2}+|\rho|^{2})}-1|^{2}|\hat{\varphi}(b,\lambda)|^{2}\omega(\lambda)\,d\mu(b,\lambda)\,.$$ As $(b,\lambda)\mapsto|\hat{\varphi}(b,\lambda)|^{2}\omega(\lambda)$ is integrable with respect to $d\mu$, it follows by the Lebesgue dominated convergence theorem that $\varphi_{n}\to F$ in $\mathcal{H}_{X}$. By Lemma 3.4 it follows that $\lim_{n}\Lambda(H_{t_{n}}\varphi)$ exists in $\mathcal{H}_{\Xi}$ and is independent of the sequence $t_{n}$. Define $\tilde{\Lambda}:\mathcal{H}_{X}\to\mathcal{H}_{\Xi}$ by $$\tilde{\Lambda}(F)=\lim_{n\to\infty}\Lambda(H_{t_{n}}\varphi)\,.$$ Theorem 3.5. The map $\tilde{\Lambda}:\mathcal{H}_{X}\to\mathcal{H}_{\Xi}$ is a unitary intertwining isomorphism. Proof. We only have to show that $\tilde{\Lambda}$ is surjective. Let $F\in\mathcal{H}_{\Xi}$. Then $F|_{B\times A}\in L^{2}_{W}(B\times A,d\tau)$. Define $f=\Lambda^{*}(F|_{B\times A})$. Then the argument in the proof of Lemma 3.4 shows that $$|W|^{-1}\int_{B\times i\mathfrak{a}^{*}}|\hat{f}(b,\lambda)|^{2}\omega(\lambda)\,d\mu(b,\lambda)=\|F\|^{2}_{\mathcal{H}_{\Xi}}<\infty\,.$$ In particular it follows that $f$ extends to a holomorphic function on $\mathrm{Cr}(X)$, denoted by $\varphi$, and $\varphi\in\mathcal{H}_{X}$. By construction we have $\Lambda(\varphi|_{X})=F|_{B\times A}$. It now follows easily that $\tilde{\Lambda}(\varphi)=F$. ∎ References [1] D.N. Akhiezer and S. Gindikin, On Stein extensions of real symmetric spaces, Math. Ann. 286 (1990), 1–12. [2] E. van den Ban and H. Schlichtkrull, The Plancherel decomposition for a reductive symmetric space. I-II, Invent. Math. 161 (2005), 453–566, 567–628. [3] L. Barchini, Stein extensions of real symmetric spaces and the geometry of the flag manifold, Math. Ann. 326 (2003), 331–346. [4] F. Bruhat, Sur les représentations induites de groupes de Lie, Bull. Soc. Math.
France 84 (1956), 97–205. [5] D. Burns, S. Halverscheid, R. Hind, The geometry of Grauert tubes and complexification of symmetric spaces, Duke Math. J. 118 (2003), 465–491. [6] P. Delorme, Formule de Plancherel pour les espaces symétriques réductifs, Ann. of Math. 147 (1998), 417–452. [7] J. Faraut, Analysis on the crown of a Riemannian symmetric space, in Lie groups and symmetric spaces, 99–110, Amer. Math. Soc. Transl. Ser. 2, 210, Amer. Math. Soc., Providence, RI, 2003. [8] by same author, Formule de Gutzmer pour la complexification d’un espace riemannien symétrique, Harmonic analysis on complex homogeneous domains and Lie groups (Rome, 2001). Atti Accad. Naz. Lincei Cl. Sci. Fis. Mat. Natur. Rend. Lincei (9) Mat. Appl. 13 (2002), no. 3-4, 233–241. [9] J. Faraut and A. Koranyi, Analysis on symmetric cones, Oxford Mathematical Monographs. Oxford Science Publications. The Clarendon Press, Oxford University Press, New York, 1994. [10] G. Fels and A. Huckleberry, Characterization of cycle domains via Kobayashi hyperbolicity, Bull. Soc. Math. France 133 (2005), 121–144. [11] G. Fels, A. Huckleberry, and J.A. Wolf, Cycle spaces of flag domains. A complex geometric viewpoint, Progress in Mathematics, 245. Birkhäuser Boston, Inc., Boston, MA, 2006. [12] R. Gangolli, Asymptotic behavior of spectra of compact quotients of certain symmetric spaces, Acta Math. 121 (1968), 151–192. [13] I.M. Gelfand and S.G. Gindikin, Complex manifolds, the skeletons of which are semisimple real Lie groups, and analytic discrete series of representations, Funct. Anal. and Appl. 11 (1977), 19–27. [14] S. Gindikin and F. I. Karpelevich, Plancherel measure for symmetric Riemannian spaces of non-positive curvature, Dokl. Akad. Nauk. SSSR 145 (1962), 252–255; English translation, Soviet Math. Dokl. 3 (1962), 1962–1965. [15] S. Gindikin and B. Krötz, Invariant Stein domains in Stein symmetric spaces and a nonlinear complex convexity theorem, Int. Math. Res. Not. 18 (2002), 959–971. [16] S.
Gindikin, B. Krötz and G. Ólafsson, Hardy spaces for non-compactly causal symmetric spaces and the most continuous spectrum, Math. Ann. 327 (2003), 25–66. [17] by same author, Holomorphic $H$-spherical distribution vectors in principal series representations, Inventiones Mathematicae 158 (2004), 643–682. [18] by same author, Holomorphic horospherical transform on non-compactly causal spaces, Int. Math. Res. Notes 2006 (2006), article 76857, 1–47. [19] S. Gindikin and T. Matsuki, Stein extensions of Riemannian symmetric spaces and dualities of orbits on flag manifolds, Transform. Groups 8 (2003), no. 4, 333–376. [20] B.C. Hall, The Segal-Bargmann transform for compact Lie groups, J. Funct. Anal. 143 (1997), 103–151. [21] by same author, The range of the heat operator, in The ubiquitous heat kernel, 203–231, Contemp. Math. 398, Amer. Math. Soc., Providence, RI, 2006. [22] B.C. Hall and J.J. Mitchell, The Segal-Bargmann transform for noncompact symmetric spaces of the complex type, J. Funct. Anal. 227 (2005), no. 2, 338–371. [23] Harish-Chandra, Representations of semisimple Lie groups IV, Amer. J. Math. 77 (1955), 743–777. [24] by same author, On the Plancherel formula for the right $K$-invariant functions on a semisimple Lie group, Proc. Nat. Acad. Sci. U.S.A. 40 (1954), 200–204. [25] by same author, Spherical functions on a semisimple Lie group, Proc. Nat. Acad. Sci. U.S.A. 43 (1957), 408–409. [26] by same author, Spherical functions on a semisimple Lie group. I and II, Amer. J. Math. 80 (1958), 241–310 and 553–613. [27] by same author, Discrete series for semisimple Lie groups, II, Acta Math. 116 (1966), 1–111. [28] by same author, Harmonic analysis on real reductive groups III. The Maass-Selberg relations and the Plancherel formula, Annals of Math. 104 (1976), 117–201. [29] S. Helgason, Duality and Radon transform for symmetric spaces, Bull. Amer. Math. Soc. 69 (1963), 782–788. [30] by same author, A duality in integral geometry on symmetric spaces. Proc.
U.S.-Japan Seminar in Differential Geometry (Kyoto, 1965), pp. 37–56, Nippon Hyoronsha, Tokyo. [31] by same author, Lie Groups and Symmetric Spaces, in 1968 Battelle Rencontres. 1967 Lectures in Mathematics and Physics, pp. 1–71, Benjamin, New York. [32] by same author, A Duality for Symmetric Spaces with Applications to Group Representations, Adv. in Math. 5 (1970), 1–154. [33] by same author, Functions on symmetric spaces. In Harmonic analysis on homogeneous spaces (Proc. Sympos. Pure Math., Vol. XXVI, Williams Coll., Williamstown, Mass., 1972), pp. 101–146. Amer. Math. Soc., Providence, R.I., 1973. [34] by same author, A Duality for Symmetric Spaces with Applications to Group Representations, II. Differential equations and eigenspace representations, Adv. in Math. 22 (1976), 187–219. [35] by same author, Differential Geometry, Lie Groups, and Symmetric Spaces, Academic Press, New York, 1978. [36] by same author, Groups and Geometric Analysis, Academic Press, New York, 1984. [37] by same author, Geometric Analysis on Symmetric Spaces, Mathematical Surveys and Monographs 39, AMS, 1994. [38] J. Hilgert, G. Ólafsson, B. Ørsted, Hardy Spaces on Affine Symmetric Spaces, J. reine und angew. Math. 415 (1991), 189–218. [39] M. Kashiwara, A. Kowata, K. Minemura, K. Okamoto, T. Oshima, and M. Tanaka, Eigenfunctions of invariant differential operators on a symmetric space, Ann. of Math. (2) 107 (1978), no. 1, 1–39. [40] A. W. Knapp, Representation theory of semisimple groups. An overview based on examples, Princeton University Press, 1986. [41] A. W. Knapp and E. M. Stein, Intertwining operators for semisimple groups, Ann. of Math. (2) 93 (1971), 489–578. [42] B. Kostant, On the existence and irreducibility of certain series of representations, Bull. Amer. Math. Soc. 75 (1969), 627–642. [43] B. Krötz and M. Otto, A refinement of the complex convexity theorem via symplectic techniques, Proc. Amer. Math. Soc. 134 (2006), no. 2, 549–558. [44] B. Krötz and R.J.
Stanton, Holomorphic extension of representations: (I) automorphic functions, Annals of Mathematics (2) 159 (2004), no. 2, 641–724. [45] by same author, Holomorphic extension of representations: (II) geometry and harmonic analysis, Geom. Funct. Anal. 15 (2005), no. 1, 190–245. [46] B. Krötz, G. Ólafsson, and R. J. Stanton, The image of the heat kernel transform on Riemannian symmetric spaces of the noncompact type, Int. Math. Res. Not. 2005 (2005), no. 22, 1307–1329. [47] G. W. Mackey, The theory of unitary group representations, Chicago Lectures in Mathematics, The University of Chicago Press, Chicago, 1976. [48] T. Matsuki, Equivalence of domains arising from duality of orbits on flag manifolds, Trans. Amer. Math. Soc. 358 (2006), no. 5, 2217–2245. [49] G. Ólafsson and H. Schlichtkrull, The image of the heat transform associated to root systems, Adv. Math. 208 (2007), 422–437. [50] G. Ólafsson and B. Ørsted, Generalization of the Bargmann Transform, in: Proceedings of a “Workshop on Lie Theory and its Applications in Physics” (Clausthal, August 1995), ed. by Dobrev, Döbner, Hilgert, World Scientific, 1996. [51] G.I. Ol’shanskii, Invariant cones in Lie algebras, Lie semigroups, and the holomorphic discrete series, Functional Anal. Appl. 15 (1982), 275–285. [52] J. Rosenberg, A quick proof of Harish-Chandra’s Plancherel theorem for spherical functions on a semisimple Lie group, Proc. Amer. Math. Soc. 63, 143–149. [53] H. Schlichtkrull, Hyperfunctions and Harmonic Analysis on Symmetric Spaces, Progress in Mathematics 49, Birkhäuser, 1984. [54] R.J. Stanton, Analytic extension of the holomorphic discrete series, Amer. J. Math. 108 (1986), 1411–1424. [55] E. M. Stein and G. Weiss, Introduction to Fourier Analysis on Euclidean Spaces, Princeton University Press, New Jersey, 1971. [56] M.B. Stenzel, The Segal-Bargmann transform on a symmetric space of compact type, J. Funct. Anal. 165 (1999), 44–58.
Bisectors as Distance Estimators for Microquasars? C. Foellmi (1) European Southern Observatory, 3107 Alonso de Cordova, Vitacura, Casilla 19, Santiago, Chile; (2) Laboratoire d’Astrophysique, Observatoire de Grenoble, 414 rue de la Piscine, 38400 Saint-Martin d’Hères, France (November 26, 2020) 1 Scientific Context Microquasars are small-scale versions of their extragalactic parents, the quasars, with which they share the same physics (Mirabel 2004). They have a central black-hole (here of stellar mass), an accretion disk and (sometimes persistent) jets. These jets are sometimes relativistic, and can reach apparent superluminal motion. However, the superluminal “property” depends on the distance between the object and the observer. Recently, the distance of the second known superluminal microquasar in our Galaxy, namely GRO J1655-40 (Hjellming & Rupen 1995), has been challenged by Foellmi et al. (2006). At a new distance smaller than 1.7 kpc (instead of 3.2 kpc), the jets are not superluminal anymore. It is thus of central importance to obtain reliable values of the distance in order to achieve a consistent understanding of these relativistic objects. At an inferred distance of 1.0 kpc, GRO J1655-40 competes with the microquasar called 1A 0620-00 to be the closest black-hole to the Sun. 2 New Exploratory Method to Obtain Distance Estimates We propose here a new and exploratory method to obtain an estimate of the distance of microquasars. A distance estimate, say within 10-20%, is already great progress for microquasars, whose distances are sometimes uncertain by a factor of 2. This new method rests on two ingredients. First, it has been shown recently by Gray (2005) that the bluemost point of the spectroscopic bisector of a star is linearly related to its absolute magnitude. It is an empirical relation working (at least) for late-type stars.
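To fix ideas, here is a minimal numerical sketch of the two quantities involved: the cross-correlation of a spectrum with a template, and the bisector of a single-peaked profile (midpoints at chosen depth levels). This is an illustration only, in pixel units; the grid, the toy profile and the depth levels are arbitrary choices of ours, not those of Gray (2005) or of any instrument pipeline, and real data would require continuum normalization and velocity calibration.

```python
import numpy as np

def ccf(spectrum, template):
    """Cross-correlation of two equally sampled spectra (pixel lags)."""
    s = spectrum - spectrum.mean()
    t = template - template.mean()
    return np.correlate(s, t, mode="same")

def bisector(x, profile, levels):
    """Bisector points of a single-peaked profile.

    For each fractional depth level, linearly interpolate the abscissa
    where the profile crosses that level on each side of the peak and
    return the midpoint. Assumes the profile is monotonic on each side
    of the peak and that every level is reached on both sides."""
    i0 = int(np.argmax(profile))
    pts = []
    for lv in levels:
        y = lv * profile[i0]
        left = np.interp(y, profile[:i0 + 1], x[:i0 + 1])
        right = np.interp(y, profile[i0:][::-1], x[i0:][::-1])
        pts.append(0.5 * (left + right))
    return np.array(pts)

# toy example: a symmetric peak has a vertical bisector
x = np.linspace(-5.0, 5.0, 201)
peak = np.exp(-x ** 2)
print(bisector(x, peak, [0.3, 0.6, 0.9]))  # all points close to 0
```

For an asymmetric profile the bisector points drift sideways, and it is the bluemost of them that the empirical relation of Gray (2005) ties to the absolute magnitude.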
Unfortunately, to obtain a good-quality single-line bisector, it is necessary to achieve a signal-to-noise ratio of at least 300 (often even higher) in the considered line, together with a high spectral resolution (say, above 40 000). For faint objects in the optical such as GRO J1655-40 (V $\sim$ 18), a bisector made with UVES (R=45 000) would require more than two continuous weeks of 24-hour observations… Second, it has been shown recently by Dall et al. (2006), using the HARPS instrument installed at the La Silla Observatory (Chile), that the bisector of the cross-correlation function (CCF) can be used in much the same way as single-line bisectors. This is rather natural, since well-chosen atmospheric lines should behave in mostly the same way. The combination of the two ingredients opens the door to the observation of faint targets. As a matter of fact, it is much easier to obtain a good-quality bisector of the CCF, since the information of numerous lines can be combined together. The required signal-to-noise ratio now depends also on the number of lines used to compute the CCF. 3 Known Caveats and Prospects The idea behind this method is to obtain a good-quality spectrum of the secondary star of a microquasar in quiescence, compute its CCF, and measure the position of the bluemost point of the CCF bisector. This point is then compared to the linear relation described in Dall et al. (2006). Of course, there are obvious caveats to this method, which we list here. • The correlation between the (CCF) bisector and the absolute magnitude is not fully understood yet. • The correlation is known to work only for late-type stars, and in addition has been tested on a small sample only. • Microquasars often have Roche-lobe-filling secondary stars that could influence the bisector and the absolute magnitude in an unclear manner.
• The spectrum of a microquasar can be polluted by the black-hole accretion disk. Despite these caveats, we are currently investigating this relation using the newly commissioned near-infrared echelle spectrograph at the VLT: CRIRES. With such an instrument, it is possible to achieve the necessary high resolution and to obtain a good-quality spectrum of microquasars in the near-infrared, where microquasars are much brighter than in the optical (for GRO J1655-40, K$\sim$12.5). Bibliography (1) T. H. Dall, N. C. Santos, T. Arentoft et al.: A&A 454, 341 (2006) (2) C. Foellmi, E. Depagne, T. H. Dall, I. F. Mirabel: A&A, in press (2006), astro-ph/0606269 (3) D. F. Gray: PASP 117, 711 (2005) (4) R. M. Hjellming and M. P. Rupen: Nature 375, 464 (1995) (5) I. F. Mirabel: Microquasar-AGN-GRB Connections. In: 5th INTEGRAL Workshop on the INTEGRAL Universe, ed. by V. Schönfelder, G. Lichti and C. Winkler (ESA SP-552, 2004), pp. 175–183
Some computability-theoretic reductions between principles around $\mathsf{ATR}_{0}$ Jun Le Goh Department of Mathematics, Cornell University, 310 Malott Hall, Ithaca NY, USA 14853 [email protected] (Date: January 15, 2021) Abstract. We study the computational content of various theorems with reverse mathematical strength around Arithmetical Transfinite Recursion ($\mathsf{ATR}_{0}$) from the point of view of computability-theoretic reducibilities, in particular Weihrauch reducibility. Our first main result states that it is just as hard to construct an embedding between two given well-orderings as it is to construct a Turing jump hierarchy on a given well-ordering. This answers a question of Marcone. We obtain a similar result for Fraïssé’s conjecture restricted to well-orderings. We then turn our attention to König’s duality theorem, which generalizes König’s theorem about matchings and covers to infinite bipartite graphs. Our second main result shows that the problem of constructing a König cover of a given bipartite graph is roughly as hard as the following “two-sided” version of the aforementioned jump hierarchy problem: given a linear ordering $L$, construct either a jump hierarchy on $L$ (which may be a pseudohierarchy), or an infinite $L$-descending sequence. We also obtain several results relating the above problems with choice on Baire space (choosing a path on a given ill-founded tree) and unique choice on Baire space (given a tree with a unique path, produce said path). This work was partially supported by National Science Foundation grants DMS-1161175 and DMS-1600635. We thank Richard A. Shore for many useful discussions and suggestions. We also thank Paul-Elliot Angles d’Auriac, Takayuki Kihara, Alberto Marcone, Arno Pauly, and Manlio Valenti for their comments and interest. 1. Introduction Given any two well-orderings, there must be an embedding from one of the well-orderings into the other. How easy or difficult is it to produce such an embedding?
Is this problem more difficult if we are required to produce an embedding whose range forms an initial segment? Before attempting to answer such questions we ought to discuss how we could formalize them. One approach is to use well-established notions of (relative) complexity of sets. It is “easy” to produce an embedding between two given well-orderings, if there is an embedding which is “simple” relative to the given well-orderings. Depending on context, “simple” could mean computable, polynomial-time computable, etc. On the other hand, one could say it is “difficult” to produce an embedding between two given well-orderings, if any embedding between them has to be “complicated” relative to the given well-orderings. Then we may define a notion of complexity on problems as follows: a problem is “easy” if every instance of the problem is “easy” in the above sense; a problem is “difficult” if there is an instance of the problem which is “difficult” in the above sense. How, then, could we compare the relative complexity of such problems? Following the above approach, it is natural to do so by comparing problems against a common yardstick, which is defined using notions of complexity of sets. Computability theory provides several such notions. One example is the number of Turing jumps needed to compute a set, or more generally, its position in the arithmetic hierarchy or the hyperarithmetic hierarchy. Another example is the lowness hierarchy. This is useful for getting a rough idea of the complexity of a problem, but turns out to be unsuitable for finer calibrations. One reason is that our yardsticks may only be loosely comparable to each other (as is the case for the arithmetic and lowness hierarchies). When comparing two problems, one of them could be simpler from one point of view, but more difficult from another. 
Second, even if two problems are equally simple relative to the same yardstick (say, if $X$-computable instances of both problems have $X^{\prime}$-computable solutions), how do we know if they are related in any sense? Put another way, are they simple for the same “reason”? The above considerations suggest a complementary approach: instead of measuring the complexity of problems by measuring the complexity of their solutions, we could focus on the relationships between problems themselves. A common type of “relationship” which represents relative complexity is a reduction. Roughly speaking, a problem $P$ is reducible to a problem $Q$ if given an oracle for solving $Q$, we could transform it into an oracle for solving $P$. In order for this notion to be meaningful, such a transformation process has to be simple relative to the difficulty of solving $Q$. In this paper, we will focus on uniformly computable reductions, also known as Weihrauch reductions (Definition 2.2). Many theorems can be viewed as problems, and for such theorems, a proof of theorem $A$ from theorem $B$ can often be viewed as a reduction from the problem corresponding to theorem $A$ to the problem corresponding to theorem $B$. Therefore, our endeavor of studying reductions between problems is closely related to the program of reverse mathematics, which is concerned with whether a theorem is provable from other theorems (over a weak base theory). If a proof of theorem $A$ using theorem $B$ does not obviously translate to a reduction from problem $A$ to problem $B$, there are two possible outcomes. Sometimes, we might be able to massage the proof into one that does translate into a reduction. We might also find a different proof of $A$ using $B$ that can be translated into a reduction. Otherwise, we might be able to show that there is no reduction from $A$ to $B$. In that case, this suggests that any proof of $A$ using $B$ has to be somewhat complicated. 
Certain questions about the structure of proofs have natural analogs in terms of computable reducibilities. For example, one may appeal to a premise multiple times in the course of a proof. Such appeals may be done in “parallel” or in “series”. One may wonder whether multiple appeals are necessary, or whether appeals in series could be made in parallel instead. These questions can be formalized in the framework of computable reducibilities, for there are ways of combining problems which correspond to applying them in parallel or in series (Definitions 2.3, 2.5). Finally, the framework of computable reducibilities uncovers and makes explicit various computational connections between problems from computable analysis and theorems that have been studied in reverse mathematics. We will see how the problem of choosing any path on an ill-founded tree and the problem of choosing the path on a tree with a unique path (known as $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ and $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ respectively, see Definition 2.6) are related to theorems which do not obviously have anything to do with trees. In this paper, we use the framework of computable reducibilities to provide a fine analysis of the computational content of various theorems, such as Fraïssé’s conjecture for well-orderings, weak comparability of well-orderings, and König’s duality theorem for countable bipartite graphs. In reverse mathematics, all of these theorems are known to be equivalent to the system of Arithmetical Transfinite Recursion ($\mathsf{ATR}_{0}$). Our analysis exposes finer distinctions between these theorems. We describe our main results as follows. In the first half of this paper, we define a problem $\mathsf{ATR}$ which is analogous to $\mathsf{ATR}_{0}$ in reverse mathematics (Definition 3.2). Then we use $\mathsf{ATR}$ to calibrate the computational content of various theorems about embeddings between well-orderings. 
In particular, we show that: The problem of computing an embedding between two given well-orderings is as hard as $\mathsf{ATR}$ (Theorem 6.3). This answers a question of Marcone [15, Question 5.8]. This also implies that it is no harder to produce an embedding whose range forms an initial segment, than it is to produce an arbitrary embedding. Note that in this case the situation is the same from the point of view of either Weihrauch reducibility or reverse mathematics. In the second half of this paper, we define several “two-sided” problems, which are natural extensions of their “one-sided” versions. This allows us to calibrate the computational content of König’s duality theorem for countable bipartite graphs (see section 9). In particular, we define a two-sided version of $\mathsf{ATR}$, denoted $\mathsf{ATR}_{2}$ (Definition 8.2), and show that: The problem of computing a König cover of a given bipartite graph is (roughly) as hard as $\mathsf{ATR}_{2}$ (Theorems 9.25 and 9.27). $\mathsf{ATR}_{2}$ is much harder than $\mathsf{ATR}$ in terms of computational difficulty (Corollary 8.8), so this example exhibits a marked difference between computable reducibilities and reverse mathematics. The two-sided problems we study and König’s duality theorem also provide examples of problems which lie strictly between $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ and $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ in the Weihrauch degrees. Other examples exhibiting similar phenomena were studied by Kihara, Marcone, Pauly [15]. 2. Background 2.1. Computability For background on hyperarithmetic theory, we refer the reader to Sacks [19, I–III]. We will use the following version of “effective transfinite recursion” on linear orderings, which easily follows from the recursion theorem. Theorem 2.1. Let $L$ be an $X$-computable linear ordering. 
Suppose $F:\mathbb{N}\to\mathbb{N}$ is total $X$-computable and for all $e\in\mathbb{N}$ and $b\in L$, if $\Phi^{X}_{e}(a)\!\!\downarrow$ for all $a<_{L}b$, then $\Phi^{X}_{F(e)}(b)\!\!\downarrow$. Then there is some $e$ such that $\Phi^{X}_{e}\simeq\Phi^{X}_{F(e)}$. Furthermore: • $\{b:\Phi^{X}_{e}(b)\!\!\uparrow\}$ is either empty or contains an infinite $<_{L}$-descending sequence; • Such an index $e$ can be found uniformly in $X$, an index for $F$, and an index for $L$. In many of our applications, $X$ will be a sequence of sets $\langle X_{a}\rangle_{a}$ indexed by elements of a linear ordering (sometimes $L$, but not always). We will think of $\Phi^{X}_{e}$ as a partial function $f:L\to\mathbb{N}$, and we will think of each $f(b)$ as an index for a computation from some $X_{a}$. 2.2. Representations Let $X$ be a set of countable structures, such as (countable) linear orderings, trees, or graphs. A ($\mathbb{N}^{\mathbb{N}}$-)representation of $X$ allows us to transfer notions of computability from $\mathbb{N}^{\mathbb{N}}$ to $X$. Formally, a representation of $X$ is a surjective (possibly partial) map $\delta:\subseteq\mathbb{N}^{\mathbb{N}}\to X$. (More generally, $X$ can be any set of cardinality at most that of $\mathbb{N}^{\mathbb{N}}$.) The pair $(X,\delta)$ is called a represented space. If $\delta(p)=x$ then we say that $p$ is a ($\delta$-)name for $x$. Every $x\in X$ has at least one $\delta$-name. We say that $x\in X$ is computable if it has some $\delta$-name which is computable. If we have two representations $\delta$ and $\delta^{\prime}$ of a set $X$, we say that $\delta$ is computably reducible to $\delta^{\prime}$ if there is some computable function $F:\subseteq\mathbb{N}^{\mathbb{N}}\to\mathbb{N}^{\mathbb{N}}$ such that for all $p\in\mathrm{dom}(\delta)$, $\delta(p)=\delta^{\prime}(F(p))$. We say $\delta$ and $\delta^{\prime}$ are computably equivalent if they are computably reducible to each other. 
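In finite miniature, a computable reduction between representations is simply a computable translation of names. The Python sketch below is our own illustration, not part of the paper's formal development: names here are finite lists rather than elements of $\mathbb{N}^{\mathbb{N}}$. It shows two representations of a finite linear ordering, one by its order relation and one by an increasing enumeration of its domain, together with a name translation witnessing a reduction from the second to the first.

```python
# Toy illustration (finite stand-in, not the paper's Baire-space setting):
# two representations of a finite linear ordering, and a computable
# translation between names, mirroring reducibility of representations.

def delta_relation(name):
    """Decode a name listing the order relation as pairs (a, b), meaning a <= b."""
    return set(name)

def delta_ranked(name):
    """Decode a name listing the domain in increasing order."""
    rank = {a: i for i, a in enumerate(name)}
    return {(a, b) for a in name for b in name if rank[a] <= rank[b]}

def reduce_ranked_to_relation(name):
    """Translate a delta_ranked-name into a delta_relation-name of the same ordering."""
    rank = {a: i for i, a in enumerate(name)}
    return [(a, b) for a in name for b in name if rank[a] <= rank[b]]

ranked_name = [2, 0, 1]  # the ordering 2 < 0 < 1 on domain {0, 1, 2}
translated = reduce_ranked_to_relation(ranked_name)
assert delta_relation(translated) == delta_ranked(ranked_name)
```

Both decodings name the same ordering, and the translation is uniformly computable from the name alone, which is the finite analogue of computable reducibility of representations.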
Computably equivalent representations of $X$ induce the same notion of computability on $X$. Typically, the spaces $X$ we work with have a standard representation (or encoding), which we will not specify in detail. We will work extensively with the represented spaces of linear orderings and well-orderings, so we describe their representations as follows. If $L$ is a linear ordering or well-ordering whose domain is a subset of $\mathbb{N}$, we represent it as the relation $\{\langle a,b\rangle:a\leq_{L}b\}$. Then the following operations are computable: • checking if a given element is in the domain of the ordering; • adding two given orderings (denoted by $+$); • adding a given sequence of orderings (denoted by $\Sigma$); • multiplying two given orderings (denoted by $\cdot$); • restricting a given ordering to a given subset of its domain. On the other hand, the following operations are not computable: • checking whether a given element is a successor or limit; • finding the successor of a given element (if it exists); • comparing the ordertype of two given well-orderings; • checking if a given real is a name for a well-ordering. In section 9, we will work with rooted subtrees of $\mathbb{N}^{<\mathbb{N}}$, which are subsets $T$ of $\mathbb{N}^{<\mathbb{N}}$ for which there is a unique $r\in T$ (called the root) such that: • no proper prefixes of $r$ lie in $T$; • for every $s\in T$, $s$ extends $r$ and every prefix of $s$ which extends $r$ lies in $T$. A rooted subtree of $\mathbb{N}^{<\mathbb{N}}$ whose root is the empty node $\langle\rangle$ is just a prefix-closed subset of $\mathbb{N}^{<\mathbb{N}}$. 
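The two conditions defining a rooted subtree can be checked mechanically on finite trees. The sketch below is our own finite stand-in (finite sequences are Python tuples, trees are finite sets of tuples), not part of the paper's formal development:

```python
# Finite check of the rooted-subtree conditions above: no proper prefix
# of the root r lies in T, and every s in T extends r with every
# intermediate prefix of s (down to r) also in T.

def is_prefix(s, t):
    return len(s) <= len(t) and t[:len(s)] == s

def is_rooted_subtree(T, r):
    if r not in T:
        return False
    for s in T:
        # no proper prefix of the root lies in T
        if is_prefix(s, r) and s != r:
            return False
        # every element extends r ...
        if not is_prefix(r, s):
            return False
        # ... and every prefix of s extending r lies in T
        for k in range(len(r), len(s)):
            if s[:k] not in T:
                return False
    return True

T = {(1,), (1, 0), (1, 0, 3)}
assert is_rooted_subtree(T, (1,))
assert not is_rooted_subtree({(1,), (1, 0, 3)}, (1,))  # prefix (1, 0) missing
```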
If $r\in\mathbb{N}^{<\mathbb{N}}$ and $R\subseteq\mathbb{N}^{<\mathbb{N}}$, we define $r^{\frown}R=\{r^{\frown}s:s\in R\}$. In particular, if $T\subseteq\mathbb{N}^{<\mathbb{N}}$ is prefix-closed, then $r^{\frown}T$ is a subtree of $\mathbb{N}^{<\mathbb{N}}$ with root $r$. Conversely, if a rooted subtree of $\mathbb{N}^{<\mathbb{N}}$ has root $r$, it is equal to $r^{\frown}T$ for some such $T$. If $T$ is prefix-closed, we sometimes refer to a tree of the form $r^{\frown}T$ as a copy of $T$. (Our usage of “copy” is more restrictive than its usage in computable structure theory.) If $T$ is a rooted subtree of $\mathbb{N}^{<\mathbb{N}}$, for any $t\in T$, the subtree of $T$ above $t$ is the subtree $\{s\in T:t\preceq s\}$ with root $t$.
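On finite stand-ins, the operations just defined are one-liners. The sketch below is our own illustration (sequences as tuples, trees as finite sets of tuples), not the paper's formal machinery:

```python
# Finite sketches of the operations above: the copy r⌢T of a
# prefix-closed tree T, and the subtree of T above a node t.

def concat_tree(r, T):
    """r⌢T: prepend r to every node of the prefix-closed tree T."""
    return {r + s for s in T}

def subtree_above(T, t):
    """The subtree {s in T : t is a prefix of s}, with root t."""
    return {s for s in T if s[:len(t)] == t}

T = {(), (0,), (0, 1), (2,)}          # prefix-closed, root ()
copy = concat_tree((5,), T)            # a copy of T with root (5,)
assert copy == {(5,), (5, 0), (5, 0, 1), (5, 2)}
assert subtree_above(copy, (5, 0)) == {(5, 0), (5, 0, 1)}
```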
For each $r\in\mathbb{N}^{<\mathbb{N}}$, $e\in\mathbb{N}$ and $X\subseteq\mathbb{N}$, $(r,e,X)$ is a name for the following tree $T$ with root node $r$: $r^{\frown}\sigma\in T$ if and only if for all $k<|\sigma|$, $\Phi^{X}_{e,\prod_{i<k}(\sigma(i)+1)}(\sigma\restriction k)\!\!\downarrow=1$. This representation is easily seen to be computably equivalent to what is perhaps the usual representation, where if $\Phi^{X}_{e}$ is total, then $(r,e,X)$ is the name for the tree defined by $\Phi^{X}_{e}$ starting with root $r$. The advantage of our representation is that $(r,e,X)$ names some tree even if $\Phi^{X}_{e}$ is partial, which will be useful when $e$ is produced by the recursion theorem. Using the above representation, we can define a representation for sequences of subtrees of $\mathbb{N}^{<\mathbb{N}}$: view $(e,X)$ as $\langle(\langle n\rangle,e_{n},X)\rangle_{n}$, where $e_{n}$ is an $X$-index for $\Phi^{X}_{e}(n,\cdot)$. Observe that every $(e,X)$ names some such sequence. We will also work with bipartite graphs in section 9. We represent bipartite graphs as their vertex set and edge relation. Alternatively, our representation of a bipartite graph could also include a partition of its vertex set which witnesses that the graph is bipartite. Even though these two representations are not computably equivalent (in fact, there is a computable bipartite graph such that no computable partition of its vertices witnesses that the graph is bipartite; this was known to Bean [3, remarks after Theorem 7], as Jeff Hirst pointed out to us; see also Hirst [13, Corollary 3.17]), all of our results hold for either representation. 2.3.
Weihrauch reducibility and computable reducibility For a self-contained introduction to Weihrauch reducibility, we refer the reader to Brattka, Gherardi, Pauly [6]. In this section, we will only present the definitions that we need in this paper. We begin by identifying problems, such as that of constructing an embedding between two given well-orderings, with (possibly partial) multivalued functions between represented spaces, denoted $P:\subseteq X\rightrightarrows Y$. A theorem of the form $$(\forall x\in X)(\Theta(x)\rightarrow(\exists y\in Y)\Psi(x,y))$$ corresponds to the multivalued function $P:\subseteq X\rightrightarrows Y$ where $P(x)=\{y\in Y:\Psi(x,y)\}$. Note that logically equivalent statements can correspond to different problems. The domain of a problem, denoted $\mathrm{dom}(P)$, is the set of $x\in X$ such that $P(x)$ is nonempty. An element of the domain of $P$ is called a $P$-instance. If $x$ is a $P$-instance, an element of $P(x)$ is called a $P$-solution to $x$. A realizer of a problem $P$ is a (single-valued) function $F:\subseteq\mathbb{N}^{\mathbb{N}}\to\mathbb{N}^{\mathbb{N}}$ which takes any name for a $P$-instance to a name for any of its $P$-solutions. Intuitively, $P$ is reducible to $Q$ if one can transform any realizer for $Q$ into some realizer for $P$. If such a transformation can be done in a uniformly computable way, then $P$ is said to be Weihrauch reducible to $Q$: Definition 2.2. $P$ is Weihrauch reducible (or uniformly reducible) to $Q$, written $P\leq_{W}Q$, if there are computable functions $\Phi,\Psi:\subseteq\mathbb{N}^{\mathbb{N}}\to\mathbb{N}^{\mathbb{N}}$ such that: • given a name $p$ for a $P$-instance, $\Phi(p)$ is a name for a $Q$-instance; • given a name $q$ for a $Q$-solution to the $Q$-instance named by $\Phi(p)$, $\Psi(p\oplus q)$ is a name for a $P$-solution to the $P$-instance named by $p$. 
In this case, we say that $\Phi$ and $\Psi$ are forward and backward functionals, respectively, for a Weihrauch reduction from $P$ to $Q$. We say that $P$ is arithmetically Weihrauch reducible to $Q$, written $P\leq_{W}^{\mathrm{arith}}Q$, if the above holds for some arithmetically defined functions $\Phi$ and $\Psi$, or equivalently, some computable functions $\Phi$ and $\Psi$ which are allowed access to some fixed finite Turing jump of their inputs. For readability, we will typically not mention names in our proofs. For example, we will write “given a $P$-instance” instead of “given a name for a $P$-instance”. It is easy to see that Weihrauch reducibility is reflexive and transitive, and hence defines a degree structure on problems. In fact, there are several other natural operations on problems that define corresponding operations on the Weihrauch degrees. In the following, we define only the operations that we use. First we have the parallel product, which captures the power of applying problems in parallel: Definition 2.3. The parallel product of $P$ and $Q$, written $P\times Q$, is defined as follows: $\mathrm{dom}(P\times Q)=\mathrm{dom}(P)\times\mathrm{dom}(Q)$ and $(P\times Q)(x,y)=P(x)\times Q(y)$. The (infinite) parallelization of $P$, written $\widehat{P}$, is defined as follows: $\mathrm{dom}(\widehat{P})=\mathrm{dom}(P)^{\mathbb{N}}$ and $\widehat{P}((x_{n})_{n})=\{(y_{n})_{n}:y_{n}\in P(x_{n})\}$. It is easy to see that the parallel product and parallelization of problems induce corresponding operations on their Weihrauch degrees. More generally, we can also apply problems in series: Definition 2.4. The composition $\circ$ is defined as follows: for $P:\subseteq X\rightrightarrows Y$ and $Q:\subseteq Y\rightrightarrows Z$, we define $\mathrm{dom}(Q\circ P)=\{x\in X:P(x)\subseteq\mathrm{dom}(Q)\}$ and $(Q\circ P)(x)=\{z\in Z:\exists y\in P(x)(z\in Q(y))\}$. 
The composition of problems, however, does not directly induce a corresponding operation on Weihrauch degrees. It is also too restrictive, in the sense that a $P$-solution is required to be literally a $Q$-instance. Nevertheless, one can use the composition to define an operation on Weihrauch degrees that more accurately captures the power of applying two problems in series: Definition 2.5 (Brattka, Gherardi, Marcone [5]). The compositional product $\ast$ is defined as follows: $$Q\ast P=\sup\{Q_{0}\circ P_{0}:Q_{0}\leq_{W}Q,P_{0}\leq_{W}P\},$$ where the $\sup$ is taken over the Weihrauch degrees. Brattka and Pauly [7] showed that $Q\ast P$ always exists. Next, we define some well-studied problems that are helpful for calibrating the problems we are interested in. Definition 2.6. Define the following problems: $\mathsf{LPO}$: given $p\in\mathbb{N}^{\mathbb{N}}$, output $1$ if there is some $k\in\mathbb{N}$ such that $p(k)=0$, else output $0$; $\mathsf{C}_{\mathbb{N}}$: given some $f:\mathbb{N}\to\mathbb{N}$ which is not surjective, output any $x$ not in the range of $f$; $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$: given an ill-founded subtree of $\mathbb{N}^{<\mathbb{N}}$, output any path on it; $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$: given an ill-founded subtree of $\mathbb{N}^{<\mathbb{N}}$ with a unique path, output said path. For more information about the above problems, we refer the reader to the survey by Brattka, Gherardi, Pauly [6]. Finally, we define a non-uniform coarsening of Weihrauch reducibility known as computable reducibility. Definition 2.7 (Dzhafarov [9]). $P$ is computably reducible to $Q$, written $P\leq_{c}Q$, if given a name $p$ for a $P$-instance, one can compute a name $p^{\prime}$ for a $Q$-instance such that given a name $q$ for a $Q$-solution to the $Q$-instance named by $p^{\prime}$, one can use $p\oplus q$ to compute a name for a $P$-solution to the $P$-instance named by $p$. 
For example, even though $\mathsf{LPO}$ is not Weihrauch reducible to the identity function, it is computably reducible to the identity because a solution to an $\mathsf{LPO}$-instance is either $0$ or $1$. Observe that $\mathsf{LPO}$ is also arithmetically Weihrauch reducible to the identity. The same conclusions hold for $\mathsf{C}_{\mathbb{N}}$. The following easy proposition will help us derive corollaries of our results which involve computable reducibility and arithmetic Weihrauch reducibility: Proposition 2.8. Suppose $R\leq_{W}Q\ast P$. If $Q\leq_{c}\mathrm{id}$, then $R\leq_{c}P$. If $Q\leq_{W}^{\mathrm{arith}}\mathrm{id}$, then $R\leq_{W}^{\mathrm{arith}}P$. 3. An $\mathsf{ATR}$-like problem In this section, we formulate a problem which is analogous to $\mathsf{ATR}_{0}$ in reverse mathematics. Informally, $\mathsf{ATR}_{0}$ in reverse mathematics asserts that one can iterate the Turing jump along any countable well-ordering starting at any set [22, pg. 38]. We make that precise as follows: Definition 3.1. Let $L$ be a linear ordering with first element $0_{L}$, and let $A\subseteq\mathbb{N}$. We say that $\langle X_{a}\rangle_{a\in L}$ is a jump hierarchy on $L$ which starts with $A$ if $X_{0_{L}}=A$ and for all $b>_{L}0_{L}$, $X_{b}=(\bigoplus_{a<_{L}b}X_{a})^{\prime}$. There are several ways to define jump hierarchies. We have chosen the above definition for our convenience. We will show that the Weihrauch degree of the resulting problem is rather robust with regard to which definition we choose. See, for example, Proposition 3.7. Note that by transfinite recursion and transfinite induction, for any well-ordering $L$ and any set $A$, there is a unique jump hierarchy on $L$ which starts with $A$. Definition 3.2. Define the problem $\mathsf{ATR}$ as follows. Instances are pairs $(L,A)$ where $L$ is a well-ordering and $A\subseteq\mathbb{N}$, with unique solution being the jump hierarchy $\langle X_{a}\rangle_{a\in L}$ which starts with $A$.
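Definition 3.1 can be illustrated in finite miniature. The sketch below is our own toy, not the paper's construction: the true Turing jump is not computable, so a simple tagging operator stands in for $'$, and a finite list stands in for the well-ordering $L$.

```python
# Finite toy of Definition 3.1: iterate an operator along a finite
# well-ordering.  `jump` is a computable stand-in for the (uncomputable)
# Turing jump; `join` plays the role of the effective join of the
# columns below b.

def join(sets):
    # code the disjoint union: (index of column, element)
    return frozenset((i, n) for i, s in enumerate(sets) for n in s)

def jump(s):
    # stand-in for the Turing jump: tag every element of s
    return frozenset(("jump", x) for x in s)

def jump_hierarchy(L, A):
    """L lists the well-ordering 0_L <_L ...; returns the hierarchy {b: X_b}."""
    X = {}
    for i, b in enumerate(L):
        if i == 0:
            X[b] = frozenset(A)                       # X_{0_L} = A
        else:
            X[b] = jump(join(X[a] for a in L[:i]))    # X_b = (join of X_a, a <_L b)'
    return X

H = jump_hierarchy(["0_L", "s", "t"], {1, 2})
assert H["0_L"] == frozenset({1, 2})
```

The uniqueness noted after Definition 3.1 is visible here: each column is determined outright by the columns strictly below it, so the loop has no freedom in what it outputs.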
There is a significant difference between the problem $\mathsf{ATR}$ and the system $\mathsf{ATR}_{0}$ in reverse mathematics, as expounded in the remark after Theorem 3.2 in Kihara, Marcone, Pauly [15]. For example, in the setting of reverse mathematics, different models may disagree on which linear orderings are well-orderings. The standard definition of $\mathsf{ATR}_{0}$ in reverse mathematics [22, Definition V.2.4] involves iterating arbitrary arithmetical operators instead of just the Turing jump. We formulate that statement as a problem and show that it is Weihrauch equivalent to $\mathsf{ATR}$. Proposition 3.3. $\mathsf{ATR}$ is Weihrauch equivalent to the following problem. Instances are triples $(L,A,\Theta)$ where $L$ is a well-ordering, $A\subseteq\mathbb{N}$, and $\Theta(n,Y,A)$ is an arithmetical formula whose only free variables are $n$, $Y$ and $A$, with unique solution $\langle Y_{a}\rangle_{a\in L}$ such that for all $b\in L$, $Y_{b}=\{n:\Theta(n,\bigoplus_{a<_{L}b}Y_{a},A)\}$. Proof. $\mathsf{ATR}$ is Weihrauch reducible to the above problem: for the forward reduction, given $(L,A)$, consider $(L,A,\Theta)$ where $\Theta(n,Y,A)$ holds if either $Y=\emptyset$ and $n\in A$, or $n\in Y^{\prime}$. The backward reduction is the identity. Conversely, given $(L,A,\Theta)$, let $k$ be one greater than the number of quantifier alternations in $\Theta$. Apply $\mathsf{ATR}$ to $(1+k\cdot L,L\oplus A)$ to obtain the jump hierarchy $\langle X_{\alpha}\rangle_{\alpha\in 1+k\cdot L}$. For the backward reduction, we will use $\langle X_{(a,k-1)}\rangle_{a\in L}$-effective transfinite recursion along $L$ to define a total $\langle X_{(a,k-1)}\rangle_{a\in L}$-recursive function $f:L\to\mathbb{N}$ such that: • $\Phi^{X_{(b,k-1)}}_{f(b)}$ is total for all $b\in L$; • if we define $Y_{b}=\Phi^{X_{(b,k-1)}}_{f(b)}$ for all $b\in L$, then $Y_{b}=\{n:\Theta(n,\bigoplus_{a<_{L}b}Y_{a},A)\}$. For each $b\in L$, we define $\Phi^{X_{(b,k-1)}}_{f(b)}$ as follows. 
First note that $X_{(b,0)}$ uniformly computes $L\oplus A$ (because of the $1$ in front of $1+k\cdot L$), and hence uniformly computes $A\oplus\bigoplus_{a<_{L}b}X_{(a,k-1)}$. Now $X_{(b,k-1)}$ uniformly computes $X_{(b,0)}^{(k)}$, which uniformly computes $\left(A\oplus\bigoplus_{a<_{L}b}X_{(a,k-1)}\right)^{(k)}$. Since $\Phi^{X_{(a,k-1)}}_{f(a)}$ is total for all $a<_{L}b$, that in turn uniformly computes $\left(A\oplus\bigoplus_{a<_{L}b}Y_{a}\right)^{(k)}$, where $Y_{a}$ is defined to be $\{n:\Phi^{X_{(a,k-1)}}_{f(a)}(n)\!\!\downarrow=1\}$. Finally, $\left(A\oplus\bigoplus_{a<_{L}b}Y_{a}\right)^{(k)}$ uniformly computes $\{n:\Theta(n,\bigoplus_{a<_{L}b}Y_{a},A)\}$, which defines $\Phi^{X_{(b,k-1)}}_{f(b)}$ as desired. By transfinite induction along $L$, $f$ is total. Hence we can compute $Y_{b}=\Phi^{X_{(b,k-1)}}_{f(b)}$ for all $b\in L$, and output $\langle Y_{b}\rangle_{b\in L}$. ∎ When we define reductions from $\mathsf{ATR}$ to other problems by effective transfinite recursion, we will often want to perform different actions at the first step, successor steps, and limit steps. If we want said reductions to be uniform, we want to be able to compute which step we are in. This motivates the following definition: Definition 3.4. A labeled well-ordering is a tuple $\mathcal{L}=(L,0_{L},S,p)$ where $L$ is a well-ordering, $0_{L}$ is the first element of $L$, $S$ is the set of all successor elements in $L$, and $p:S\to L$ is the predecessor function. We show that when defining Weihrauch reductions from $\mathsf{ATR}$ to other problems, we may assume that the given well-ordering has labels: Proposition 3.5. $\mathsf{ATR}$ is Weihrauch equivalent to the following problem. Instances are pairs $(\mathcal{L},A)$ where $\mathcal{L}=(L,0_{L},S,p)$ is a labeled well-ordering and $A\subseteq\mathbb{N}$, with unique solution being the jump hierarchy $\langle X_{a}\rangle_{a\in L}$ which starts with $A$. Proof. 
Given $(L,A)$, we can uniformly compute labels for $\omega\cdot(1+L)$. Then apply the above problem to $(\omega\cdot(1+L),L\oplus A)$ to obtain the jump hierarchy $\langle X_{(n,\alpha)}\rangle_{n\in\omega,\alpha\in 1+L}$ which starts with $L\oplus A$. For the backward reduction, we will use $\langle X_{(0,b)}\rangle_{b\in L}$-effective transfinite recursion along $L$ to define a total $\langle X_{(0,b)}\rangle_{b\in L}$-recursive function $f:L\to\mathbb{N}$ such that $\Phi^{X_{(0,b)}}_{f(b)}$ is total for every $b\in L$ and $\langle\Phi^{X_{(0,b)}}_{f(b)}\rangle_{b\in L}$ is the jump hierarchy on $L$ which starts with $A$. First note that every $X_{(0,b)}$ uniformly computes $(L\oplus A)^{\prime}$, and hence $0_{L}$. This means that it uniformly computes the case division in the following construction. For the base case, $X_{(0,0_{L})}$ uniformly computes $L\oplus A$ and hence $A$. As for $b>_{L}0_{L}$, $X_{(0,b)}$ uniformly computes $L$, hence it uniformly computes $(\bigoplus_{a<_{L}b}X_{(0,a)})^{\prime}$. Therefore it uniformly computes $(\bigoplus_{a<_{L}b}\Phi^{X_{(0,a)}}_{f(a)})^{\prime}$. ∎ The following closure property will be useful for proving Proposition 4.5. This fact also follows from the combination of work of Pauly ($\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ is parallelizable [17]) and Kihara, Marcone, Pauly ($\mathsf{ATR}\equiv_{W}\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ [15]), but we provide a short direct proof. Proposition 3.6. $\mathsf{ATR}$ is parallelizable, i.e., $\widehat{\mathsf{ATR}}\equiv_{W}\mathsf{ATR}$. Proof. It suffices to show that $\widehat{\mathsf{ATR}}\leq_{W}\mathsf{ATR}$. Instead of $\widehat{\mathsf{ATR}}$, we consider the parallelization of the version of $\mathsf{ATR}$ in Proposition 3.5. 
Given $\langle(\mathcal{L}_{i},A_{i})\rangle_{i}$, apply $\mathsf{ATR}$ to $(\sum_{i}L_{i},\bigoplus_{i}L_{i}\oplus A_{i})$ to obtain the jump hierarchy $\langle X_{(i,a)}\rangle_{i\in\omega,a\in L_{i}}$ which starts with $\bigoplus_{i}L_{i}\oplus A_{i}$. For each $i$, we show how to compute the jump hierarchy $\langle X_{a}\rangle_{a\in L_{i}}$ which starts with $A_{i}$ using $(\mathcal{L}_{0}\oplus\mathcal{L}_{i}\oplus\langle X_{(i,a)}\rangle_{a\in L_{i}})$-effective transfinite recursion along $L_{i}$. This is done by defining a total $(\mathcal{L}_{0}\oplus\mathcal{L}_{i}\oplus\langle X_{(i,a)}\rangle_{a\in L_{i}})$-recursive function $f_{i}:L_{i}\to\mathbb{N}$ such that for all $a\in L_{i}$, $\Phi^{X_{(i,a)}}_{f_{i}(a)}$ is total and defines $X_{a}$. (The role of $\mathcal{L}_{0}\oplus\mathcal{L}_{i}$ is to provide the values of $0_{L_{0}}$ and $0_{L_{i}}$ in the following computation.) For the base case, $X_{(i,0_{L_{i}})}$ uniformly computes $X_{(0,0_{L_{0}})}=\bigoplus_{i}L_{i}\oplus A_{i}$, which uniformly computes $A_{i}$. For $b>_{L_{i}}0_{L_{i}}$, $X_{(i,b)}$ uniformly computes $X_{(0,0_{L_{0}})}$, which uniformly computes $L_{i}$, so $X_{(i,b)}$ uniformly computes $(\bigoplus_{a<_{L_{i}}b}X_{(i,a)})^{\prime}$. That in turn uniformly computes $(\bigoplus_{a<_{L_{i}}b}\Phi^{X_{(i,a)}}_{f_{i}(a)})^{\prime}=(\bigoplus_{a<_{L_{i}}b}X_{a})^{\prime}=X_{b}$ as desired. ∎ Henceforth we will primarily work with the following version of $\mathsf{ATR}$: Proposition 3.7. $\mathsf{ATR}$ is Weihrauch equivalent to the following problem: instances are pairs $(\mathcal{L},c)$ where $\mathcal{L}$ is a labeled well-ordering and $c\in L$, with unique solution being $Y_{c}$, where $\langle Y_{a}\rangle_{a\in L}$ is the unique hierarchy such that: • $Y_{0_{L}}=\mathcal{L}$; • if $b$ is the successor of $a$, then $Y_{b}=Y^{\prime}_{a}$; • if $b$ is a limit, then $Y_{b}=\bigoplus_{a<_{L}b}Y_{a}$. Proof.
Using Proposition 3.3, it is easy to see that the above problem is Weihrauch reducible to $\mathsf{ATR}$. Conversely, we reduce the version of $\mathsf{ATR}$ in Proposition 3.5 to the above problem. Given $(\mathcal{L},A)$, define $$M=\omega\cdot(1+(A,<_{\mathbb{N}})+L+1)+1.$$ Formally, the domain of $M$ is $$\displaystyle\{(0,n):n\in\omega\}\cup\{(1,m,n):m\in A,n\in\omega\}$$ $$\displaystyle\cup\{(2,a,n):a\in L,n\in\omega\}\cup\{(3,n):n\in\omega\}\cup\{m_{M}\}$$ with the ordering described above. It is easy to see that $L\oplus A$ uniformly computes $M$ and labels for it. Let $\mathcal{M}$ denote the tuple of $M$ and its labels. Apply the given problem to $\mathcal{M}$ and $m_{M}\in M$ to obtain $Y_{m_{M}}$. Note that since $m_{M}$ is a limit, $Y_{m_{M}}$ uniformly computes $Y_{(0,0)}=\mathcal{M}$, and hence $\langle Y_{c}\rangle_{c\in M}$. For the backward functional, we perform $(\mathcal{L}\oplus\langle Y_{c}\rangle_{c\in M})$-effective transfinite recursion along $L$ to define a total $(\mathcal{L}\oplus\langle Y_{c}\rangle_{c\in M})$-recursive function $f:L\to\mathbb{N}$ such that for each $a\in L$, $\Phi^{Y_{(2,a,1)}}_{f(a)}$ is total and defines the $a^{\text{th}}$ column $X_{a}$ of the jump hierarchy on $L$ which starts with $A$. Note that $\mathcal{L}$ uniformly computes the following case division. For the base case, first use $Y_{(2,0_{L},1)}=Y^{\prime}_{(2,0_{L},0)}$ to compute $Y_{(2,0_{L},0)}$. Now $(2,0_{L},0)$ is a limit, so $Y_{(2,0_{L},0)}$ uniformly computes $Y_{(0,0)}=\mathcal{M}$, which uniformly computes $A$ as desired. For $b>_{L}0_{L}$, since $(2,b,0)$ is a limit, $Y_{(2,b,0)}$ uniformly computes $Y_{(0,0)}=\mathcal{M}$, which uniformly computes $L$. Therefore $Y_{(2,b,0)}$ uniformly computes $\bigoplus_{a<_{L}b}Y_{(2,a,1)}$, and hence $\bigoplus_{a<_{L}b}\Phi^{Y_{(2,a,1)}}_{f(a)}=\bigoplus_{a<_{L}b}X_{a}$. Therefore $Y_{(2,b,1)}$ uniformly computes $X_{b}=(\bigoplus_{a<_{L}b}X_{a})^{\prime}$ as desired.
This completes the definition of $f$, and hence the reduction from the version of $\mathsf{ATR}$ in Proposition 3.5 to the given problem. ∎ Thus far, we have seen that the Weihrauch degree of $\mathsf{ATR}$ is fairly robust with respect to the type of jump hierarchy that it outputs (Propositions 3.3, 3.5, 3.7). However, we still require some level of uniformity in the jump hierarchy produced: Proposition 3.8. The problem of producing the Turing jump of a given set is not Weihrauch reducible to the following problem: instances are pairs $(L,A)$ where $L$ is a well-ordering and $A\subseteq\mathbb{N}$, and solutions to $(L,A)$ are hierarchies $\langle X_{a}\rangle_{a\in L}$ where $X_{0_{L}}=A$ and for all $a<_{L}b$, $X^{\prime}_{a}\leq_{T}X_{b}$. Hence $\mathsf{ATR}$ is not Weihrauch reducible to the latter problem either. Proof. Towards a contradiction, fix forward and backward Turing functionals $\Gamma$ and $\Delta$ witnessing otherwise. We will show that $\Gamma$ and $\Delta$ could fail to produce $\emptyset^{\prime}$ from $\emptyset$. First, $\Gamma^{\emptyset}$ defines some computable $(L,A)$. We claim that there are finite $\langle\sigma_{a}\rangle_{a\in L}$ and $e$ such that $\sigma_{0_{L}}\prec A$ and $\Delta^{\emptyset\oplus\langle\sigma_{a}\rangle_{a\in L}}(e)\!\!\downarrow\neq\emptyset^{\prime}(e)$. Suppose not. Then for each $e$, we may compute $\emptyset^{\prime}(e)$ by searching for $\langle\sigma_{a}\rangle_{a\in L}$ such that $\sigma_{0_{L}}\prec A$ and $\Delta^{\emptyset\oplus\langle\sigma_{a}\rangle_{a\in L}}(e)\!\!\downarrow$, and outputting the resulting value. Such $\langle\sigma_{a}\rangle_{a\in L}$ must exist because if $\langle X_{a}\rangle_{a\in L}$ is a hierarchy on $L$ which starts with $A$ (as defined in the proposition), then $\Delta^{\emptyset\oplus\langle X_{a}\rangle_{a\in L}}$ is total. So $\emptyset^{\prime}$ would be computable, which is a contradiction, thereby proving the claim. Fix any $\langle\sigma_{a}\rangle_{a\in L}$ which satisfies the claim.
It is clear that $\langle\sigma_{a}\rangle_{a\in L}$ can be extended to a solution $\langle X_{a}\rangle_{a\in L}$ to $(L,A)$ for the given problem (e.g., by extending using columns of the usual jump hierarchy). But $\Delta^{\emptyset\oplus\langle X_{a}\rangle_{a\in L}}\neq\emptyset^{\prime}$, contradiction. ∎ If we are willing to allow arithmetic Weihrauch reductions, then $\mathsf{ATR}$ remains robust: Proposition 3.9. $\mathsf{ATR}$ is arithmetically Weihrauch reducible (hence arithmetically Weihrauch equivalent) to the problem in Proposition 3.8. For the proof, we refer the reader to the proof of Proposition 8.11 later. (The only difference is that we use transfinite induction along the given well-ordering to show that we always output a jump hierarchy.) 4. Theorems about embeddings between well-orderings There are several theorems about embeddings between well-orderings which lie around $\mathsf{ATR}_{0}$ in reverse mathematics. Friedman (see [22, notes for Theorem V.6.8, pg. 199]) showed that comparability of well-orderings is equivalent to $\mathsf{ATR}_{0}$. Friedman and Hirst [10] then showed that weak comparability of well-orderings is also equivalent to $\mathsf{ATR}_{0}$. We formulate those two theorems about embeddings as problems: Definition 4.1. Define the following problems: $\mathsf{CWO}$: Given a pair of well-orderings, produce an embedding from one of them onto an initial segment of the other. $\mathsf{WCWO}$: Given a pair of well-orderings, produce an embedding from one of them into the other. Marcone proved the analog of Friedman’s result for Weihrauch reducibility: Theorem 4.2 (see Kihara, Marcone, Pauly [15]). $\mathsf{CWO}\equiv_{W}\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}\equiv_{W}\mathsf{ATR}$. (In fact, he proved the equivalence up to strong Weihrauch reducibility, which we will not define here.)
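On finite presentations, both problems of Definition 4.1 are computable outright; all of the difficulty captured by Theorem 4.2 comes from infinite presentations. The following toy Python sketch (ours, not part of the paper) only illustrates what a $\mathsf{CWO}$-solution looks like: a position-by-position match sends the shorter ordering onto an initial segment of the longer one.

```python
# Toy finite version of CWO (Definition 4.1): inputs are finite
# well-orderings, given as lists of labels in increasing order.
# A position-by-position match gives an initial-segment embedding.

def cwo_finite(L, M):
    """Embed the shorter ordering onto an initial segment of the longer one."""
    if len(L) <= len(M):
        return ("L into M", {a: M[i] for i, a in enumerate(L)})
    return ("M into L", {b: L[i] for i, b in enumerate(M)})

print(cwo_finite(["a", "b"], [10, 20, 30]))
# ('L into M', {'a': 10, 'b': 20})
```

Nothing of the transfinite machinery survives here; the point is only the shape of a solution (a direction together with an order-preserving map onto an initial segment).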
In Theorem 6.3, we prove the analog of Friedman and Hirst’s result for Weihrauch reducibility, i.e., $\mathsf{WCWO}\equiv_{W}\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$. This answers a question of Marcone [15, Question 5.8]. Another class of examples of theorems about embeddings comes from Fraïssé’s conjecture (proved by Laver [16]), which asserts that the set of countable linear orderings is well-quasi-ordered (i.e., any infinite sequence contains a weakly increasing pair) by embeddability. Shore [20] studied the reverse mathematics of various restrictions of Fraïssé’s conjecture. We formulate them as problems: Definition 4.3. Define the following problems: $\mathsf{WQO}_{\mathrm{LO}}$: Given a sequence $\langle L_{i}\rangle$ of linear orderings, produce $i<j$ and an embedding from $L_{i}$ into $L_{j}$. $\mathsf{WQO}_{\mathrm{WO}}$: Given a sequence $\langle L_{i}\rangle$ of well-orderings, produce $i<j$ and an embedding from $L_{i}$ into $L_{j}$. $\mathsf{NDS}_{\mathrm{WO}}$: Given a sequence $\langle L_{i}\rangle$ of well-orderings, and embeddings $\langle F_{i}\rangle$ from each $L_{i+1}$ into $L_{i}$, produce $i<j$ and an embedding from $L_{i}$ into $L_{j}$. $\mathsf{NIAC}_{\mathrm{WO}}$: Given a sequence $\langle L_{i}\rangle$ of well-orderings, produce $i$ and $j$ (we may have $i>j$) and an embedding from $L_{i}$ into $L_{j}$. $\mathsf{NDS}_{\mathrm{LO}}$ and $\mathsf{NIAC}_{\mathrm{LO}}$ can be defined analogously, but we will not study them in this paper. $\mathsf{WQO}_{\mathrm{LO}}$ corresponds to Fraïssé’s conjecture. $\mathsf{WQO}_{\mathrm{WO}}$ is the restriction of Fraïssé’s conjecture to well-orderings. $\mathsf{NDS}_{\mathrm{WO}}$ asserts that there is no infinite strictly descending sequence of well-orderings. $\mathsf{NIAC}_{\mathrm{WO}}$ asserts that there is no infinite antichain of well-orderings. The definitions immediately imply that Proposition 4.4. 
$$\displaystyle\mathsf{NDS}_{\mathrm{WO}}\leq_{W}\mathsf{WQO}_{\mathrm{WO}}\leq_{W}\mathsf{WQO}_{\mathrm{LO}}$$ $$\displaystyle\mathsf{NIAC}_{\mathrm{WO}}\leq_{W}\mathsf{WCWO}\leq_{W}\mathsf{CWO}$$ $$\displaystyle\mathsf{NIAC}_{\mathrm{WO}}\leq_{W}\mathsf{WQO}_{\mathrm{WO}}$$ It is not hard to show that all of the problems in Proposition 4.4, except for $\mathsf{WQO}_{\mathrm{LO}}$, are Weihrauch reducible to $\mathsf{ATR}$. (We bound the strength of $\mathsf{WQO}_{\mathrm{LO}}$ in Corollaries 8.5 and 8.8.) Proposition 4.5. $\mathsf{CWO}\leq_{W}\mathsf{ATR}$ and $\mathsf{WQO}_{\mathrm{WO}}\leq_{W}\mathsf{ATR}$. Proof. Let $Q$ denote the following apparent strengthening of $\mathsf{CWO}$: a $Q$-instance is a pair of well-orderings $(L,M)$, and a $Q$-solution consists of both a $\mathsf{CWO}$-solution $F$ to $(L,M)$ and an indication of whether $L<M$, $L\equiv M$, or $L>M$. Clearly $\mathsf{CWO}\leq_{W}Q$. (Marcone showed that $\mathsf{CWO}\equiv_{W}\mathsf{ATR}$ (Theorem 4.2), so actually $\mathsf{CWO}\equiv_{W}Q$.) We start by showing that $Q\leq_{W}\mathsf{ATR}$. Given $(L,M)$, define $N$ by adding a first element $0_{N}$ and a last element $m_{N}$ to $L$. Apply the version of $\mathsf{ATR}$ in Proposition 3.3 to obtain a hierarchy $\langle X_{a}\rangle_{a\in N}$ such that: • $X_{0_{N}}=L\oplus M$; • for all $b>_{N}0_{N}$, $X_{b}=\left(\bigoplus_{a<_{N}b}X_{a}\right)^{\prime\prime\prime}$. For the backward reduction, we start by using $\langle X_{a}\rangle_{a\in L}$-effective transfinite recursion along $L$ to define a total $\langle X_{a}\rangle_{a\in L}$-recursive function $f:L\to\mathbb{N}$ such that $\{(a,\Phi^{X_{a}}_{f(a)}(0))\in L\times M:\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$ is an embedding of an initial segment of $L$ into an initial segment of $M$. To define $f$, if we are given any $b\in L$ and $f\restriction\{a:a<_{L}b\}$, we need to define $f(b)$, specifically $\Phi^{X_{b}}_{f(b)}(0)$.
Use $X_{b}=(\bigoplus_{a<_{L}b}X_{a})^{\prime\prime\prime}$ to compute whether there is an $M$-least element above $\{\Phi^{X_{a}}_{f(a)}(0):a<_{L}b\}$ (equivalently, whether $M\backslash\{\Phi^{X_{a}}_{f(a)}(0):a<_{L}b\}$ is nonempty). If so, we output said $M$-least element; otherwise diverge. This completes the definition of $\Phi^{X_{b}}_{f(b)}(0)$. Apply the recursion theorem to the definition above to obtain a partial $\langle X_{a}\rangle_{a\in L}$-recursive function $f:L\to\mathbb{N}$. Now, to complete the definition of the backward reduction we consider the following cases. Case 1. $f$ is total. Then we output $\{(a,\Phi^{X_{a}}_{f(a)}(0)):a\in L\}$, which is an embedding from $L$ onto an initial segment of $M$. Case 2. Otherwise, $\{\Phi^{X_{a}}_{f(a)}(0):a\in L,\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}=M$. Then we output $\{(\Phi^{X_{a}}_{f(a)}(0),a):a\in L,\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$, which is an embedding from $M$ onto an initial segment of $L$. Finally, note that the last column $X_{m_{N}}$ of $\langle X_{a}\rangle_{a\in N}$ can compute which case holds and compute the appropriate output for each case. If Case 1 holds but not Case 2, then $L<M$. If Case 2 holds but not Case 1, then $L>M$. If both Case 1 and 2 hold, then $L\equiv M$. Next, we turn our attention to $\mathsf{WQO}_{\mathrm{WO}}$. Observe that $\mathsf{WQO}_{\mathrm{WO}}\leq_{W}\widehat{Q}$: given a sequence $\langle L_{i}\rangle$ of well-orderings, apply $Q$ to each pair $(L_{i},L_{j})$, $i<j$. Search for the least $(i,j)$ such that $Q$ provides an embedding from $L_{i}$ into $L_{j}$, and output accordingly. Finally, $\widehat{Q}\leq_{W}\widehat{\mathsf{ATR}}\equiv_{W}\mathsf{ATR}$ (Proposition 3.6), so $\mathsf{WQO}_{\mathrm{WO}}\leq_{W}\mathsf{ATR}$ as desired. ∎ In the next few sections, we work toward some reversals. 
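The greedy map at the heart of the proof of Proposition 4.5 can be previewed on finite orderings, where the question answered there by the triple jump ("is there an $M$-least unused element?") becomes a direct search. The Python sketch below (a toy of ours, not the paper's construction) exhibits the same case division: either the map becomes total (Case 1) or $M$ is exhausted first (Case 2).

```python
# Finite toy of the greedy construction in Proposition 4.5.
# L, M: finite well-orderings as lists in increasing order.

def greedy_compare(L, M):
    """Map each a in L, in order, to the M-least element not yet used.
    Case 1: the map is total, embedding L onto an initial segment of M.
    Case 2: M runs out first, yielding an embedding of M onto an
    initial segment of L (the elements of L used so far)."""
    used = []
    f = {}
    for a in L:
        remaining = [m for m in M if m not in used]
        if not remaining:                       # Case 2: M exhausted
            return ("M onto initial segment of L",
                    {m: L[i] for i, m in enumerate(used)})
        f[a] = remaining[0]                     # the M-least unused element
        used.append(remaining[0])
    return ("L onto initial segment of M", f)   # Case 1: f is total

print(greedy_compare([1, 2], ["x", "y", "z"]))
# ('L onto initial segment of M', {1: 'x', 2: 'y'})
print(greedy_compare([1, 2, 3], ["x"]))
# ('M onto initial segment of L', {'x': 1})
```

In the transfinite setting the same search requires the jumps supplied by the hierarchy, and both cases can hold simultaneously exactly when $L\equiv M$.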
Central to a reversal (say, from $\mathsf{WCWO}$ to $\mathsf{ATR}$) is the ability to encode information into well-orderings such that we can extract information from an arbitrary embedding between them. Shore [20] showed how to do this if the well-orderings are indecomposable (and constructed appropriately). Definition 4.6. A well-ordering $X$ is indecomposable if it is embeddable in all of its final segments. Indecomposable well-orderings also played an essential role in Friedman and Hirst’s [10] proof that $\mathsf{WCWO}$ implies $\mathsf{ATR}_{0}$ in reverse mathematics. We state two useful properties about indecomposable well-orderings. First, it is easy to show by induction that: Lemma 4.7. If $M$ is indecomposable and $L_{i}$, $i<n$ each embed strictly into $M$, then $\left(\sum_{i<n}L_{i}\right)+M\equiv M$. Second, the following lemma will be useful for extracting information from embeddings between orderings. Lemma 4.8. Let $L$ be a linear ordering and let $M$ be an indecomposable well-ordering which does not embed into $L$. If $F$ embeds $M$ into a finite sum of $L$’s and $M$’s, then the range of $M$ under $F$ must be cofinal in some copy of $M$. Therefore, if $M\cdot k$ embeds into a finite sum of $L$’s and $M$’s, then there must be at least $k$ many $M$’s in the sum. Proof. There are three cases regarding the position of the range of $M$ in the sum. Case 1. $F$ maps some final segment of $M$ into some copy of $L$. Since $M$ is indecomposable, it follows that $M$ embeds into $L$, contradiction. Case 2. $F$ maps some final segment of $M$ into a bounded segment of some copy of $M$. Since $M$ is indecomposable, that implies that $M$ maps into a bounded segment of itself. This contradicts well-foundedness of $M$. Case 3. The remaining case is that the range of $M$ is cofinal in some copy of $M$, as desired. ∎ We remark that for our purposes, we do not need to pay attention to the computational content of the previous two lemmas. 
In addition, unlike in reverse mathematics, we do not need to distinguish between “$M$ does not embed into $L$” and “$L$ strictly embeds into $M$”. 5. An analog of Chen’s theorem In this section, given a labeled well-ordering $\mathcal{L}=(L,0_{L},S,p)$, $\langle Y_{a}\rangle_{a\in L}$ denotes the unique hierarchy on $L$, as defined in Proposition 3.7. (This notation persists for the next two sections, which use results from this section.) We present the technical ingredients needed for our reductions from $\mathsf{ATR}$ to theorems about embeddings between well-orderings. The main result is an analog of the following theorem of Chen, which suggests a bridge from computing jump hierarchies to comparing well-orderings. We will not need Chen’s theorem so we will not define the notation therein; see Shore [20, Theorem 3.5] for details. Theorem 5.1 (Chen [8, Corollary 10.2]). Fix $x\in\mathcal{O}$. There is a recursive function $k(a,n)$ such that for all $a<_{\mathcal{O}}x$ and $n\in\mathbb{N}$, (1) $k(a,n)$ is an index for a recursive well-ordering $K(a,n)$; (2) if $n\in H_{a}$, then $K(a,n)+1\leq\omega^{|x|}$; (3) if $n\notin H_{a}$, then $K(a,n)\equiv\omega^{|x|}$. We adapt Chen’s theorem to our setting, which involves well-orderings instead of notations. Our proof is a direct adaptation of Shore’s proof of Chen’s theorem. We begin by defining some computable operations on trees. Definition 5.2 (Shore [20, Definition 3.9], slightly modified). For any (possibly finite) sequence of trees $\langle T_{i}\rangle$, we define their maximum by joining all $T_{i}$’s at the root, i.e., $$\max(\langle T_{i}\rangle)=\{\langle\rangle\}\cup\{i^{\frown}\sigma:\sigma\in T_{i}\}.$$ Next, we define the minimum of a sequence of trees to be their “staggered common descent tree”.
More precisely, for any (possibly finite) sequence of trees $\langle T_{i}\rangle$, a node at level $n$ of the tree $\min(\langle T_{i}\rangle)$ consists of, for each $i<n$ such that $T_{i}$ is defined, a chain in $T_{i}$ of length $n$. A node extends another node if for each $i$ in their common domain, the $i^{\text{th}}$ chain in the former node is an end-extension of the $i^{\text{th}}$ chain in the latter node. It is easy to see that the maximum and minimum operations play well with the ranks of trees: Lemma 5.3 (Shore [20, Lemma 3.10]). Let $\langle T_{i}\rangle$ be a (possibly finite) sequence of trees. (1) If $\mathrm{rk}(T_{i})\leq\alpha$ for all $i$, then $\mathrm{rk}(\max(\langle T_{i}\rangle))\leq\alpha$. (2) If there is some $i$ such that $T_{i}$ is ill-founded, then $\max(\langle T_{i}\rangle)$ is ill-founded. (3) If some $T_{i}$ is well-founded, then $\min(\langle T_{i}\rangle)$ is well-founded and $\mathrm{rk}(\min(\langle T_{i}\rangle))\leq\mathrm{rk}(T_{i})+i$. (4) If every $T_{i}$ is ill-founded, then $\min(\langle T_{i}\rangle)$ is ill-founded as well. With the maximum and minimum operations in hand, we may prove an analog of Theorem 3.11 in Shore [20]: Theorem 5.4. Given a labeled well-ordering $\mathcal{L}$, we can uniformly compute sequences of trees $\langle g(a,n)\rangle_{n\in\mathbb{N},a\in L}$ and $\langle h(a,n)\rangle_{n\in\mathbb{N},a\in L}$ such that: • if $n\in Y_{a}$, then $\mathrm{rk}(g(a,n))\leq\omega\cdot\mathrm{otp}(L\restriction a)$ and $h(a,n)$ is ill-founded; • if $n\notin Y_{a}$, then $\mathrm{rk}(h(a,n))\leq\omega\cdot\mathrm{otp}(L\restriction a)$ and $g(a,n)$ is ill-founded. Proof. We define $g$ and $h$ by $\mathcal{L}$-effective transfinite recursion on $L$. For the base case (recall $Y_{0_{L}}=\mathcal{L}$), define $g(0_{L},n)$ to be an infinite path of $0$’s for all $n\notin\mathcal{L}$, and the empty node for all $n\in\mathcal{L}$. Define $h(0_{L},n)$ analogously.
For $b$ limit, define $g(b,\langle a,n\rangle)=g(a,n)$ and $h(b,\langle a,n\rangle)=h(a,n)$ for any $n\in\mathbb{N}$ and $a<_{L}b$. For $b=a+1$, fix a Turing functional $W$, viewed as a c.e. set of axioms $\langle P,Q,n\rangle$, such that for any $X$, $W^{X}(n)\!\!\downarrow$ if and only if $n\in X^{\prime}$. In particular, $$n\in Y_{b}\quad\text{iff}\quad(\exists\langle P,Q,n\rangle\in W)(P\subseteq Y_{a}\text{ and }Q\subseteq Y^{c}_{a}).$$ Then define $$h(b,n)=\max(\langle\min(\langle\{h(a,p):p\in P\},\{g(a,q):q\in Q\}\rangle):\langle P,Q,n\rangle\in W\rangle).$$ If $n\in Y_{b}$, then there is some $\langle P,Q,n\rangle\in W$ such that $P\subseteq Y_{a}$ and $Q\subseteq Y_{a}^{c}$. Then every tree in the above minimum for $\langle P,Q,n\rangle$ is ill-founded, so the minimum is itself ill-founded. Hence $h(b,n)$ is ill-founded. If $n\notin Y_{b}$, then for all $\langle P,Q,n\rangle\in W$, either $P\not\subseteq Y_{a}$ or $Q\not\subseteq Y^{c}_{a}$. Either way, all of the above minima have rank $<\omega\cdot\mathrm{otp}(L\restriction a)+\omega$. Hence $h(b,n)$ has rank at most $\omega\cdot\mathrm{otp}(L\restriction a)+\omega\leq\omega\cdot\mathrm{otp}(L\restriction b)$. Similarly, define $$g(b,n)=\min(\langle\max(\langle\{g(a,p):p\in P\},\{h(a,q):q\in Q\}\rangle):\langle P,Q,n\rangle\in W\rangle).$$ This completes the construction for the successor case. ∎ Next, we adapt the above construction to obtain well-founded trees. To that end, for each well-ordering $L$, we aim to compute a tree $(T(\omega\cdot L))^{\infty}$ which is universal for all trees of rank $\leq\omega\cdot\mathrm{otp}(L)$. Shore [20, Definition 3.12] constructs such a tree by effective transfinite recursion. Instead, we use a simpler construction of Greenberg and Montalbán [11]. Definition 5.5. Given a linear ordering $L$, define $T(L)$ to be the tree of finite $<_{L}$-decreasing sequences, ordered by extension. It is easy to see that $L$ is well-founded if and only if $T(L)$ is well-founded, and if $L$ is well-founded, then $\mathrm{rk}(T(L))=\mathrm{otp}(L)$.
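For finite orderings, the construction of Definition 5.5 and the rank computation can be carried out explicitly. The Python sketch below (an illustration of ours; all names are invented) represents a tree as a prefix-closed set of tuples and checks the identity $\mathrm{rk}(T(L))=\mathrm{otp}(L)$ on a small example, where it amounts to the fact that the longest strictly decreasing sequence in a finite order has as many terms as the order has elements.

```python
# Finite sketch of Definition 5.5. A tree is a prefix-closed set of
# tuples; rank is the usual ordinal rank, a natural number here.

def rank(T, node=()):
    """0 on leaves; otherwise 1 + the maximum rank of the children."""
    children = [t for t in T if len(t) == len(node) + 1 and t[:len(node)] == node]
    return 0 if not children else 1 + max(rank(T, c) for c in children)

def T_of(L):
    """T(L): all finite strictly <_L-decreasing sequences, for a finite
    well-ordering L given as a list in increasing order."""
    pos = {x: i for i, x in enumerate(L)}
    tree, frontier = {()}, [()]
    while frontier:
        node = frontier.pop()
        for x in L:
            if not node or pos[x] < pos[node[-1]]:  # keep sequences decreasing
                child = node + (x,)
                tree.add(child)
                frontier.append(child)
    return tree

# rk(T(L)) = otp(L): with |L| = 3, the deepest branch is ('c', 'b', 'a').
print(rank(T_of(["a", "b", "c"])))   # 3
```

The transfinite case of course requires ordinal ranks and effective transfinite recursion; only the combinatorics of "decreasing sequences, ordered by extension" is visible here.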
Definition 5.6 ([11, Definition 3.20]). Given a tree $T$, define a tree $$T^{\infty}=\{\langle(\sigma_{0},n_{0}),\dots,(\sigma_{k},n_{k})\rangle:\langle\rangle\neq\sigma_{0}\subsetneq\dots\subsetneq\sigma_{k}\in T,n_{0},\dots,n_{k}\in\mathbb{N}\},$$ ordered by extension. Lemma 5.7 ([11, §3.2.2]). Let $T$ be well-founded. Then (1) $T^{\infty}$ is well-founded and $\mathrm{rk}(T^{\infty})=\mathrm{rk}(T)$; (2) for every $\sigma\in T^{\infty}$ and $\gamma<\mathrm{rk}_{T^{\infty}}(\sigma)$, there are infinitely many immediate successors $\tau$ of $\sigma$ in $T^{\infty}$ such that $\mathrm{rk}_{T^{\infty}}(\tau)=\gamma$; (3) $\mathrm{KB}(T)$ embeds into $\mathrm{KB}(T^{\infty})$; (4) $\mathrm{KB}(T^{\infty})\equiv\omega^{\mathrm{rk}(T)}+1$, hence $\mathrm{KB}(T^{\infty})-\{\emptyset\}$ is indecomposable. (5) if $S$ is well-founded and $\mathrm{rk}(S)\leq\mathrm{rk}(T)$ ($\mathrm{rk}(S)<\mathrm{rk}(T)$ resp.), then $\mathrm{KB}(S)$ embeds (strictly resp.) into $\mathrm{KB}(T^{\infty})$. Proof. (3) and (5) are not stated in [11], so we give a proof. By (1), fix a rank function $r:T\to\mathrm{rk}(T^{\infty})+1$. We construct an embedding $f:T\to T^{\infty}$ which preserves rank (i.e., $r(\sigma)=\mathrm{rk}_{T^{\infty}}(f(\sigma))$), $<_{\mathrm{KB}}$, and level. Start by defining $f(\emptyset)=\emptyset$. Note that $r(\emptyset)=\mathrm{rk}(T^{\infty})=\mathrm{rk}_{T^{\infty}}(\emptyset)$. Suppose we have defined $f$ on $\sigma\in T$. Then, we extend $f$ by mapping each immediate successor $\tau$ of $\sigma$ to an immediate successor $f(\tau)$ of $f(\sigma)$ such that $r(\tau)=\mathrm{rk}_{T^{\infty}}(f(\tau))$. Such $f(\tau)$ exists by (2). Furthermore, by (2), if we start defining $f$ from the leftmost immediate successor of $\sigma$ and proceed to the right, we can extend $f$ in a way that preserves $<_{\mathrm{KB}}$. This proves (3). (5) follows from (3) applied to $S$ and (4) applied to $S$ and $T$.
∎ Finally, we prove our analog of Chen’s theorem (Theorem 5.1): Theorem 5.8. Given a labeled well-ordering $\mathcal{L}$, we can uniformly compute an indecomposable well-ordering $M$ and well-orderings $\langle K(a,n)\rangle_{n\in\mathbb{N},a\in L}$ such that: • if $n\in Y_{a}$, then $K(a,n)\equiv M$. • if $n\notin Y_{a}$, then $K(a,n)<M$. Proof. Given $\mathcal{L}$, we may use Theorem 5.4, Definition 5.5 and Definition 5.6 to uniformly compute $$\displaystyle M$$ $$\displaystyle=\mathrm{KB}(T(\omega\cdot L)^{\infty})-\{\emptyset\}$$ $$\displaystyle K(a,n)$$ $$\displaystyle=\mathrm{KB}(\min\{T(\omega\cdot L)^{\infty},h(a,n)\})-\{\emptyset\}$$ $$\displaystyle\text{for }n\in\mathbb{N},a\in L.$$ By Lemma 5.7(4), $M$ is indecomposable. Also, $$\displaystyle\mathrm{rk}(T(\omega\cdot L)^{\infty})$$ $$\displaystyle=\omega\cdot\mathrm{otp}(L)$$ $$\displaystyle\text{so}\qquad\mathrm{rk}(\min\{T(\omega\cdot L)^{\infty},h(a,n)\})$$ $$\displaystyle\leq\omega\cdot\mathrm{otp}(L).$$ It then follows from Lemma 5.7(5) that $K(a,n)\leq M$. If $n\in Y_{a}$, then $h(a,n)$ is ill-founded. Fix some descending sequence $\langle\sigma_{i}\rangle_{i}$ in $h(a,n)$. Then we may embed $T(\omega\cdot L)^{\infty}$ into $\min\{T(\omega\cdot L)^{\infty},h(a,n)\}$ while preserving $<_{\mathrm{KB}}$: map $\tau$ to $\langle\langle\tau\restriction i,\sigma_{i}\rangle\rangle_{i=0}^{|\tau|}$. Therefore $M\leq K(a,n)$, showing that $K(a,n)\equiv M$ in this case. If $n\notin Y_{a}$, then $\mathrm{rk}(h(a,n))\leq\omega\cdot\mathrm{otp}(L\restriction a)$. Therefore $$\mathrm{rk}(\min\{T(\omega\cdot L)^{\infty},h(a,n)\})\leq\omega\cdot\mathrm{otp}(L\restriction a)+1.$$ Since $\omega\cdot\mathrm{otp}(L\restriction a)+1<\omega\cdot\mathrm{otp}(L)$, by Lemma 5.7(5), $K(a,n)<M$. ∎ 6. Reducing $\mathsf{ATR}$ to $\mathsf{WCWO}$ In this section, we apply Theorem 5.8 to show that $\mathsf{ATR}\leq_{W}\mathsf{WCWO}$ (Theorem 6.3).
Together with Proposition 4.5, that implies that $\mathsf{WCWO}\equiv_{W}\mathsf{CWO}\equiv_{W}\mathsf{ATR}$. First we work towards some sort of modulus for jump hierarchies. The next two results are adapted from Shore [20, Theorem 2.3]. We have added uniformities where we need them. Proposition 6.1. Given a labeled well-ordering $\mathcal{L}$ and $a\in L$, we can uniformly compute an index for a $\Pi^{0,\mathcal{L}}_{1}$-singleton $\{f\}$ which is strictly increasing, and Turing reductions witnessing that $f\equiv_{T}Y_{a}$. Proof. By $\mathcal{L}$-effective transfinite recursion on $L$, we can compute an index for $Y_{a}$ as a $\Pi^{0,\mathcal{L}}_{2}$-singleton (see Sacks [19, Proposition II.4.1]). Define $f$ to be the join of $Y_{a}$ and the lex-minimal Skolem function which witnesses that $Y_{a}$ satisfies the $\Pi^{0,\mathcal{L}}_{2}$ predicate that we computed. Then we can compute an index for $f$ as a $\Pi^{0,\mathcal{L}}_{1}$-singleton (see Jockusch, McLaughlin [14, Theorem 3.1]). Clearly we can compute Turing reductions witnessing that $Y_{a}\leq_{T}f\leq_{T}\mathcal{L}\oplus Y_{a}$. Next, we can $\mathcal{L}$-uniformly compute a Turing reduction from $Y_{0_{L}}=\mathcal{L}$ to $Y_{a}$, and hence a Turing reduction from $\mathcal{L}\oplus Y_{a}$ to $Y_{a}$. Finally, without loss of generality, we can replace $f:\mathbb{N}\to\mathbb{N}$ with its cumulative sum, which is strictly increasing. ∎ Lemma 6.2. 
There are indices $e_{0}$, $e_{1}$, and $e_{2}$ such that for all labeled well-orderings $\mathcal{L}$ and $a\in L$, there is some strictly increasing $f:\mathbb{N}\to\mathbb{N}$ such that if $Y_{a}$ is the $a^{\text{th}}$ column of the unique hierarchy on $L$, then: (1) $\Phi_{e_{0}}^{\mathcal{L}\oplus a}$ is an index for a Turing reduction from $f$ to $Y_{a}$; (2) for all $g:\mathbb{N}\to\mathbb{N}$, $\Phi^{\mathcal{L}\oplus a\oplus g}_{e_{1}}(0)\!\!\downarrow$ if and only if $g$ does not majorize $f$; (3) for all $g$ which majorizes $f$, $\Phi^{\mathcal{L}\oplus a\oplus g}_{e_{2}}$ is total and defines $Y_{a}$. Proof. Given $\mathcal{L}$ and $a\in L$, first use Proposition 6.1 to compute a tree $T$ with a unique path $f$ which is strictly increasing, and Turing reductions witnessing that $f\equiv_{T}Y_{a}$. This shows (1). Given $g:\mathbb{N}\to\mathbb{N}$, we can compute the $g$-bounded subtree $T_{g}$ of $T$. If $g$ does not majorize $f$, then $T_{g}$ has no infinite path. In that case, $T_{g}$ is finite by König’s lemma, hence we can eventually enumerate that fact. This shows (2). If $g$ majorizes $f$, then we can compute $f$ as follows: $\sigma\prec f$ if and only if for all other $\tau$ with $|\tau|=|\sigma|$, the $g$-bounded subtree of $T$ above $\tau$ is finite. We can then compute $Y_{a}$ from $f$. This shows (3). ∎ We now combine Theorem 5.8 with the above lemma to prove that Theorem 6.3. $\mathsf{ATR}\leq_{W}\mathsf{WCWO}$. Proof. We reduce the version of $\mathsf{ATR}$ in Proposition 3.7 to $\mathsf{WCWO}$. Given a labeled well-ordering $\mathcal{L}$ and $a\in L$, by Lemma 6.2, there is some strictly increasing $f$ such that if $g$ majorizes $f$, then $\mathcal{L}\oplus a\oplus g$ uniformly computes $Y_{a}$. Furthermore, we may compute reductions witnessing $\mathrm{range}(f)\leq_{T}f\leq_{T}Y_{a}$. 
From that we may compute a many-one reduction $r$ from $\mathrm{range}(f)$ to $Y_{a+1}$ (the $(a+1)^{\text{th}}$ column of the unique hierarchy on $(L\restriction\{b:b\leq_{L}a\})+1$). Next, use $\mathcal{L}$ to compute labels for $(L\restriction\{b:b\leq_{L}a\})+1$. Apply Theorem 5.8 to $(L\restriction\{b:b\leq_{L}a\})+1$ (and its labels) to compute an indecomposable well-ordering $M$ and for each $n$, a well-ordering $L_{n}:=K(a+1,r(n))$, such that $$\displaystyle n\in\mathrm{range}(f)$$ $$\displaystyle\Leftrightarrow\quad r(n)\in Y_{a+1}\quad\Leftrightarrow\quad L_{n}\equiv M$$ $$\displaystyle n\notin\mathrm{range}(f)$$ $$\displaystyle\Leftrightarrow\quad r(n)\notin Y_{a+1}\quad\Leftrightarrow\quad L_{n}<M.$$ For the forward functional, consider the following $\mathsf{WCWO}$-instance: $$\sum_{n}M\quad\text{and}\quad\left(\sum_{n}L_{n}\right)+1.$$ Observe that by Lemma 4.7, $\sum_{n}L_{n}$ has the same ordertype as $\sum_{n}M$. Hence any $\mathsf{WCWO}$-solution $F$ must go from left to right. Furthermore, since $M$ is indecomposable, it has no last element, so $F$ must embed $\sum_{n}M$ into $\sum_{n}L_{n}$. For the backward functional, we start by uniformly computing any element $m_{0}$ of $M$. Then we use $F$ to compute the following function: $$g(n)=\pi_{0}(F(\langle n+1,m_{0}\rangle)).$$ We show that $g$ majorizes $f$. For each $n$, the initial segment of $\sum_{n}M$ below $\langle n+1,m_{0}\rangle$ contains the first $n+1$ copies of $M$, so $F$ embeds $M\cdot(n+1)$ into $\sum_{i\leq g(n)}L_{i}$. It follows from Lemma 4.8 that at least $n+1$ of the $L_{i}$’s ($i\leq g(n)$) must have ordertype $M$. That means that there must be at least $n+1$ elements in the range of $f$ which are at most $g(n)$, i.e., $f(n)\leq g(n)$. Since $g$ majorizes $f$, $\mathcal{L}\oplus a\oplus g$ uniformly computes $Y_{a}$ by Lemma 6.2, as desired. ∎ Using Theorem 6.3 and Proposition 4.5, we conclude that Corollary 6.4. $\mathsf{CWO}\equiv_{W}\mathsf{ATR}\equiv_{W}\mathsf{WCWO}$. 7.
Reducing $\mathsf{ATR}$ to $\mathsf{NDS}_{\mathrm{WO}}$ and $\mathsf{NIAC}_{\mathrm{WO}}$ Shore [20, Theorem 3.7] showed that in reverse mathematics, $\mathsf{NDS}_{\mathrm{WO}}$ (formulated as a $\Pi^{1}_{2}$ sentence) implies $\mathsf{ATR}_{0}$ over $\mathsf{RCA}_{0}$. We adapt his proof to show that Theorem 7.1. $\mathsf{ATR}\leq_{W}\mathsf{C}_{\mathbb{N}}\ast\mathsf{NDS}_{\mathrm{WO}}$. In particular, $\mathsf{ATR}\leq_{c}\mathsf{NDS}_{\mathrm{WO}}$ and $\mathsf{ATR}\leq_{W}^{\mathrm{arith}}\mathsf{NDS}_{\mathrm{WO}}$. Proof. We reduce the version of $\mathsf{ATR}$ in Proposition 3.7 to $\mathsf{NDS}_{\mathrm{WO}}$. Given a labeled well-ordering $\mathcal{L}$ and $a\in L$, by Lemma 6.2, there is some strictly increasing $f$ such that if $g$ majorizes $f$, then $\mathcal{L}\oplus a\oplus g$ uniformly computes $Y_{a}$. Furthermore, as in the proof of Theorem 6.3, we may compute a many-one reduction $r$ from $f$ to $Y_{a+1}$. Next, use $\mathcal{L}$ to compute labels for $(L\restriction\{b:b\leq_{L}a\})+1$. Apply Theorem 5.8 to $(L\restriction\{b:b\leq_{L}a\})+1$ to compute an indecomposable well-ordering $M$ and for each $i$ and $n$, a well-ordering $K(a+1,r(i,n))$, such that $$\displaystyle f(i)=n$$ $$\displaystyle\Leftrightarrow\quad r(i,n)\in Y_{a+1}\quad\Leftrightarrow\quad K(a+1,r(i,n))\equiv M$$ $$\displaystyle f(i)\neq n$$ $$\displaystyle\Leftrightarrow\quad r(i,n)\notin Y_{a+1}\quad\Leftrightarrow\quad K(a+1,r(i,n))<M.$$ For the forward functional, define for each $j$ and $n$: $$\displaystyle L_{j,n}$$ $$\displaystyle=\sum_{j\leq i<n}K(a+1,r(i,n))$$ $$\displaystyle N_{j}$$ $$\displaystyle=\sum_{n}L_{j,n}.$$ For each $j$ and $n$, $L_{j+1,n}$ uniformly embeds into $L_{j,n}$. So for each $j$, we can uniformly embed $N_{j+1}$ into $N_{j}$. Hence $\langle N_{j}\rangle_{j}$ (with said embeddings) is an $\mathsf{NDS}_{\mathrm{WO}}$-instance. Apply $\mathsf{NDS}_{\mathrm{WO}}$ to obtain some embedding $F:N_{j}\to N_{k}$, $j<k$.
For the backward functional, we aim to compute a sequence $\langle h_{q}\rangle_{q}$ of functions such that $h_{q}$ majorizes $f$ for all sufficiently large $q$. We start by uniformly computing any element $m_{0}$ of $M$. Then for each $q$, define $$h_{q}(0)=q\quad\text{and}\quad h_{q}(n+1)=\pi_{0}(F(\langle h_{q}(n)+1,m_{0}\rangle)).$$ We show that $h_{f(k)}$ majorizes $f$. (Hence for all $q\geq f(k)$, $h_{q}$ majorizes $f$.) For this proof, temporarily set $q=f(k)$. We show by induction on $n$ that $h_{q}(n)\geq f(k+n)$. The base case $n=0$ holds by definition of $q$. Suppose $h_{q}(n)\geq f(k+n)$. For each $j\leq i\leq k+n$, $K(a+1,r(i,f(i)))$ is a summand in $L_{j,f(i)}$ (because $f(i)>i$), which is in turn a summand in $\sum_{m\leq h_{q}(n)}L_{j,m}$. That implies that $M\cdot(k+n-j+1)$ embeds into $\sum_{m\leq h_{q}(n)}L_{j,m}$, which lies below $\langle h_{q}(n)+1,m_{0}\rangle$ in $N_{j}$. Composing with $F$, we deduce that $M\cdot(k+n-j+1)$ embeds into the initial segment of $N_{k}$ below $F(\langle h_{q}(n)+1,m_{0}\rangle)$, which is contained in $\sum_{m\leq h_{q}(n+1)}L_{k,m}$. It follows from Lemma 4.8 that there are at least $k+n-j+1$ many copies of $M$ in $\sum_{m\leq h_{q}(n+1)}L_{k,m}$. Therefore, there are at least $k+n-j+1$ many elements in $\{f(i):i\geq k\}$ below $h_{q}(n+1)$. It follows that $$h_{q}(n+1)\geq f(k+(k+n-j+1)-1)=f(2k+n-j)\geq f(k+n+1)$$ as desired, where the last inequality uses $j<k$. This completes the proof of the inductive step. We have shown that $h_{f(k)}$ majorizes $f$. Finally, by Lemma 6.2(2), given $\mathcal{L}\oplus a\oplus\langle h_{q}\rangle_{q}$, we may apply $\mathsf{C}_{\mathbb{N}}$ (Definition 2.6) to compute some $q$ such that $h_{q}$ majorizes $f$. Then $\mathcal{L}\oplus a\oplus h_{q}$ uniformly computes $Y_{a}$ by Lemma 6.2(3), as desired. ∎ The above proof can be easily modified to show that Theorem 7.2. $\mathsf{ATR}\leq_{W}\mathsf{C}_{\mathbb{N}}\ast\mathsf{NIAC}_{\mathrm{WO}}$.
In particular, $\mathsf{ATR}\leq_{c}\mathsf{NIAC}_{\mathrm{WO}}$ and $\mathsf{ATR}\leq_{W}^{\mathrm{arith}}\mathsf{NIAC}_{\mathrm{WO}}$. Proof. Given $\mathcal{L}$ and $a\in L$, compute $\langle L_{j,n}\rangle_{j,n}$ and $\langle N_{j}\rangle_{j}$ as in the proof of Theorem 7.1. Then consider the $\mathsf{NIAC}_{\mathrm{WO}}$-instance $\langle N_{j}+j\rangle_{j}$. Given an embedding $F:N_{j}+j\to N_{k}+k$, first observe that by Lemma 4.7, $N_{j}$ and $N_{k}$ have the same ordertype, namely that of $M\cdot\omega$. Hence $j<k$. Furthermore, since $M$ is indecomposable, $F$ must embed $N_{j}$ into $N_{k}$. The backward functional is then identical to that in Theorem 7.1. ∎ We do not know if $\mathsf{ATR}\leq_{W}\mathsf{NDS}_{\mathrm{WO}}$, $\mathsf{ATR}\leq_{W}\mathsf{NIAC}_{\mathrm{WO}}$, or even $\mathsf{ATR}\leq_{W}\mathsf{WQO}_{\mathrm{WO}}$. 8. Two-sided problems Many of the problems we have considered thus far have domains which are $\Pi^{1}_{1}$. For instance, the domain of $\mathsf{CWO}$ is the set of pairs of well-orderings. In that case, being outside the domain is a $\Sigma^{1}_{1}$ property. Now, any $\Sigma^{1}_{1}$ property can be thought of as a problem whose instances are sets satisfying said property and solutions are sets which witness that said property holds. This suggests that we combine a problem which has a $\Pi^{1}_{1}$ domain with the problem corresponding to the complement of its domain. One obvious way to combine such problems is to take their union. For example, a “two-sided” version of $\mathsf{WCWO}$ could map pairs of well-orderings to any embedding between them, and map other pairs of linear orderings to any infinite descending sequence in either linear ordering. We will not consider such problems in this paper, because they are not Weihrauch reducible (or even arithmetically Weihrauch reducible) to $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$. 
(Any such reduction could be used to give a $\Sigma^{1}_{1}$ definition for the set of indices of pairs of well-orderings. See also Brattka, de Brecht, Pauly [4, Theorem 7.7].) On the other hand, it is not hard to see that the problems corresponding to Fraïssé’s conjecture ($\mathsf{WQO}_{\mathrm{LO}}$) and König’s duality theorem (see section 9) are Weihrauch reducible to $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$. However, note that embeddings between linear orderings can still exist even if either linear ordering is ill-founded! This suggests an alternative method of combination, resulting in the following “two-sided” extensions of $\mathsf{CWO}$ and $\mathsf{WCWO}$. Definition 8.1. Define the following problems: $\mathsf{CWO}_{2}$: Given linear orderings $L$ and $M$, either produce an embedding from one of them onto an initial segment of the other, or an infinite descending sequence in either ordering. In either case we indicate which type of solution we produce. $\mathsf{WCWO}_{2}$: Given linear orderings $L$ and $M$, either produce an embedding from one of them into the other, or an infinite descending sequence in either ordering. In either case we indicate which type of solution we produce. It is not hard to see that whether solutions to instances of the above problems come with an indication of their type does not affect the Weihrauch degree of the problems. Hence we include the type for our convenience. Next, we define a two-sided version of $\mathsf{ATR}$. In section 9, we will show that it is closely related to König’s duality theorem (Theorem 9.25). Recall our definition of a jump hierarchy: Definition 3.1. Given a linear ordering $L$ with first element $0_{L}$ and a set $A\subseteq\mathbb{N}$, a jump hierarchy on $L$ which begins with $A$ is a set $\langle X_{a}\rangle_{a\in L}$ such that • $X_{0_{L}}=A$; • for every $b>_{L}0_{L}$, $X_{b}=\left(\bigoplus_{a<_{L}b}X_{a}\right)^{\prime}$.
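As a quick illustration (ours, not from the paper), consider the simplest case $L=\omega$: a jump hierarchy beginning with $A$ satisfies $$X_{0}=A,\qquad X_{1}=(X_{0})^{\prime}=A^{\prime},\qquad X_{n+1}=\Big(\bigoplus_{a\leq n}X_{a}\Big)^{\prime},$$ so by induction each $X_{n}$ computes the iterated jump $A^{(n)}$. On an ill-founded $L$ there is no such bottom-up induction available, which is why the two-sided problem defined next allows infinite descending sequences as an alternative type of solution.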
Jump hierarchies on ill-founded linear orderings were first studied by Harrison [12], and are often called pseudohierarchies. See, for example, [22, Section V.4]. Definition 8.2. We define a two-sided version of $\mathsf{ATR}$ as follows: $\mathsf{ATR}_{2}$: Given a linear ordering $L$ and a set $A\subseteq\mathbb{N}$, either produce an infinite $<_{L}$-descending sequence $S$, or a jump hierarchy $\langle X_{a}\rangle_{a\in L}$ on $L$ which begins with $A$. In either case we indicate which type of solution we produce. (Just as for $\mathsf{CWO}_{2}$ and $\mathsf{WCWO}_{2}$, this does not affect the Weihrauch degree of $\mathsf{ATR}_{2}$.) Just as for $\mathsf{CWO}$ and $\mathsf{WCWO}$, if we require an $\mathsf{ATR}_{2}$-solution to an ill-founded $L$ to be an infinite $<_{L}$-descending sequence, then the resulting problem is not Weihrauch reducible to $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$. The same holds if we require an $\mathsf{ATR}_{2}$-solution to $L$ to be a jump hierarchy whenever $L$ supports a jump hierarchy, because Theorem 8.3 (Harrington, personal communication). The set of indices for linear orderings which support a jump hierarchy is $\Sigma^{1}_{1}$-complete. A Weihrauch reduction from the aforementioned variant of $\mathsf{ATR}_{2}$ to $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ would yield a $\Pi^{1}_{1}$ definition of the set of indices for linear orderings which support a jump hierarchy, contradicting Harrington’s result. Next, we determine the positions of $\mathsf{CWO}_{2}$, $\mathsf{WCWO}_{2}$, and $\mathsf{ATR}_{2}$ relative to $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ and $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ in the Weihrauch degrees. In addition, even though we are not viewing $\mathsf{WQO}_{\mathrm{LO}}$ (Fraïssé’s conjecture) as a two-sided problem, most of our arguments and results hold for $\mathsf{WQO}_{\mathrm{LO}}$ as well.
First observe that each of $\mathsf{CWO}$, $\mathsf{WCWO}$, and $\mathsf{ATR}$ is trivially Weihrauch reducible to its two-sided version. By Corollary 6.4 and the fact that $\mathsf{ATR}\equiv_{W}\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ (Kihara, Marcone, Pauly [15]), these two-sided problems lie above $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ in the Weihrauch degrees. We do not know if $\mathsf{WQO}_{\mathrm{LO}}$ lies above $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$ in the Weihrauch degrees. Next observe that $\mathsf{CWO}_{2}$, $\mathsf{WCWO}_{2}$, $\mathsf{ATR}_{2}$, and $\mathsf{WQO}_{\mathrm{LO}}$ are each defined by an arithmetic predicate on an arithmetic domain. It easily follows that they lie below $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ in the Weihrauch degrees. In fact, they lie strictly below $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$: Proposition 8.4. Suppose that $P$ is an arithmetically defined multivalued function such that $\mathrm{dom}(P)$ is not $\Pi^{1}_{1}$. If $Q$ is arithmetically defined and $\mathrm{dom}(Q)$ is arithmetic, then $P$ is not arithmetically Weihrauch reducible to $Q$. Proof. If $P$ is arithmetically Weihrauch reducible to $Q$ via arithmetically defined functionals $\Phi$ and $\Psi$, then we could give a $\Pi^{1}_{1}$ definition for $\mathrm{dom}(P)$ as follows: $X\in\mathrm{dom}(P)$ if and only if $$\Phi(X)\in\mathrm{dom}(Q)\land\forall Y[Y\in Q(\Phi(X))\to\Psi(X\oplus Y)\in P(X)].$$ Contradiction. ∎ Corollary 8.5. $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ is not arithmetically Weihrauch reducible to any of $\mathsf{CWO}_{2}$, $\mathsf{WCWO}_{2}$, $\mathsf{ATR}_{2}$, or $\mathsf{WQO}_{\mathrm{LO}}$. Proof. Each of $\mathsf{CWO}_{2}$, $\mathsf{WCWO}_{2}$, $\mathsf{ATR}_{2}$, and $\mathsf{WQO}_{\mathrm{LO}}$ is arithmetically defined with arithmetic domain. $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ is also arithmetically defined, but its domain is $\Sigma^{1}_{1}$-complete. Apply Proposition 8.4.
∎ Next we show that $\mathsf{CWO}_{2}$, $\mathsf{WCWO}_{2}$, $\mathsf{ATR}_{2}$, and $\mathsf{WQO}_{\mathrm{LO}}$ are not Weihrauch reducible (or even computably reducible) to $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$. First we have a boundedness argument: Lemma 8.6. Suppose $P(X,Y)$ is a $\Pi^{1}_{1}$ predicate and $D$ is a $\Sigma^{1}_{1}$ set of reals. If for every $X\in D$, there is some hyperarithmetic $Y$ such that $P(X,Y)$ holds, then there is some $b\in\mathcal{O}$ such that for every $X\in D$, there is some $Y\leq_{T}H_{b}$ such that $P(X,Y)$. Proof. Consider the following $\Pi^{1}_{1}$ predicate of $X$ and $a$: $$X\notin D\lor(a\in\mathcal{O}\land(\exists e)(\Phi^{H_{a}}_{e}\text{ is total and }P(X,\Phi^{H_{a}}_{e}))).$$ By $\Pi^{1}_{1}$-uniformization, there is a $\Pi^{1}_{1}$ predicate $Q(X,a)$ uniformizing it. Then the set $$\{a:(\exists X\in D)Q(X,a)\}=\{a:(\exists X\in D)(\forall b\neq a)\neg Q(X,b)\}$$ is $\Sigma^{1}_{1}$ and contained in $\mathcal{O}$. Therefore it is bounded by some $b\in\mathcal{O}$, proving the desired statement. ∎ Corollary 8.7. Each of $\mathsf{WCWO}_{2}$, $\mathsf{CWO}_{2}$, $\mathsf{ATR}_{2}$, and $\mathsf{WQO}_{\mathrm{LO}}$ has a computable instance with no hyperarithmetic solution. Proof. By the contrapositive of Lemma 8.6, it suffices to show that for all $b\in\mathcal{O}$, there is a computable instance of each problem with no $H_{b}$-computable solution. Observe that for all $b\in\mathcal{O}$, there is a computable instance of $\mathsf{ATR}$ such that none of its solutions are computable in $H_{b}$. (Note that the domain of $\mathsf{ATR}$ is not $\Sigma^{1}_{1}$, so we cannot apply Lemma 8.6 to show that there is a computable instance of $\mathsf{ATR}$ with no hyperarithmetic solution. The latter statement is clearly false.)
The following reductions imply that the same holds for $\mathsf{WCWO}_{2}$, $\mathsf{CWO}_{2}$, $\mathsf{ATR}_{2}$, and $\mathsf{WQO}_{\mathrm{LO}}$: $$\mathsf{ATR}\leq_{W}\mathsf{WCWO}\leq_{W}\mathsf{WCWO}_{2}\leq_{W}\mathsf{CWO}_{2}\quad\text{(Theorem 6.3)}$$ $$\mathsf{ATR}\leq_{W}\mathsf{ATR}_{2}$$ $$\mathsf{ATR}\leq_{c}\mathsf{WQO}_{\mathrm{LO}}\quad\text{(Theorem 7.1)}$$ This completes the proof. ∎ Corollary 8.7 implies that Corollary 8.8. $\mathsf{WCWO}_{2}$, $\mathsf{CWO}_{2}$, $\mathsf{ATR}_{2}$, and $\mathsf{WQO}_{\mathrm{LO}}$ are not computably reducible or arithmetically Weihrauch reducible to $\mathsf{UC}_{\mathbb{N}^{\mathbb{N}}}$. 8.1. $\mathsf{ATR}_{2}$ and variants thereof In this subsection, we prove some results regarding $\mathsf{ATR}_{2}$ and its variants. First we have several results showing that $\mathsf{ATR}_{2}$ is fairly robust. Next we show that $\mathsf{CWO}_{2}\leq_{W}\mathsf{ATR}_{2}$ (Theorem 8.12), in analogy with $\mathsf{CWO}\leq_{W}\mathsf{ATR}$ (Proposition 4.5). We start with the following analog of Proposition 3.3: Proposition 8.9. $\mathsf{ATR}_{2}$ is Weihrauch equivalent to the following problem. Instances are triples $(L,A,\Theta)$ where $L$ is a linear ordering, $A\subseteq\mathbb{N}$, and $\Theta(n,Y,A)$ is an arithmetical formula whose only free variables are $n$, $Y$ and $A$. Solutions are either infinite $<_{L}$-descending sequences, or hierarchies $\langle Y_{a}\rangle_{a\in L}$ such that for all $b\in L$, $Y_{b}=\{n:\Theta(n,\bigoplus_{a<_{L}b}Y_{a},A)\}$. (As usual, solutions come with an indication of their type.) Proof. Roughly speaking, we extend the reductions defined in Proposition 3.3. First, $\mathsf{ATR}_{2}$ is Weihrauch reducible to the above problem: for the forward reduction, given $(L,A)$, consider $(L,A,\Theta)$ where $\Theta(n,Y,A)$ holds if either $Y=\emptyset$ and $n\in A$, or $n\in Y^{\prime}$.
The backward reduction is the identity. Conversely, given $(L,A,\Theta)$, let $k$ be one greater than the number of quantifier alternations in $\Theta$. Apply $\mathsf{ATR}_{2}$ to $(1+k\cdot L+2,L\oplus A)$. If we obtain an infinite descending sequence in $1+k\cdot L+2$, we can uniformly compute an infinite descending sequence in $L$ and output that. Otherwise, we obtain a jump hierarchy $\langle X_{\alpha}\rangle_{\alpha\in 1+k\cdot L+2}$. We want to use it to either compute a hierarchy on $L$, or an infinite $<_{L}$-descending sequence. We start by using the recursion theorem to compute a partial $\langle X_{(a,k-1)}\rangle_{a\in L}$-recursive function $f:L\to\mathbb{N}$, as described in the proof of Proposition 3.3. Note that $f$ may not be total. Next, we compute $(\langle X_{(a,k-1)}\rangle_{a\in L})^{\prime\prime}$ and use that to decide whether $f$ is total. If so, following the proof of Proposition 3.3, we may compute a hierarchy on $L$ with the desired properties. If not, we use $(\langle X_{(a,k-1)}\rangle_{a\in L})^{\prime\prime}$ to compute the complement of the domain of $f$ in $L$. This set has no $<_{L}$-least element, by construction of $f$. Therefore, we can uniformly compute an infinite $<_{L}$-descending sequence within it. ∎ Just as we defined labeled well-orderings, we may also define labeled linear orderings if said linear orderings have first elements. Then we have the following analog of Proposition 3.5: Proposition 8.10. $\mathsf{ATR}_{2}$ is Weihrauch equivalent to the following problem: an instance is a labeled linear ordering $\mathcal{L}$ and a set $A\subseteq\mathbb{N}$, and a solution is an $\mathsf{ATR}_{2}$-solution to $(L,A)$. Proof. It suffices to reduce $\mathsf{ATR}_{2}$ to the given problem. Given $(L,A)$, we start by computing $\omega\cdot(1+L)$ and labels for it. Then we apply the given problem to $\omega\cdot(1+L)$ (and its labels) and the set $L\oplus A$.
If we obtain an infinite descending sequence in $\omega\cdot(1+L)$, we can uniformly compute an infinite descending sequence in $L$ and output that. Otherwise, we obtain a jump hierarchy $\langle X_{(n,\alpha)}\rangle_{n\in\omega,\alpha\in 1+L}$ which starts with $L\oplus A$. First use this hierarchy to compute $L^{\prime\prime}$, which tells us whether $L$ has a first element. If not, we can uniformly compute an infinite descending sequence in $L$ and output that. Otherwise, we use the recursion theorem to compute a partial $\langle X_{(0,b)}\rangle_{b\in L}$-recursive function $f:L\to\mathbb{N}$, as described in the proof of Proposition 3.5. Then we compute $$S=\left\{b\in L:\langle\Phi^{X_{(0,a)}}_{f(a)}\rangle_{a<_{L}b}\text{ defines a jump hierarchy}\right\}$$ and consider two cases. Case 1. If $S$ is all of $L$, then we output $\langle\Phi^{X_{(0,a)}}_{f(a)}\rangle_{a\in L}$, which is a jump hierarchy on $L$ which starts with $A$. Case 2. Otherwise, observe that by construction of $f$, $L\backslash S$ has no $<_{L}$-least element. Then we can compute an infinite $<_{L}$-descending sequence in $L\backslash S$ and output that. Finally, note that $\langle X_{(n,\alpha)}\rangle_{n\in\omega,\alpha\in 1+L}$ can compute the above case division and the output in each case. ∎ Proposition 8.10 will be useful in section 9. Using similar ideas, we can show that Proposition 8.11. $\mathsf{ATR}_{2}$ is arithmetically Weihrauch equivalent to the following problem: an instance is a linear ordering $L$ and a set $A\subseteq\mathbb{N}$, and a solution is an infinite $<_{L}$-descending sequence, or some $\langle X_{a}\rangle_{a\in L}$ such that $X_{0_{L}}=A$ and $X^{\prime}_{a}\leq_{T}X_{b}$ for all $0_{L}\leq_{L}a<_{L}b$. Proof. It suffices to construct an arithmetic Weihrauch reduction from $\mathsf{ATR}_{2}$ to the given problem. Given $(L,A)$, the forward functional outputs $(L,L\oplus A)$.
To define the backward functional: if the above problem gives us some infinite $<_{L}$-descending sequence then we output that. Otherwise, suppose we are given $\langle X_{a}\rangle_{a\in L}$ such that $X_{0_{L}}=A$ and $X^{\prime}_{a}\leq_{T}X_{b}$ for all $0_{L}\leq_{L}a<_{L}b$. We start by attempting to use $(\langle X_{a}\rangle_{a\in L})^{\prime\prime\prime}$-effective transfinite recursion along $L$ to define a partial $(\langle X_{a}\rangle_{a\in L})^{\prime\prime\prime}$-recursive function $f:L\to\mathbb{N}$ such that $\langle\Phi^{X_{a}}_{f(a)}\rangle_{a\in L}$ is a jump hierarchy on $L$ which starts with $A$. For the base case, we use $X_{0_{L}}=L\oplus A$ to uniformly compute $A$. For $b>_{L}0_{L}$, first use $(\bigoplus_{a\leq_{L}b}X_{a})^{\prime\prime\prime}$ to find Turing reductions (for each $a<_{L}b$) witnessing that $X^{\prime}_{a}\leq_{T}X_{b}$. Then we can use $X_{b}$ to compute $(\bigoplus_{a<_{L}b}\Phi^{X_{a}}_{f(a)})^{\prime}$. This completes the definition of $f$. Next, compute $$S=\left\{b\in L:\langle\Phi^{X_{a}}_{f(a)}\rangle_{a<_{L}b}\text{ defines a jump hierarchy}\right\}$$ and consider two cases. Case 1. If $S$ is all of $L$, then we output $\langle\Phi^{X_{a}}_{f(a)}\rangle_{a\in L}$, which is a jump hierarchy on $L$ which starts with $A$. Case 2. Otherwise, observe that by construction of $f$, $L\backslash S$ has no $<_{L}$-least element. Then we can compute an infinite $<_{L}$-descending sequence in $L\backslash S$ and output that. Finally, note that by choosing $n$ sufficiently large, $(\langle X_{a}\rangle_{a\in L})^{(n)}$ can compute the above case division and the output in each case. ∎ Next, in analogy with $\mathsf{CWO}\leq_{W}\mathsf{ATR}$ (Proposition 4.5), we have that Theorem 8.12. $\mathsf{CWO}_{2}\leq_{W}\mathsf{ATR}_{2}$. Proof. Given linear orderings $(L,M)$, define $N$ by adding a first element $0_{N}$ and a last element $m_{N}$ to $L$. Apply $\mathsf{ATR}_{2}$ to the linear ordering $N$ and the set $L\oplus M$.
If we obtain an infinite descending sequence in $N$, we can use that to uniformly compute an infinite descending sequence in $L$. Otherwise, using Proposition 8.9, we may assume that we obtain a hierarchy $\langle X_{a}\rangle_{a\in N}$ such that: • $X_{0_{N}}=L\oplus M$; • for all $b>_{N}0_{N}$, $X_{b}=\left(\bigoplus_{a<_{N}b}X_{a}\right)^{\prime\prime\prime}$. We start by attempting to use $\langle X_{a}\rangle_{a\in L}$-effective transfinite recursion along $L$ to define a partial $\langle X_{a}\rangle_{a\in L}$-recursive function $f:L\to\mathbb{N}$ such that $\{(a,\Phi^{X_{a}}_{f(a)}(0))\in L\times M:\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$ is an embedding of an initial segment of $L$ into an initial segment of $M$. To define $f$, if we are given any $b\in L$ and $f\restriction\{a:a<_{L}b\}$, we need to define $f(b)$, specifically $\Phi^{X_{b}}_{f(b)}(0)$. First use $X_{b}=(\bigoplus_{a<_{L}b}X_{a})^{\prime\prime\prime}$ to compute whether all of the following hold: (1) for all $a<_{L}b$, $\Phi^{X_{a}}_{f(a)}(0)$ converges and outputs some element of $M$; (2) $\{\Phi^{X_{a}}_{f(a)}(0):a<_{L}b\}$ is an initial segment of $M$; (3) there is an $M$-least element above $\{\Phi^{X_{a}}_{f(a)}(0):a<_{L}b\}$. If so, we output said $M$-least element; otherwise diverge. This completes the definition of $\Phi^{X_{b}}_{f(b)}(0)$. Apply the recursion theorem to the definition above to obtain a partial $\langle X_{a}\rangle_{a\in L}$-recursive function $f:L\to\mathbb{N}$. Now, to complete the definition of the backward reduction we consider the following cases. Case 1. $f$ is total. Then following the proof of Proposition 4.5, we output $\{(a,\Phi^{X_{a}}_{f(a)}(0)):a\in L\}$, which is an embedding from $L$ onto an initial segment of $M$. Case 2. There is no $L$-least element above $\{a\in L:\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$. Then we can output an infinite $L$-descending sequence above $\{a\in L:\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$. Case 3. 
$\{\Phi^{X_{a}}_{f(a)}(0):a\in L,\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}=M$. Then following the proof of Proposition 4.5, we output $\{(\Phi^{X_{a}}_{f(a)}(0),a):a\in L,\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$, which is an embedding from $M$ onto an initial segment of $L$. Case 4. There is no $M$-least element above $\{\Phi^{X_{a}}_{f(a)}(0):a\in L,\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$. Then we can output an infinite $M$-descending sequence above $\{\Phi^{X_{a}}_{f(a)}(0):a\in L,\Phi^{X_{a}}_{f(a)}(0)\!\!\downarrow\}$. Finally, note that the last column $X_{m_{N}}$ of $\langle X_{a}\rangle_{a\in N}$ can compute the above case division and the appropriate output for each case. ∎ 9. König’s duality theorem In this section, we study König’s duality theorem from the point of view of computable reducibilities. First we state some definitions from graph theory. A graph $G$ is bipartite if its vertex set can be partitioned into two sets such that all edges in $G$ go from one of the sets to the other. It is not hard to see that $G$ is bipartite if and only if it has no odd cycle. (Hence the property of being bipartite is $\Pi^{0}_{1}$.) A matching in a graph is a set of edges which are pairwise vertex-disjoint. A (vertex) cover in a graph is a set of vertices which contains at least one endpoint of every edge. König’s duality theorem states that: Theorem 9.1. For any bipartite graph $G$, there is a matching $M$ and a cover $C$ which are dual, i.e., $C$ is obtained by choosing exactly one vertex from each edge in $M$. Such a pair $(C,M)$ is said to be a König cover. König proved the above theorem for finite graphs, where it is commonly stated as “the maximum size of a matching is equal to the minimum size of a cover”. For infinite graphs, this latter form would have little value. Instead of merely asserting the existence of a bijection, we want such a bijection to respect the structure of the graph. Hence the notion of a König cover.
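For a finite taste of this definition (our illustration, not part of the paper's proofs), the following brute-force search finds all König covers of a three-edge bipartite graph; the names `edges`, `is_matching`, `is_cover`, and `konig_covers` are ours.

```python
from itertools import combinations, product

# Brute-force illustration (ours): find all Konig covers (C, M) of a small
# bipartite graph, where M is a matching and the cover C is dual to M,
# i.e. obtained by picking exactly one endpoint of each edge of M.
edges = [("a", "x"), ("b", "x"), ("b", "y")]  # bipartite sides {a,b}, {x,y}

def is_matching(m):
    endpoints = [v for e in m for v in e]
    return len(endpoints) == len(set(endpoints))  # pairwise vertex-disjoint

def is_cover(c):
    return all(u in c or v in c for (u, v) in edges)

def konig_covers():
    for k in range(len(edges) + 1):
        for m in combinations(edges, k):
            if not is_matching(m):
                continue
            for choice in product(*m):  # one endpoint per matched edge
                c = set(choice)
                if len(c) == len(m) and is_cover(c):
                    yield c, m

solutions = list(konig_covers())
assert solutions  # Konig's duality theorem guarantees at least one
```

On this graph every König cover uses the matching $\{(a,x),(b,y)\}$; the dual covers are $\{a,b\}$, $\{b,x\}$, and $\{x,y\}$.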
Podewski and Steffens [18] proved König’s duality theorem for countable graphs. Finally, Aharoni [1] proved it for graphs of arbitrary cardinality. In this paper, we will study the theorem for countable graphs. Definition 9.2. $\mathsf{KDT}$ is the following problem: given a (countable) bipartite graph $G$, produce a König cover $(C,M)$. Aharoni, Magidor, Shore [2] studied König’s duality theorem for countable graphs from the point of view of reverse mathematics. They showed that $\mathsf{ATR}_{0}$ is provable from König’s duality theorem. They also showed that König’s duality theorem is provable in the system $\Pi^{1}_{1}$-$\mathsf{CA}_{0}$, which is strictly stronger than $\mathsf{ATR}_{0}$. Simpson [21] then closed the gap by showing that König’s duality theorem is provable in (hence equivalent to) $\mathsf{ATR}_{0}$. The proof of $\mathsf{ATR}_{0}$ from König’s duality theorem in [2] easily translates into a Weihrauch reduction from $\mathsf{ATR}$ to $\mathsf{KDT}$. We adapt their proof to show that $\mathsf{ATR}_{2}$ is Weihrauch reducible to $\mathsf{LPO}\ast\mathsf{KDT}$ (Theorem 9.25). Next, we adapt Simpson’s proof [21] of König’s duality theorem from $\mathsf{ATR}_{0}$ to show that $\mathsf{KDT}$ is arithmetically Weihrauch reducible to $\mathsf{ATR}_{2}$ (Theorem 9.27). It follows that $\mathsf{ATR}_{2}$ and $\mathsf{KDT}$ are arithmetically Weihrauch equivalent. Since both $\mathsf{ATR}_{2}$ and $\mathsf{KDT}$ have computational difficulty far above the arithmetic (see, for example, Corollary 8.7), this shows that $\mathsf{ATR}_{2}$ and $\mathsf{KDT}$ have roughly the same computational difficulty. Before constructing the above reductions, we make some easy observations about $\mathsf{KDT}$. Proposition 9.3. $\mathsf{KDT}\leq_{W}\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$, but $\mathsf{C}_{\mathbb{N}^{\mathbb{N}}}$ is not even arithmetically Weihrauch reducible to $\mathsf{KDT}$. Proof.
The first statement holds because $\mathsf{KDT}$ is defined by an arithmetic predicate on an arithmetic domain. The second statement follows from Proposition 8.4. ∎ Proposition 9.4. $\mathsf{KDT}$ is parallelizable, i.e., $\widehat{\mathsf{KDT}}\leq_{W}\mathsf{KDT}$. Proof. This holds because the disjoint union of bipartite graphs is bipartite, and any König cover of a disjoint union of graphs restricts to a König cover on each graph. ∎ We do not know if $\mathsf{ATR}_{2}$ is parallelizable; a negative answer would separate $\mathsf{ATR}_{2}$ and $\mathsf{KDT}$ up to Weihrauch reducibility. Since being a bipartite graph is a $\Pi^{0}_{1}$ property (in particular $\Pi^{1}_{1}$), we could define two-sided $\mathsf{KDT}$ ($\mathsf{KDT}_{2}$): given a graph, produce an odd cycle (witnessing that the given graph is not bipartite) or a König cover. This produces a problem which is Weihrauch equivalent to $\mathsf{KDT}$, however: Proposition 9.5. $\mathsf{KDT}_{2}\leq_{W}\mathsf{LPO}\times\mathsf{KDT}$, hence $\mathsf{KDT}\equiv_{W}\mathsf{KDT}_{2}$. Proof. Given a $\mathsf{KDT}_{2}$-instance $G$ (i.e., a graph), we can uniformly compute a graph $H$ which is always bipartite and is equal to $G$ if $G$ is bipartite: $H$ has the same vertices as $G$, but as we enumerate edges of $G$ into $H$, we omit any edges that would result in an odd cycle in the graph we have enumerated thus far. For the reduction, we apply $\mathsf{LPO}\times\mathsf{KDT}$ to $(G,H)$. If $\mathsf{LPO}$ (Definition 2.6) tells us that $G$ is bipartite, we output a $\mathsf{KDT}$-solution to $H=G$. Otherwise, we can uniformly compute and output an odd cycle in $G$. Finally, to conclude that $\mathsf{KDT}\equiv_{W}\mathsf{KDT}_{2}$, we use Proposition 9.4 and the fact that $\mathsf{LPO}\leq_{W}\mathsf{KDT}$, which trivially follows from Theorem 9.19 later. ∎ 9.1. 
Reducing $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$ For both of our forward reductions (from $\mathsf{ATR}$ or $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$), the bipartite graphs we construct are sequences of subtrees of $\mathbb{N}^{<\mathbb{N}}$. In subsection 2.2, we defined these objects and described how we represent them. In this section, we will use “tree” as a shorthand for “rooted subtree of $\mathbb{N}^{<\mathbb{N}}$”. Before we describe the forward reductions in more detail, we describe our backward reduction for $\mathsf{ATR}\leq_{W}\mathsf{KDT}$. It only uses the cover in a König cover and not the matching. First we define a coding mechanism: Definition 9.6. Given a tree $T$ (with root $r$) and a König cover $(C,M)$ of $T$, we can decode the bit $b$, which is the Boolean value of $r\in C$. We say that $(C,M)$ codes $b$. More generally, given any sequence of trees $\langle T_{n}:n\in X\rangle$ (with roots $r_{n}$) and a König cover $(C_{n},M_{n})$ for each $T_{n}$, we can uniformly decode the following set from the set $\langle(C_{n},M_{n})\rangle$: $$A=\{n\in X:r_{n}\in C_{n}\}.$$ We say that $\langle(C_{n},M_{n})\rangle$ codes $A$. A priori, different König covers of the same tree or sequence of trees can code different bits or sets respectively. A tree or sequence of trees is good if that cannot happen: Definition 9.7. A tree $T$ is good if its root $r$ lies in $C$ for every König cover $(C,M)$ of $T$, or lies outside $C$ for every König cover $(C,M)$ of $T$. A sequence of trees $\langle T_{n}\rangle$ is good if every $T_{n}$ is good. In other words, $\langle T_{n}\rangle$ is good if all of its König covers code the same set. If $\langle T_{n}\rangle$ is good and every (equivalently, some) König cover of $\langle T_{n}\rangle$ codes $A$, we say that $\langle T_{n}\rangle$ codes $A$. We will use this coding mechanism to define the backward reduction in $\mathsf{ATR}\leq_{W}\mathsf{KDT}$. 
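As a finite sanity check of this coding mechanism (ours, not the paper's), one can enumerate all König covers of the two small trees that serve as base cases in Lemma 9.8 below and confirm that each is good; the helper names are ours.

```python
from itertools import combinations, product

# A finite sanity check (ours) that the two base-case trees of Lemma 9.8
# are good: trees are prefix-closed sets of tuples, and the edges of a tree
# join each nonempty node to its parent.
def konig_covers(tree):
    edges = [(s[:-1], s) for s in tree if s != ()]

    def is_matching(m):
        endpoints = [v for e in m for v in e]
        return len(endpoints) == len(set(endpoints))

    def is_cover(c):
        return all(u in c or v in c for (u, v) in edges)

    for k in range(len(edges) + 1):
        for m in combinations(edges, k):
            if not is_matching(m):
                continue
            for choice in product(*m):  # one endpoint per matched edge
                c = set(choice)
                if len(c) == len(m) and is_cover(c):
                    yield c, m

zero_tree = {()}             # the tree {<>}
one_tree = {(), (0,), (1,)}  # the tree {<>, <0>, <1>}

# every Konig cover of zero_tree omits the root: it codes the bit 0
assert all(() not in c for c, _ in konig_covers(zero_tree))
# every Konig cover of one_tree contains the root: it codes the bit 1
assert all(() in c for c, _ in konig_covers(one_tree))
```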
Here we make a trivial but important observation: for any $s\in\mathbb{N}^{<\mathbb{N}}$ and any tree $T$, the König covers of $T$ and the König covers of $s^{\frown}T$ are in obvious correspondence, which respects whichever bit is coded. Hence $T$ is good if and only if $s^{\frown}T$ is good. Next, we set up the machinery for our forward reductions. Aharoni, Magidor, and Shore’s [2] proof of $\mathsf{ATR}_{0}$ from $\mathsf{KDT}$ uses effective transfinite recursion along the given well-ordering to construct good trees which code complicated sets. The base case is as follows: Lemma 9.8. Given any $A\subseteq\mathbb{N}$, we can uniformly compute a sequence of trees $\langle T_{n}\rangle$ which codes $A$. Proof. The tree $\{\langle\rangle\}$ codes the bit $0$. This is because any matching must be empty, hence any dual cover must be empty. The tree $\{\langle\rangle,\langle 0\rangle,\langle 1\rangle\}$ codes the bit $1$. This is because any matching must contain exactly one of the two edges. Hence any cover dual to that must consist of a single node. But the root node is the only node which would cover both edges. By defining each $T_{n}$ to be either of the above trees as appropriate, we obtain a sequence $\langle T_{n}\rangle$ which codes $A$. ∎ We may use this as the base case for our construction as well. As for the successor case, however, we want to extract extra information from the construction in [2]. The issue is that when reducing $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$, “effective transfinite recursion” on ill-founded linear orderings may produce garbage.
(Of particular concern is that the resulting trees may not be good.) Nevertheless, we may attempt it anyway. If we detect inconsistencies in the resulting trees and König covers (using the extra information we have extracted), then we may use them to compute an infinite descending sequence in the given linear ordering. Otherwise, we may decode the resulting König covers to produce a jump hierarchy. In order to describe our construction in detail, we need to examine the construction in [2] closely. First we state a sufficient condition on a König cover of a tree and a node in said tree which ensures that the given König cover, when restricted to the subtree above the given node, remains a König cover. The set of all nodes satisfying the former condition forms a subtree: Definition 9.9. For any tree $T$ (with root $r$) and any König cover $(C,M)$ of $T$, define the subtree $T^{\ast}$ (with root $r$): $$T^{\ast}=\{t\in T:\forall s\,(r\prec s\preceq t\rightarrow(s\notin C\lor(s\restriction(|s|-1),s)\notin M))\}.$$ The motivation behind the definition of $T^{\ast}$ is as follows. Suppose $(C,M)$ is a König cover of $T$. If $s\in C$ and $(s\restriction(|s|-1),s)\in M$, then $C$ restricted to the subtree of $T$ above $s$ would contain $s$, but $M$ restricted to said subtree would not contain any edge with endpoint $s$. This means that the restriction of $(C,M)$ to said subtree is not a König cover. Hence we define $T^{\ast}$ to avoid this situation. According to [2, Lemma 4.5], this is the only situation we need to avoid. When we use the notation $T^{\ast}$, the cover $(C,M)$ will always be clear from context. Observe that $T^{\ast}$ is uniformly computable from $T$ and $(C,M)$. Lemma 9.10. For any $T$ and any König cover $(C,M)$ of $T$, define $T^{\ast}$ as above. Then for any $t\in T^{\ast}$, $(C,M)$ restricts to a König cover of the subtree of $T$ (not $T^{\ast}$!) above $t$. Proof. Proceed by induction on the level of $t$ using [2, Lemma 4.5].
∎ Using Definition 9.9 and Lemma 9.10, we may easily show that: Proposition 9.11. Let $(C,M)$ be a König cover of $T$. Suppose that $t\in T^{\ast}$. Let $S$ denote the subtree of $T$ above $t$. Then $S^{\ast}$ is contained in $T^{\ast}$, where $S^{\ast}$ is calculated using the restriction of $(C,M)$ to $S$. Next, we define a computable operation on trees which forms the basis of the proofs of [2, Lemmas 4.9, 4.10]. Definition 9.12. Given a (possibly finite) sequence of trees $\langle T_{i}\rangle$, each with the empty node as root, we may combine it to form a single tree $S$, by adjoining two copies of each $T_{i}$ to a root node $r$. Formally, $$S=\{r\}\cup\{r^{\frown}(i,0)^{\frown}\sigma:\sigma\in T_{i}\}\cup\{r^{\frown}(i,1)^{\frown}\sigma:\sigma\in T_{i}\}.$$ Logically, the combine operation can be thought of as $\neg\forall$: Lemma 9.13. Suppose $\langle T_{i}:i\in X\rangle$ combine to form $S$.
Let $r$ denote the root of $S$, and for each $i\in X$, let $r_{i,0}$ and $r_{i,1}$ denote the roots of the two copies of $T_{i}$ in $S$ (i.e., $r_{i,0}=r^{\frown}(i,0)$ and $r_{i,1}=r^{\frown}(i,1)$). Given any König cover $(C,M)$ of $S$, for each $i\in X$, we can uniformly computably choose one of $r_{i,0}$ or $r_{i,1}$ (call our choice $r_{i}$) such that: • $r_{i}\in S^{\ast}$; • $r\notin C$ if and only if for all $i\in X$, $r_{i}\in C$. Therefore if $\langle T_{n}:n\in X\rangle$ codes the set $A\subseteq X$, then $S$ codes the bit $0$ if and only if $A=X$. Proof. Given a König cover $(C,M)$ of $S$ and some $i\in X$, we choose $r_{i}$ as follows. If neither $(r,r_{i,0})$ nor $(r,r_{i,1})$ lies in $M$, then define $r_{i}=r_{i,0}\in S^{\ast}$. Otherwise, since $M$ is a matching, exactly one of $(r,r_{i,0})$ and $(r,r_{i,1})$ lies in $M$, say $(r,r_{i,j})$. If $r\notin C$, we choose $r_{i}=r_{i,1-j}\in S^{\ast}$. If $r\in C$, note that since $(r,r_{i,j})\in M$, we have (by duality) that $r_{i,j}\notin C$. Then we choose $r_{i}=r_{i,j}\in S^{\ast}$. This completes the definition of $r_{i}$. If $r\notin C$, then for all $i\in X$ and $j<2$, $r_{i,j}\in C$ because $(r,r_{i,j})$ must be covered by $C$. In particular, $r_{i}\in C$ for all $i\in X$. If $r\in C$, then (by duality) there is a unique $i\in X$ and $j<2$ such that $(r,r_{i,j})\in M$. In that case, we chose $r_{i}=r_{i,j}\notin C$. ∎ In the above lemma, it is important to note that our choice of each $r_{i}$ depends on the König cover $(C,M)$; in fact it depends on both $C$ and $M$. We can now use the combine operation to implement $\neg$.
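The combine operation of Definition 9.12, and the choice of $r_{i}$ made in the proof of Lemma 9.13, are both explicitly computable. The following is a minimal sketch for finite trees, where a tree is a set of tuples with root $()$, a cover $C$ is a set of nodes, and a matching $M$ is a set of two-element frozensets; the function names are ours, not the paper's.

```python
# Toy versions of the combine operation (Definition 9.12) and the choice of
# r_i from the proof of Lemma 9.13, for finite trees.  Nodes are tuples; the
# root is the empty tuple ().  These names and encodings are illustrative.

def combine(trees):
    """Adjoin two copies of each T_i below a fresh root (Definition 9.12)."""
    S = {()}  # the root r, encoded as the empty tuple
    for i, T in enumerate(trees):
        for j in (0, 1):
            for sigma in T:
                S.add(((i, j),) + sigma)  # r^frown(i,j)^frown(sigma)
    return S

def choose_roots(trees, C, M):
    """For each i, pick r_i in {r_{i,0}, r_{i,1}} as in the proof of Lemma 9.13.

    C is the vertex cover, M the matching of a König cover of combine(trees).
    """
    r = ()
    choice = {}
    for i in range(len(trees)):
        matched = [j for j in (0, 1) if frozenset({r, ((i, j),)}) in M]
        if not matched:
            choice[i] = ((i, 0),)          # neither edge is in M
        else:
            j = matched[0]                 # M is a matching: at most one such j
            if r not in C:
                choice[i] = ((i, 1 - j),)
            else:
                choice[i] = ((i, j),)      # by duality, r_{i,j} is not in C
    return choice
```

For instance, combining the single one-node tree yields a root with two children; its only König cover is $C=\{r\}$ matched to one child, and the procedure picks that matched child, which indeed lies outside $C$.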
Definition 9.14. The complement of $T$, denoted $\overline{T}$, is defined by combining the single-element sequence $\langle T\rangle$. By Lemma 9.13, if $T$ codes the bit $i$, then $\overline{T}$ codes the bit $1-i$. Next, we work towards iterating the combine operation to implement the jump, with the eventual goal of proving a generalization of [2, Lemma 4.7]. In order to reason about trees which are formed by iterating the combine operation, we generalize Lemma 9.13 slightly: Lemma 9.15. Suppose $\langle T_{i}:i\in X\rangle$ combine to form the subtree of $S$ above some $r\in S$. For each $i\in X$, let $r_{i,0}$ and $r_{i,1}$ denote the roots of the two copies of $T_{i}$ in $S$ above $r$. Given any König cover $(C,M)$ of $S$ such that $r\in S^{\ast}$, for each $i$, we can uniformly computably choose one of $r_{i,0}$ or $r_{i,1}$ (call our choice $r_{i}$) such that • $r_{i}\in S^{\ast}$; • $r\notin C$ if and only if for all $i\in X$, $r_{i}\in C$. Proof. By Lemma 9.10, $(C,M)$ restricts to a König cover of the subtree of $S$ above $r$. Apply Lemma 9.13 to the subtree of $S$ above $r$, then use Proposition 9.11. ∎ We may now present a more general and more informative version of [2, Lemma 4.7]. Lemma 9.16. 
Given a sequence of trees $\langle T_{i}:i\in\mathbb{N}\rangle$ (each with the empty node as root), we can uniformly compute a sequence of trees $\langle S_{e}:e\in\mathbb{N}\rangle$ (each with the empty node as root) such that given a König cover $(C_{e},M_{e})$ of $S_{e}$, we can uniformly compute a sequence of sets of nodes $\langle R_{e,i}\rangle_{i}$ in $S^{\ast}_{e}$ such that (1) each $r\in R_{e,i}$ has length two or three; (2) for each $i$ and each $r\in R_{e,i}$, the subtree of $S_{e}$ above $r$ is $r^{\frown}T_{i}$; (3) if the set $A\subseteq\mathbb{N}$ is such that $$i\in A\;\Rightarrow\;R_{e,i}\subseteq C_{e},\qquad i\notin A\;\Rightarrow\;R_{e,i}\subseteq\overline{C_{e}},$$ then $e\in A^{\prime}$ if and only if the root of $S_{e}$ lies in $C_{e}$. Therefore, if $\langle T_{i}\rangle$ codes a set $A$, then $\langle S_{e}\rangle$ codes $A^{\prime}$. Iterating the combine operation (as we will do in the following proof) introduces a complication, which necessitates the assumption in (3). For each $e$ and $i$, instead of choosing a single node $r_{i}$ as in Lemma 9.15, we now have to choose a set of nodes $R_{e,i}$. This is because we might want to copy the tree $T_{i}$ more than twice, at multiple levels of the tree $S_{e}$. If $T_{i}$ is not good (Definition 9.7), these copies could code different bits (according to appropriate restrictions of $(C_{e},M_{e})$), so we could have $R_{e,i}\not\subseteq C_{e}$ and $R_{e,i}\not\subseteq\overline{C_{e}}$. In that case, we have little control over whether the root of $S_{e}$ lies in $C_{e}$.
Also, in the assumption of (3), we write $\Rightarrow$ instead of $\Leftrightarrow$ because writing $\Leftrightarrow$ would require us to specify separately that we do not restrict whether $i\in A$ in the case that $R_{e,i}$ is empty. (In the following proof, $R_{e,i}$ could be empty if the construction of $S_{e}$ does not involve $T_{i}$ at all.) Proof of Lemma 9.16. We start by constructing $S_{e}$. Observe that $e\in A^{\prime}$ if and only if $$\neg\forall(\sigma,s)\in\{(\sigma,s):\Phi^{\sigma}_{e,s}(e)\!\!\downarrow\}\;\neg\forall i\in\mathrm{dom}(\sigma)\;[(\sigma(i)=1\land i\in A)\lor(\sigma(i)=0\land\neg(i\in A))].$$ Each occurrence of $\neg\forall$ or $\neg$ corresponds to one application of the combine operation in our construction of $S_{e}$. Formally, for each finite partial $\sigma:\mathbb{N}\to 2$ and $i\in\mathrm{dom}(\sigma)$, define $T^{\sigma}_{i}=T_{i}$ if $\sigma(i)=1$, otherwise define $T^{\sigma}_{i}=\overline{T_{i}}$. Now, for each $\sigma$ and $s$ such that $\Phi^{\sigma}_{e,s}(e)\!\!\downarrow$, define $T_{\sigma,s}$ by combining $\langle T^{\sigma}_{i}:i\in\mathrm{dom}(\sigma)\rangle$. Finally, combine $\langle T_{\sigma,s}:\Phi^{\sigma}_{e,s}(e)\!\!\downarrow\rangle$ to form $S_{e}$. Next, given a König cover $(C_{e},M_{e})$ of $S_{e}$, we construct $\langle R_{e,i}\rangle_{i}$ as follows.
First apply Lemma 9.15 to $\langle T_{\sigma,s}:\Phi^{\sigma}_{e,s}(e)\!\!\downarrow\rangle$ and $(C_{e},M_{e})$ to choose $\langle r_{\sigma,s}:\Phi^{\sigma}_{e,s}(e)\!\!\downarrow\rangle\subseteq S^{\ast}_{e}$ such that • the subtree of $S_{e}$ above each $r_{\sigma,s}$ is $r_{\sigma,s}{}^{\frown}T_{\sigma,s}$; • the root of $S_{e}$ lies in $C_{e}$ if and only if there is some $\sigma$ and $s$ such that $\Phi^{\sigma}_{e,s}(e)\!\!\downarrow$ and $r_{\sigma,s}\notin C_{e}$. Next, for each $\sigma$ and $s$ such that $\Phi^{\sigma}_{e,s}(e)\!\!\downarrow$, apply Lemma 9.15 to $\langle T^{\sigma}_{i}:i\in\mathrm{dom}(\sigma)\rangle$ and the König cover $(C_{e},M_{e})$ restricted to the subtree of $S_{e}$ above $r_{\sigma,s}$. This produces $\langle r^{\sigma,s}_{i}:i\in\mathrm{dom}(\sigma)\rangle\subseteq S^{\ast}_{e}$ (all extending $r_{\sigma,s}$) such that • the subtree of $S_{e}$ above each $r^{\sigma,s}_{i}$ is $r^{\sigma,s}_{i}{}^{\frown}T^{\sigma}_{i}$; • $r_{\sigma,s}\notin C_{e}$ if and only if $r^{\sigma,s}_{i}\in C_{e}$ for all $i\in\mathrm{dom}(\sigma)$.
Finally, for each $\sigma$ and $s$ such that $\Phi^{\sigma}_{e,s}(e)\!\!\downarrow$ and each $i$ such that $\sigma(i)=0$, apply Lemma 9.15 to the single-element sequence $\langle T_{i}\rangle$ and $(C_{e},M_{e})$ restricted to the subtree of $S_{e}$ above $r^{\sigma,s}_{i}$ to obtain $\overline{r}^{\sigma,s}_{i}\in S^{\ast}_{e}$ extending $r^{\sigma,s}_{i}$ such that • the subtree of $S_{e}$ above $\overline{r}^{\sigma,s}_{i}$ is $\overline{r}^{\sigma,s}_{i}{}^{\frown}T_{i}$; • $r^{\sigma,s}_{i}\in C_{e}$ if and only if $\overline{r}^{\sigma,s}_{i}\notin C_{e}$. Define $$R_{e,i}=\{r^{\sigma,s}_{i}:\Phi^{\sigma}_{e,s}(e)\!\!\downarrow,\ \sigma(i)=1\}\cup\{\overline{r}^{\sigma,s}_{i}:\Phi^{\sigma}_{e,s}(e)\!\!\downarrow,\ \sigma(i)=0\}.$$ First observe that each $r^{\sigma,s}_{i}$ has length two and each $\overline{r}^{\sigma,s}_{i}$ has length three. Hence (1) holds. Next, since $T^{\sigma}_{i}=T_{i}$ if $\sigma(i)=1$, the subtree of $S_{e}$ above each $r\in R_{e,i}$ is $r^{\frown}T_{i}$, i.e., (2) holds. We prove that (3) holds. Suppose that $A\subseteq\mathbb{N}$ is such that $$i\in A\;\Rightarrow\;R_{e,i}\subseteq C_{e},\qquad i\notin A\;\Rightarrow\;R_{e,i}\subseteq\overline{C_{e}}.$$ Now, $e\in A^{\prime}$ if and only if there is some $\sigma\prec A$ and $s$ such that $\Phi^{\sigma}_{e,s}(e)\!\!\downarrow$.
By our assumption on $A$ and the definition of $R_{e,i}$, that holds if and only if there is some $\sigma$ and $s$ such that $\Phi^{\sigma}_{e,s}(e)\!\!\downarrow$ and for all $i\in\mathrm{dom}(\sigma)$: $$\sigma(i)=1\;\Leftrightarrow\;r^{\sigma,s}_{i}\in C_{e},\qquad\sigma(i)=0\;\Leftrightarrow\;\overline{r}^{\sigma,s}_{i}\notin C_{e}.$$ Chasing through the above definitions, we see that the above holds if and only if the root of $S_{e}$ lies in $C_{e}$, as desired. Finally, suppose that $\langle T_{i}\rangle$ codes the set $A$. We show that $\langle S_{e}\rangle$ codes $A^{\prime}$. Fix a König cover $\langle(C_{e},M_{e})\rangle$ of $\langle S_{e}\rangle$. First we show that the assumption in (3) holds for $A$. Fix $e,i\in\mathbb{N}$. If $R_{e,i}$ is empty, the desired statement holds. Otherwise, fix $r\in R_{e,i}$. Since $r$ lies in $S^{\ast}_{e}$, Lemma 9.10 says that $(C_{e},M_{e})$ restricts to a König cover of the subtree of $S_{e}$ above $r$. By (2), the subtree of $S_{e}$ above $r$ is $r^{\frown}T_{i}$. Since $T_{i}$ codes $A(i)$, so does $r^{\frown}T_{i}$. We conclude that $$r\in C_{e}\quad\Leftrightarrow\quad\text{the root of }T_{i}\in C_{i}\quad\Leftrightarrow\quad i\in A.$$ It follows that the assumption in (3) holds for $A$. Now by (3), $e\in A^{\prime}$ if and only if the root of $S_{e}$ lies in $C_{e}$. Since this holds for every König cover $\langle(C_{e},M_{e})\rangle$ of $\langle S_{e}\rangle$, $\langle S_{e}\rangle$ codes $A^{\prime}$ as desired. ∎ Remark 9.17.
In the proof of Lemma 9.16, we could just as well have defined $R_{e,i}$ to be the set of all nodes in $S^{\ast}_{e}$ which are roots of copies of $T_{i}$. (Formally, for each $T_{\sigma,s}$ such that $\Phi^{\sigma}_{e,s}(e)\!\!\downarrow$, we could include the roots of the component $T^{\sigma}_{i}$’s if $\sigma(i)=1$, and the roots of the component $T_{i}$’s in the $T^{\sigma}_{i}$’s if $\sigma(i)=0$, as long as they lie in $S^{\ast}_{e}$.) Next, we make two small tweaks to Lemma 9.16. First, we adjust conclusion (3) to fit our definition of jump hierarchy (Definition 3.1). Second, we broaden the scope of our conclusions to include König covers of copies of $S_{n}$, not just König covers of $S_{n}$ itself. Lemma 9.18 is the central lemma behind our reductions from $\mathsf{ATR}$ and $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$. Lemma 9.18. Given a sequence of sequences of trees $\langle\langle T^{a}_{n}\rangle_{n}\rangle_{a}$ (each with the empty node as root), we can uniformly compute a sequence of trees $\langle S_{n}\rangle_{n}$ (each with the empty node as root) such that for any $s_{n}\in\mathbb{N}^{<\mathbb{N}}$ and any König cover $(C_{n},M_{n})$ of $s_{n}{}^{\frown}S_{n}$, we can uniformly compute a sequence of sets of nodes $\langle R^{a}_{n,i}\rangle_{a,i}$ in $(s_{n}{}^{\frown}S_{n})^{\ast}$ such that (1) each $r\in R^{a}_{n,i}$ has length two or three (plus the length of $s_{n}$); (2) for each $a$, $i$, and each $r\in R^{a}_{n,i}$, the subtree of $s_{n}{}^{\frown}S_{n}$ above $r$ is $r^{\frown}T^{a}_{i}$; (3) suppose that for each $a$, the set $Y_{a}\subseteq\mathbb{N}$ is such that $$i\in Y_{a}\;\Rightarrow\;R^{a}_{n,i}\subseteq C_{n},\qquad i\notin Y_{a}\;\Rightarrow\;R^{a}_{n,i}\subseteq\overline{C_{n}},$$ then $n\in\left(\bigoplus_{a}Y_{a}\right)^{\prime}$ if and only if $s_{n}$ lies in $C_{n}$. Therefore, if for each $a$, $\langle T^{a}_{n}\rangle_{n}$ codes a set $Y_{a}$, then $\langle S_{n}\rangle_{n}$ codes $\left(\bigoplus_{a}Y_{a}\right)^{\prime}$. Proof. Apply Lemma 9.16 to $\langle T^{a}_{n}\rangle_{a,n}$. Given a König cover $(C_{n},M_{n})$ of $s_{n}{}^{\frown}S_{n}$, we may compute the corresponding König cover of $S_{n}$ (as we observed after Definition 9.7). Then apply Lemma 9.16 to obtain $\langle R^{a}_{n,i}\rangle_{n,i}$ in $S_{n}^{\ast}$. It is straightforward to check that $\langle s_{n}{}^{\frown}R^{a}_{n,i}\rangle_{n,i}$ satisfies conclusions (1)–(3). ∎ As a warmup for our reduction from $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$, we use Lemma 9.18 to prove that $\mathsf{ATR}\leq_{W}\mathsf{KDT}$. Our proof is essentially the same as that of [2, Theorem 4.11].
Note that we do not use the sets $R^{a}_{n,i}$ in the following proof, only the final conclusion of Lemma 9.18. (The sets $R^{a}_{n,i}$ will be used in our reduction from $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$.) Theorem 9.19. $\mathsf{ATR}\leq_{W}\mathsf{KDT}$. Proof. We reduce the version of $\mathsf{ATR}$ in Proposition 3.5 to $\mathsf{KDT}$. Given a labeled well-ordering $\mathcal{L}$ and a set $A$, we will use $(\mathcal{L}\oplus A)$-effective transfinite recursion on $L$ to define an $(\mathcal{L}\oplus A)$-recursive function $f:L\to\omega$ such that for each $b\in L$, $\Phi^{\mathcal{L}\oplus A}_{f(b)}$ is interpreted as a sequence of trees $\langle T^{b}_{n}\rangle_{n}$ (each with the empty node as root). We will show that $\langle T^{b}_{n}\rangle_{n}$ codes the $b^{\text{th}}$ column of the jump hierarchy on $L$ which starts with $A$. For the base case, we use Lemma 9.8 to compute a sequence of trees $\langle T^{0_{L}}_{n}\rangle_{n}$ which codes $A$. Otherwise, for $b>_{L}0_{L}$, we use Lemma 9.18 to compute a sequence of trees $\langle T^{b}_{n}\rangle_{n}$ such that if for each $a<_{L}b$, $\Phi^{\mathcal{L}\oplus A}_{f(a)}$ is (interpreted as) a sequence of trees $\langle T^{a}_{n}\rangle_{n}$ which codes $Y_{a}$, then $\langle T^{b}_{n}\rangle_{n}$ codes $\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}$. Note that $f$ is total: for any $b$, we can interpret $\langle\Phi^{\mathcal{L}\oplus A}_{f(a)}\rangle_{a<_{L}b}$ as a sequence of sequences of trees and apply Lemma 9.18 to obtain $\langle T^{b}_{n}\rangle_{n}$. This also means that every $\langle T^{b}_{n}\rangle_{n}$ (for $b>_{L}0_{L}$) was obtained using Lemma 9.18. We may view the disjoint union of $\langle\langle T^{b}_{n}\rangle_{n}\rangle_{b\in L}$ as a $\mathsf{KDT}$-instance. This defines the forward reduction from $\mathsf{ATR}$ to $\mathsf{KDT}$. For the backward reduction, let $\langle\langle(C^{b}_{n},M^{b}_{n})\rangle_{n}\rangle_{b\in L}$ be a solution to the above $\mathsf{KDT}$-instance. 
We may uniformly decode said solution to obtain a sequence of sets $\langle Y_{b}\rangle_{b\in L}$. By transfinite induction along $L$ using Lemmas 9.8 and 9.18, $\langle T^{b}_{n}\rangle_{n}$ is good for all $b\in L$, and $\langle Y_{b}\rangle_{b\in L}$ is the jump hierarchy on $L$ which starts with $A$. ∎ What if we want to use the forward reduction from $\mathsf{ATR}$ to $\mathsf{KDT}$ in our reduction from $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$? If the given $\mathsf{ATR}_{2}$-instance $\mathcal{L}$ is ill-founded, things could go wrong in the “effective transfinite recursion”. Specifically, there may be some $a\in L$ and $i\in\mathbb{N}$ such that $T^{a}_{i}$ is not good, i.e., there may be some $r,s\in\mathbb{N}^{<\mathbb{N}}$ and some König covers of $r^{\frown}T^{a}_{i}$ and $s^{\frown}T^{a}_{i}$ which code different bits. In order to salvage the situation, we will modify the backward reduction to check for such inconsistencies. If they are present, we use them to compute an infinite $<_{L}$-descending sequence. In order to detect inconsistencies, for each $b\in L$ and $n\in\mathbb{N}$, we need to keep track of the internal structure of $(C^{b}_{n},M^{b}_{n})$ in the $\mathsf{KDT}$-solution.
According to Lemma 9.18 and our construction of $T^{b}_{n}$, for each $a<_{L}b$ and $i\in\mathbb{N}$, there is a set of nodes $R^{a}_{n,i}$ in $(T^{b}_{n})^{\ast}$ such that: • for each $r\in R^{a}_{n,i}$, the subtree of $T^{b}_{n}$ above $r$ is $r^{\frown}T^{a}_{i}$; • if for each $i$, either $R^{a}_{n,i}\subseteq C^{b}_{n}$ or $R^{a}_{n,i}\subseteq\overline{C^{b}_{n}}$, then $(C^{b}_{n},M^{b}_{n})$ codes the $n^{\text{th}}$ bit of $(\bigoplus_{a}Y_{a})^{\prime}$, where each $Y_{a}$ satisfies the assumption in Lemma 9.18(3). The “consistent” case is if for each $a<_{L}b$ and $i\in\mathbb{N}$, $(C^{a}_{i},M^{a}_{i})$ codes the same bit as the restriction of $(C^{b}_{n},M^{b}_{n})$ to the subtree above each $r$ in $R^{a}_{n,i}$. (This must happen if each $T^{a}_{i}$ is good, but it could also happen “by chance”.) We will show that this ensures that for each $a$ and $i$, either $R^{a}_{n,i}\subseteq C^{b}_{n}$ or $R^{a}_{n,i}\subseteq\overline{C^{b}_{n}}$. Furthermore, for each $a$, the $Y_{a}$ coded by $\langle T^{a}_{i}\rangle_{i}$ must satisfy the assumptions in Lemma 9.18(3), so we correctly calculate the next column of our jump hierarchy. On the other hand, what if there are some $a<_{L}b$, $i\in\mathbb{N}$, and $r_{0}\in R^{a}_{n,i}$ such that $(C^{a}_{i},M^{a}_{i})$ codes a different bit from the restriction of $(C^{b}_{n},M^{b}_{n})$ to the subtree above $r_{0}$? Then consider $T^{a}_{i}$ and the subtree of $T^{b}_{n}$ above $r_{0}$.
The latter tree is a copy of $T^{a}_{i}$ (specifically, it is $r_{0}{}^{\frown}T^{a}_{i}$), yet its König cover codes a different bit from that of $T^{a}_{i}$, so we can use Lemma 9.18 to find a subtree of $T^{a}_{i}$ and a subtree of $T^{b}_{n}$ above $r_{0}$ (both subtrees are copies of $T^{a_{0}}_{i_{0}}$ for some $a_{0}<_{L}a$, $i_{0}\in\mathbb{N}$) on which appropriate restrictions of $(C^{a}_{i},M^{a}_{i})$ and $(C^{b}_{n},M^{b}_{n})$ code different bits. By repeating this process, we can obtain an infinite $<_{L}$-descending sequence. In order to formalize the above arguments, we organize the above recursive process using the sets $R^{b,a}_{n,i}$, defined as follows: Definition 9.20. Fix a labeled linear ordering $\mathcal{L}$ and use the forward reduction in Theorem 9.19 to compute $\langle\langle T^{b}_{n}\rangle_{n}\rangle_{b\in L}$. For each $n$ and $b$, fix a König cover $(C^{b}_{n},M^{b}_{n})$ of $T^{b}_{n}$. For each $a<_{L}b$ and each $i,n\in\mathbb{N}$, we define a set of nodes $R^{b,a}_{n,i}$ in $T^{b}_{n}$ as follows: $R^{b,a}_{n,i}$ is the set of all $r$ for which there exist $j\geq 1$ and $$\begin{array}{cccccccl}\langle\rangle=r_{0}&\prec&r_{1}&\prec&\cdots&\prec&r_{j}=r&\text{in }T^{b}_{n}\\ b=c_{0}&>_{L}&c_{1}&>_{L}&\cdots&>_{L}&c_{j}=a&\text{in }L\\ n=i_{0}&,&i_{1}&,&\cdots&,&i_{j}=i&\text{in }\mathbb{N}\end{array}$$ such that for all $0<l\leq j$, $r_{l}$ lies in $R^{c_{l}}_{i_{l-1},i_{l}}$ as calculated by applying Lemma 9.18 to $(C^{b}_{n},M^{b}_{n})$ restricted to the subtree of $T^{b}_{n}$ above $r_{l-1}$.
We make two easy observations about $R^{b,a}_{n,i}$: (1) By induction on $l$, $r_{l}$ lies in $(T^{b}_{n})^{\ast}$ and the subtree of $T^{b}_{n}$ above $r_{l}$ is $r_{l}{}^{\frown}T^{c_{l}}_{i_{l}}$. In particular, for each $r\in R^{b,a}_{n,i}$, $r\in(T^{b}_{n})^{\ast}$ and the subtree of $T^{b}_{n}$ above $r$ is $r^{\frown}T^{a}_{i}$. (2) $R^{b,a}_{n,i}$ is uniformly c.e. in $\mathcal{L}\oplus(C^{b}_{n},M^{b}_{n})$. (A detailed analysis shows that $R^{b,a}_{n,i}$ is uniformly computable in $\mathcal{L}\oplus(C^{b}_{n},M^{b}_{n})$, but we do not need that.) With the $R^{b,a}_{n,i}$’s in hand, we can make precise what we mean by consistency: Definition 9.21. In the same context as the previous definition, we say that $a\in L$ is consistent if for all $i\in\mathbb{N}$: $$\text{the root of }T^{a}_{i}\in C^{a}_{i}\;\Rightarrow\;R^{b,a}_{n,i}\subseteq C^{b}_{n}\text{ for all }b>_{L}a,\ n\in\mathbb{N}$$ $$\text{the root of }T^{a}_{i}\notin C^{a}_{i}\;\Rightarrow\;R^{b,a}_{n,i}\subseteq\overline{C^{b}_{n}}\text{ for all }b>_{L}a,\ n\in\mathbb{N}.$$ Observe that if $T^{a}_{i}$ is good for all $i$, then observation (1) above implies that $a$ is consistent, regardless of what $\langle(C^{b}_{n},M^{b}_{n})\rangle_{b,n}$ may be. However, unless $L$ is well-founded, we cannot be certain that $T^{a}_{i}$ is good. Consistency is a weaker condition which suffices to ensure that we can still obtain a jump hierarchy on $L$, as we show in Corollary 9.24.
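Observation (2) makes inconsistency a c.e. condition: to certify that $a$ is inconsistent it suffices to enumerate a single witness $r\in R^{b,a}_{n,i}$ whose membership in $C^{b}_{n}$ disagrees with the bit given by the root of $T^{a}_{i}$. A minimal sketch of this bounded search, where `enumerate_R` (standing in for the enumeration from observation (2)), `root_in_C_a`, and `in_C` are hypothetical oracle parameters we supply, not objects defined in the text:

```python
# Bounded search for a witness that a is inconsistent (Definition 9.21).
# Hypothetical oracle parameters (not from the paper):
#   greater_than_a   -- a finite list of elements b >_L a to inspect,
#   root_in_C_a(i)   -- True iff the root of T^a_i lies in C^a_i,
#   enumerate_R(b, a, n, i, s) -- elements of R^{b,a}_{n,i} found in s steps,
#   in_C(b, n, r)    -- True iff r lies in C^b_n.

def find_inconsistency(a, greater_than_a, root_in_C_a, enumerate_R, in_C,
                       max_steps):
    """Search (up to max_steps stages) for a witness that a is inconsistent."""
    for s in range(max_steps):
        for b in greater_than_a:
            for n in range(s):
                for i in range(s):
                    expected = root_in_C_a(i)
                    for r in enumerate_R(b, a, n, i, s):
                        if in_C(b, n, r) != expected:
                            return (b, n, i, r)  # witness of inconsistency
    return None  # no witness found within max_steps stages
```

Dovetailing this search over all $a\in L$ is the c.e. inconsistency check invoked (via $\mathsf{LPO}$) in the backward reduction of Theorem 9.25.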
We will also show that inconsistency cannot come from nowhere, i.e., if $b_{0}$ is inconsistent, then there is some $b_{1}<_{L}b_{0}$ which is inconsistent, and so on, yielding an infinite $<_{L}$-descending sequence of inconsistent elements. Furthermore, consistency is easy to check: by observation (2) above, whether $a$ is consistent is $\Pi^{0}_{1}$ (in $\mathcal{L}\oplus\langle(C^{b}_{n},M^{b}_{n})\rangle_{b,n}$). We prove two lemmas that will yield the desired result when combined: Lemma 9.22. Fix König covers $\langle(C^{b}_{n},M^{b}_{n})\rangle_{b,n}$ for $\langle T^{b}_{n}\rangle_{b,n}$. Now fix $n$ and $b$. Suppose that for each $a<_{L}b$, the set $Y_{a}\subseteq\mathbb{N}$ is such that $$i\in Y_{a}\;\Rightarrow\;R^{b,a}_{n,i}\subseteq C^{b}_{n},\qquad i\notin Y_{a}\;\Rightarrow\;R^{b,a}_{n,i}\subseteq\overline{C^{b}_{n}}.$$ Then $n\in\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}$ if and only if the root of $T^{b}_{n}$ lies in $C^{b}_{n}$. Proof. Recall that $\langle T^{b}_{n}\rangle_{n\in\mathbb{N}}$ is computed by applying Lemma 9.18 to $\langle\langle T^{a}_{n}\rangle_{n\in\mathbb{N}}\rangle_{a<_{L}b}$. By definition of $R^{b,a}_{n,i}$, $R^{a}_{n,i}$ (as obtained from Lemma 9.18) is a subset of $R^{b,a}_{n,i}$ (this is the case $j=1$). So for all $a<_{L}b$, $$i\in Y_{a}\;\Rightarrow\;R^{a}_{n,i}\subseteq R^{b,a}_{n,i}\subseteq C^{b}_{n},\qquad i\notin Y_{a}\;\Rightarrow\;R^{a}_{n,i}\subseteq R^{b,a}_{n,i}\subseteq\overline{C^{b}_{n}}.$$ The desired result follows from Lemma 9.18(3). ∎ Lemma 9.23. Fix König covers $\langle(C^{c}_{m},M^{c}_{m})\rangle_{c,m}$ for $\langle T^{c}_{m}\rangle_{c,m}$. Now fix $m$ and $b<_{L}c$.
Suppose that for each $a<_{L}b$, the set $Y_{a}\subseteq\mathbb{N}$ is such that $$i\in Y_{a}\;\Rightarrow\;R^{c,a}_{m,i}\subseteq C^{c}_{m},\qquad i\notin Y_{a}\;\Rightarrow\;R^{c,a}_{m,i}\subseteq\overline{C^{c}_{m}}.$$ Then for all $n\in\mathbb{N}$, $$n\in\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}\;\Rightarrow\;R^{c,b}_{m,n}\subseteq C^{c}_{m},\qquad n\notin\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}\;\Rightarrow\;R^{c,b}_{m,n}\subseteq\overline{C^{c}_{m}}.$$ Proof. If $R^{c,b}_{m,n}$ is empty, then the desired result is vacuously true. Otherwise, consider $r\in R^{c,b}_{m,n}$. As we observed right after Definition 9.20, $r\in(T^{c}_{m})^{\ast}$ and the subtree of $T^{c}_{m}$ above $r$ is $r^{\frown}T^{b}_{n}$. $T^{b}_{n}$ was constructed by applying Lemma 9.18 to $\langle\langle T^{a}_{n}\rangle_{n\in\mathbb{N}}\rangle_{a<_{L}b}$, so we can use the restriction of $(C^{c}_{m},M^{c}_{m})$ to $r^{\frown}T^{b}_{n}$ to compute sets $\langle R^{a}_{n,i}\rangle_{a<_{L}b,i\in\mathbb{N}}$ of nodes in $(r^{\frown}T^{b}_{n})^{\ast}$ satisfying the conclusions of Lemma 9.18. We claim that for all $a<_{L}b$, $R^{a}_{n,i}\subseteq R^{c,a}_{m,i}$. Proof of claim. Consider $s\in R^{a}_{n,i}$. We know that $s$ extends $r$ and $r\in R^{c,b}_{m,n}$.
Fix $j\geq 1$ and $$\begin{array}{cccccccl}\langle\rangle=r_{0}&\prec&r_{1}&\prec&\cdots&\prec&r_{j}=r&\text{in }T^{c}_{m}\\ c=c_{0}&>_{L}&c_{1}&>_{L}&\cdots&>_{L}&c_{j}=b&\text{in }L\\ m=i_{0}&,&i_{1}&,&\cdots&,&i_{j}=n&\text{in }\mathbb{N}\end{array}$$ which witness that $r\in R^{c,b}_{m,n}$. Then we can append one column: $$\begin{array}{cccccccccl}\langle\rangle=r_{0}&\prec&r_{1}&\prec&\cdots&\prec&r_{j}=r&\prec&r_{j+1}=s&\text{in }T^{c}_{m}\\ c=c_{0}&>_{L}&c_{1}&>_{L}&\cdots&>_{L}&c_{j}=b&>_{L}&c_{j+1}=a&\text{in }L\\ m=i_{0}&,&i_{1}&,&\cdots&,&i_{j}=n&,&i_{j+1}=i&\text{in }\mathbb{N}\end{array}$$ Since $s\in R^{a}_{n,i}$, this witnesses that $s\in R^{c,a}_{m,i}$. ∎ By our claim, we have that $$i\in Y_{a}\;\Rightarrow\;R^{a}_{n,i}\subseteq R^{c,a}_{m,i}\subseteq C^{c}_{m},\qquad i\notin Y_{a}\;\Rightarrow\;R^{a}_{n,i}\subseteq R^{c,a}_{m,i}\subseteq\overline{C^{c}_{m}}.$$ By Lemma 9.18(3), $n\in\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}$ if and only if $r\in C^{c}_{m}$. This concludes the proof. ∎ Putting the previous two lemmas together, we obtain Corollary 9.24. Fix König covers $\langle(C^{b}_{n},M^{b}_{n})\rangle_{b,n}$ for $\langle T^{b}_{n}\rangle_{b,n}$. For each $b\in L$, define $Y_{b}$ by decoding $\langle(C^{b}_{n},M^{b}_{n})\rangle_{n}$, i.e., $$Y_{b}=\{n\in\mathbb{N}:\text{the root of }T^{b}_{n}\text{ lies in }C^{b}_{n}\}.$$ If all $a<_{L}b$ are consistent, then $b$ is consistent and $Y_{b}=\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}$. Proof. $0_{L}$ is consistent because every $T^{0_{L}}_{n}$ is good (Lemma 9.8). Consider now any $b>_{L}0_{L}$.
Every $a<_{L}b$ is consistent, so for all $a<_{L}b$: $$i\in Y_{a}\;\Rightarrow\;R^{c,a}_{m,i}\subseteq C^{c}_{m}\text{ for all }c>_{L}a,\ m\in\mathbb{N}$$ $$i\notin Y_{a}\;\Rightarrow\;R^{c,a}_{m,i}\subseteq\overline{C^{c}_{m}}\text{ for all }c>_{L}a,\ m\in\mathbb{N}.$$ By Lemma 9.22, $Y_{b}=\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}$. Also, by Lemma 9.23, for all $n\in\mathbb{N}$: $$n\in\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}\;\Rightarrow\;R^{c,b}_{m,n}\subseteq C^{c}_{m}\text{ for all }c>_{L}b,\ m\in\mathbb{N}$$ $$n\notin\left(\bigoplus_{a<_{L}b}Y_{a}\right)^{\prime}\;\Rightarrow\;R^{c,b}_{m,n}\subseteq\overline{C^{c}_{m}}\text{ for all }c>_{L}b,\ m\in\mathbb{N}.$$ It follows that $b$ is consistent. ∎ We are finally ready to construct a reduction from $\mathsf{ATR}_{2}$ to $\mathsf{KDT}$. Theorem 9.25. $\mathsf{ATR}_{2}\leq_{W}\mathsf{LPO}\ast\mathsf{KDT}$. In particular, $\mathsf{ATR}_{2}\leq_{c}\mathsf{KDT}$ and $\mathsf{ATR}_{2}\leq_{W}^{\mathrm{arith}}\mathsf{KDT}$. Proof. Given a labeled linear ordering $\mathcal{L}$ (we may assume that $L$ is labeled by Proposition 8.10) and a set $A$, we apply the forward reduction in Theorem 9.19 to produce some $\mathsf{KDT}$-instance $\langle T^{b}_{n}\rangle_{b,n}$. For the backward reduction, given a $\mathsf{KDT}$-solution $\langle\langle(C^{b}_{n},M^{b}_{n})\rangle_{n}\rangle_{b\in L}$, we start by uniformly decoding it to obtain a sequence of sets $\langle Y_{b}\rangle_{b\in L}$. Next, since $R^{b,a}_{n,i}$ is uniformly c.e. in $\mathcal{L}\oplus(C^{b}_{n},M^{b}_{n})$, whether some $a\in L$ is inconsistent is uniformly c.e. in $\mathcal{L}\oplus\langle(C^{b}_{n},M^{b}_{n})\rangle_{b,n}$. Therefore we can use $\mathsf{LPO}$ (Definition 2.6) to determine whether every $a\in L$ is consistent.
If so, by Corollary 9.24, $\langle Y_{b}\rangle_{b\in L}$ is a jump hierarchy on $L$ which starts with $A$. If not, by Corollary 9.24, every inconsistent element is preceded by some other inconsistent element. Since whether some $a\in L$ is inconsistent is uniformly c.e. in $\mathcal{L}\oplus\langle(C^{b}_{n},M^{b}_{n})\rangle_{b,n}$, we can use it to compute an infinite $<_{L}$-descending sequence of inconsistent elements. ∎ 9.2. Reducing $\mathsf{KDT}$ to $\mathsf{ATR}_{2}$ This section presumes an understanding of the proofs in Simpson [21]. First, he proved in $\mathsf{ATR}_{0}$ that for any set $G$, there is a countable coded $\omega$-model of $\Sigma^{1}_{1}$-$\mathsf{AC}$ which contains $G$. His proof [21, Lemma 1] also shows that Lemma 9.26. If $\langle X_{a}\rangle_{a\in L}$ is a jump hierarchy on $L$ and $I$ is a proper cut of $L$ which is not computable in $\langle X_{a}\rangle_{a\in L}$, then the countable coded $\omega$-model $\mathcal{M}=\{A:\exists a\in I(A\leq_{T}X_{a})\}$ satisfies $\Sigma^{1}_{1}$-$\mathsf{AC}$. Sketch of proof. Given an instance $\varphi(n,Y)$ of $\Sigma^{1}_{1}$-$\mathsf{AC}$, for each $n$, let $a_{n}\in I$ be $<_{L}$-least such that $X_{a_{n}}$ computes a solution to $\varphi(n,\cdot)$. Since $I$ is a proper cut, for any $a\in I$ and $b\in L\backslash I$, $X_{b}$ computes every $X_{a}$-hyperarithmetic set. Therefore if $b\in L\backslash I$, then $X_{b}$ computes $(a_{n})_{n\in\omega}$. Hence $(a_{n})_{n\in\omega}$ is not cofinal in $I$, otherwise $I$ would be computable in $\langle X_{a}\rangle_{a\in L}$. Fix $b\in I$ which bounds $(a_{n})_{n\in\omega}$. Then there is a $\Sigma^{1}_{1}$-$\mathsf{AC}$-solution to $\varphi$ which is arithmetic in $X_{b}$ (and hence lies in $\mathcal{M}$), as desired. ∎ We now adapt [21]’s proof of König’s duality theorem in $\mathsf{ATR}_{0}$ to show that Theorem 9.27. $\mathsf{KDT}$ is arithmetically Weihrauch reducible to $\mathsf{ATR}_{2}$. Proof. 
Given a bipartite graph $G$, we would like to use $\mathsf{ATR}_{2}$ to produce a countable coded $\omega$-model of $\Sigma^{1}_{1}$-$\mathsf{AC}$ which contains $G$. In order to do that, we define a $G$-computable linear ordering (i.e., an instance of $\mathsf{ATR}_{2}$) using the recursion theorem, as follows. First define a predicate $P(G,e,X)$ to hold if $X$ is a jump hierarchy on $L^{G}_{e}$ which starts with $G$ and does not compute any proper cut in $L^{G}_{e}$. Notice that $P(G,e,X)$ is arithmetic. The total $G$-computable function to which we apply the recursion theorem is as follows. Given any $G$-computable linear ordering $L^{G}_{e}$, consider the $G$-computable tree $H^{G}_{e}$ whose paths (if any) are solutions to $P(G,e,\cdot)$ (with Skolem functions). Then output an index for the Kleene-Brouwer ordering of $H^{G}_{e}$. By the recursion theorem, we can $G$-uniformly compute a fixed point $e$ for the above computable transformation. Observe that the following are (consecutively) equivalent: (1) $L^{G}_{e}$ has an infinite $G$-hyperarithmetic descending sequence; (2) $H^{G}_{e}$ has a $G$-hyperarithmetic path; (3) $P(G,e,\cdot)$ has a $G$-hyperarithmetic solution, i.e., there is a $G$-hyperarithmetic jump hierarchy on $L^{G}_{e}$ which starts with $G$ and does not compute any proper cut in $L^{G}_{e}$; (4) $L^{G}_{e}$ is well-founded. (The only nontrivial implication is (3) $\Rightarrow$ (4), which holds because no jump hierarchy on a $G$-computable ill-founded linear ordering can be $G$-hyperarithmetic.) But (1) and (4) contradict each other, so (1)–(4) are all false. Hence $L^{G}_{e}$ must be ill-founded and cannot have any infinite $G$-hyperarithmetic descending sequence. It follows that every infinite $L^{G}_{e}$-descending sequence defines a proper cut in $L^{G}_{e}$. 
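The Kleene-Brouwer ordering used above is a concrete syntactic object: finite sequences are compared by "proper extensions first, then leftmost difference", so that an infinite path through the tree yields an infinite descending sequence. A minimal sketch for finite sequences (the function name is ours, purely for illustration):

```python
def kb_less(s, t):
    """Kleene-Brouwer comparison of finite sequences of naturals:
    s precedes t iff s properly extends t, or s is smaller at the
    leftmost position where s and t differ."""
    for a, b in zip(s, t):
        if a != b:
            return a < b          # leftmost difference decides
    return len(s) > len(t)        # proper extensions come first

# On a well-founded tree this linearization is a well-order; an
# infinite path through an ill-founded tree gives an infinite
# KB-descending sequence, which is what the argument above exploits.
assert kb_less((0, 1), (0,))      # a proper extension of (0,) precedes it
assert kb_less((0, 2), (1,))      # they differ at position 0, and 0 < 1
```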
Next, we show that given an $\mathsf{ATR}_{2}$-solution to $L^{G}_{e}$, we can arithmetically uniformly compute some proper cut $I$ in $L^{G}_{e}$ and a solution to $P(G,e,\cdot)$, i.e., a jump hierarchy $\langle X_{a}\rangle_{a\in L^{G}_{e}}$ which does not compute any proper cut in $L^{G}_{e}$. Then by Lemma 9.26, the countable coded $\omega$-model of all sets which are computable in some $X_{a}$, $a\in I$, satisfies $\Sigma^{1}_{1}$-$\mathsf{AC}$ as desired. If $\mathsf{ATR}_{2}$ gives us an infinite $L^{G}_{e}$-descending sequence $S$, then we can use $S$ to arithmetically uniformly compute a proper cut in $L^{G}_{e}$. Since $L^{G}_{e}$ is the Kleene-Brouwer ordering of $H^{G}_{e}$, we can also use $S$ to arithmetically uniformly compute a path on $H^{G}_{e}$. From said path, we can uniformly compute a solution to $P(G,e,\cdot)$. If $\mathsf{ATR}_{2}$ gives us a jump hierarchy $X$ on $L^{G}_{e}$, we show how to arithmetically uniformly compute an infinite $L^{G}_{e}$-descending sequence. We may then proceed as in the previous case. First arithmetically uniformly check whether $X$ computes any proper cut in $L^{G}_{e}$. If so, we can arithmetically uniformly find an index for such a computation, and produce a proper cut in $L^{G}_{e}$. From that, we may uniformly compute an infinite $L^{G}_{e}$-descending sequence. If not, then $X$ is a solution to $P(G,e,\cdot)$, so we can arithmetically uniformly compute a path on $H^{G}_{e}$, and hence an infinite $L^{G}_{e}$-descending sequence. We have produced a countable coded $\omega$-model of $\Sigma^{1}_{1}$-$\mathsf{AC}$ which contains the given graph $G$. Call it $\mathcal{M}$. With $\mathcal{M}$ in hand, we follow the rest of Simpson’s proof in order to obtain a $\mathsf{KDT}$-solution to $G$. His idea is to “relativize” Aharoni, Magidor, Shore’s [2] proof of $\mathsf{KDT}$ in $\Pi^{1}_{1}$-$\mathsf{CA}_{0}$ to $\mathcal{M}$. In the following, we will often write $\mathcal{M}$ instead of “the code of $\mathcal{M}$”. 
Let $G=(X,Y,E)$. (If we are not given a partition $(X,Y)$ of the vertex set of $G$ witnessing that $G$ is bipartite, we can arithmetically uniformly compute such a partition.) Recall a definition from [2]: if $A\subseteq X$, then the demand set is defined by $$D_{G}(A)=\{y\in Y:\forall x\,(xEy\rightarrow x\in A)\}.$$ Note that if $A\in\mathcal{M}$, then $D_{G}(A)$ is uniformly arithmetic in $\mathcal{M}$ and the code of $A$. Next, consider the set of pairs $$S=\{(A,F)\in\mathcal{M}:A\subseteq X\text{ and }F:A\to D_{G}(A)\text{ is a matching}\}.$$ (Note that $A$ and $F$ may be infinite.) $S$ (specifically the set of codes of $(A,F)\in S$) is arithmetic over $\mathcal{M}$. So is the set $\bigcup\{A:(A,F)\in S\}\subseteq X$, which we denote by $A^{\ast}$. Next, for each $x\in A^{\ast}$, we define $F^{\ast}(x)$ to be $F(x)$, where $(A,F)$ is the least (with respect to the enumeration of $\mathcal{M}$) pair in $S$ such that $x\in A$. Then $F^{\ast}:A^{\ast}\to D_{G}(A^{\ast})$ is a matching ([21, Lemma 2]). Note that $F^{\ast}$ is arithmetic over $\mathcal{M}$. Next, define $X^{\ast}=X-A^{\ast}$ and $Y^{\ast}=Y-D_{G}(A^{\ast})$. Both sets are arithmetic over $\mathcal{M}$. Simpson then constructs (by recursion along $\omega$) a matching $H$ from $Y^{\ast}$ to $X^{\ast}$ which is arithmetic in $G\oplus\mathcal{M}$, as follows. Each step of the recursion proceeds by searching for a pair of adjacent vertices (one in $X^{\ast}$, one in $Y^{\ast}$) whose removal does not destroy goodness: a cofinite induced subgraph $G^{\prime}$ (with vertices partitioned into $X^{\prime}\subseteq X$ and $Y^{\prime}\subseteq Y$) of $G$ is good if for any $A\subseteq X^{\prime}$ in $\mathcal{M}$ and any matching $F:A\to D_{G^{\prime}}(A)$ in $\mathcal{M}$, $D_{G^{\prime}}(A)-\mathrm{range}(F)$ and $Y^{\ast}$ are disjoint. (This definition is not related to Definition 9.7.) This recursion eventually matches every vertex in $Y^{\ast}$ to some vertex in $X^{\ast}$ ([21, Lemmas 3, 5]).
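On a finite bipartite graph the demand set is a directly computable first-order operation. The following toy sketch (our own illustration; the function name and the example graph are not from [2] or [21]) shows the definition in action:

```python
def demand_set(Y, E, A):
    """D_G(A): the vertices y in Y all of whose neighbours lie in A.
    E is a set of edges (x, y) with x on the left side and y in Y."""
    return {y for y in Y
            if all(x in A for (x, y2) in E if y2 == y)}

# Tiny bipartite graph: x0 -- y0, x0 -- y1, x1 -- y1.
Y = {'y0', 'y1'}
E = {('x0', 'y0'), ('x0', 'y1'), ('x1', 'y1')}
assert demand_set(Y, E, {'x0'}) == {'y0'}             # y1 also meets x1
assert demand_set(Y, E, {'x0', 'x1'}) == {'y0', 'y1'}
```

In the countable setting of the proof this set is of course only arithmetic in the data, not computable, but the quantifier structure is exactly the one checked here.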
The property of goodness (where each $G^{\prime}$ is encoded by the finite set of vertices in $G\backslash G^{\prime}$) is arithmetic over $\mathcal{M}$. Hence the resulting matching $H$ is arithmetic over $\mathcal{M}$. Finally, we arrive at a $\mathsf{KDT}$-solution to $G$: $F^{\ast}\cup H$ is a matching in $G$, with corresponding dual cover $A^{\ast}\cup Y^{\ast}$. $(F^{\ast}\cup H,A^{\ast}\cup Y^{\ast})$ can be arithmetically uniformly computed from $\mathcal{M}$. ∎ Using Theorems 9.25 and 9.27, we conclude that Corollary 9.28. $\mathsf{ATR}_{2}$ and $\mathsf{KDT}$ are arithmetically Weihrauch equivalent. References [1] Ron Aharoni. König’s duality theorem for infinite bipartite graphs. J. London Math. Soc. (2), 29(1):1–12, 1984. [2] Ron Aharoni, Menachem Magidor, and Richard A. Shore. On the strength of König’s duality theorem for infinite bipartite graphs. J. Combin. Theory Ser. B, 54(2):257–290, 1992. [3] Dwight R. Bean. Effective coloration. J. Symbolic Logic, 41(2):469–480, 1976. [4] Vasco Brattka, Matthew de Brecht, and Arno Pauly. Closed choice and a uniform low basis theorem. Ann. Pure Appl. Logic, 163(8):986–1008, 2012. [5] Vasco Brattka, Guido Gherardi, and Alberto Marcone. The Bolzano-Weierstrass theorem is the jump of weak König’s lemma. Ann. Pure Appl. Logic, 163(6):623–655, 2012. [6] Vasco Brattka, Guido Gherardi, and Arno Pauly. Weihrauch complexity in computable analysis. arXiv:1707.03202, 2018. [7] Vasco Brattka and Arno Pauly. On the algebraic structure of Weihrauch degrees. Log. Methods Comput. Sci., 14(4):Paper No. 4, 36, 2018. [8] Keh Hsun Chen. Recursive well-founded orderings. Ann. Math. Logic, 13(2):117–147, 1978. [9] Damir D. Dzhafarov. Cohesive avoidance and strong reductions. Proc. Amer. Math. Soc., 143(2):869–876, 2015. [10] Harvey M. Friedman and Jeffry L. Hirst. Weak comparability of well orderings and reverse mathematics. Ann. Pure Appl. Logic, 47(1):11–29, 1990. [11] Noam Greenberg and Antonio Montalbán. 
Ranked structures and arithmetic transfinite recursion. Trans. Amer. Math. Soc., 360(3):1265–1307, 2008. [12] Joseph Harrison. Recursive pseudo-well-orderings. Trans. Amer. Math. Soc., 131:526–543, 1968. [13] Jeffry Lynn Hirst. Combinatorics in Subsystems of Second Order Arithmetic. ProQuest LLC, Ann Arbor, MI, 1987. Thesis (Ph.D.)–The Pennsylvania State University. [14] C. G. Jockusch, Jr. and T. G. McLaughlin. Countable retracing functions and $\Pi^{0}_{2}$ predicates. Pacific J. Math., 30:67–93, 1969. [15] Takayuki Kihara, Alberto Marcone, and Arno Pauly. Searching for an analogue of ATR in the Weihrauch lattice. arXiv:1812.01549, 2018. [16] Richard Laver. On Fraïssé’s order type conjecture. Ann. of Math. (2), 93:89–111, 1971. [17] Arno Pauly. Computability on the countable ordinals and the Hausdorff-Kuratowski theorem. arXiv:1501.00386, 2015. [18] Klaus-Peter Podewski and Karsten Steffens. Injective choice functions for countable families. J. Combinatorial Theory Ser. B, 21(1):40–46, 1976. [19] Gerald E. Sacks. Higher recursion theory. Perspectives in Mathematical Logic. Springer-Verlag, Berlin, 1990. [20] Richard A. Shore. On the strength of Fraïssé’s conjecture. In Logical methods (Ithaca, NY, 1992), volume 12 of Progr. Comput. Sci. Appl. Logic, pages 782–813. Birkhäuser Boston, Boston, MA, 1993. [21] Stephen G. Simpson. On the strength of König’s duality theorem for countable bipartite graphs. J. Symbolic Logic, 59(1):113–123, 1994. [22] Stephen G. Simpson. Subsystems of second order arithmetic. Perspectives in Logic. Cambridge University Press, Cambridge; Association for Symbolic Logic, Poughkeepsie, NY, second edition, 2009.
On classification of Mori contractions: Elliptic curve case Yuri G. Prokhorov [email protected] Department of Mathematics (Algebra Section), Moscow University, 117234 Moscow, Russia Abstract. We study Mori’s three-dimensional contractions $f\colon X\to Z$. It is proved that on the “good” model $(\overline{X},\overline{S})$ there are no elliptic components of $\operatorname{Diff}_{\overline{S}}$ with coefficients $\geq 6/7$. This work was partially supported by grants INTAS-OPEN-97-2072 and RFFI 99-01-01132 1. Introduction 1.1. Let $f\colon X\to Z\ni o$ be an extremal log terminal contraction over ${\mathbb{C}}$, that is: $X$ is a normal algebraic ${\mathbb{Q}}$-factorial threefold with at worst log terminal singularities, $f$ is a projective morphism such that $f_{*}{\mathcal{O}}_{X}={\mathcal{O}}_{Z}$, $\rho(X/Z)=1$ and $-K_{X}$ is $f$-ample. We assume that $\dim(Z)\geq 1$ and regard $(Z\ni o)$ as a sufficiently small Zariski neighborhood. Such contractions naturally appear in the Minimal Model Program [KMM]. Denote by $\operatorname{Exc}(f)\subset X$ the exceptional locus of $f$. According to the general principle introduced by Shokurov [Sh] all such contractions can be divided into two classes: exceptional and nonexceptional. A contraction $f\colon X\to Z\ni o$ as in 1.1 is said to be exceptional if for any complement $K_{X}+D$ near $f^{-1}(o)$ there is at most one divisor $E$ of the function field ${\mathcal{K}}(X)$ with discrepancy $a(E,D)=-1$. The following is a particular case of the theorem proved in [Sh1] and [P2] (see also [PSh]). Theorem 1.2. Notation as above. Assume that $f\colon X\to Z\ni o$ is nonexceptional. Then for some $n\in\{1,2,3,4,6\}$ there is a member $F\in|-nK_{X}|$ such that the pair $\left(X,\frac{1}{n}F\right)$ is log canonical near $f^{-1}(o)$. Thus nonexceptional contractions have a “good” member in $|-nK_{X}|$, $n\in\{1,2,3,4,6\}$. The most important case is the case of Mori contractions, i. e.
when $X$ has only terminal singularities: Conjecture 1.3. Notation as in 1.1. Assume that $X$ has at worst terminal singularities. Then $f\colon X\to Z\ni o$ is nonexceptional. Similar to the classification of three-dimensional terminal singularities, this fact should be the key point in the classification of Mori contractions. For example it is very helpful in the study of three-dimensional flips [K], [Mo], [Sh]. Methods of [Sh1], [P2], [PSh] use an inductive procedure for constructing divisors in $|-nK|$. This procedure works on a so-called good model of $X$ over $Z$. Roughly speaking, a good model is a birational model $\overline{Y}$ equipped with a prime divisor $\overline{S}$ such that the pair $(\overline{Y},\overline{S})$ is plt and $-(K_{\overline{Y}}+\overline{S})$ is nef and big over $Z$. If $f$ is exceptional, then $\overline{S}$ is a projective surface. Adjunction Formula 2.1 gives us that $(\overline{S},\operatorname{Diff}_{\overline{S}})$ is a klt log del Pezzo surface. Moreover, exceptionality of $f$ implies that the projective log pair $(\overline{S},\operatorname{Diff}_{\overline{S}})$ is exceptional; by definition this means that any complement $K_{\overline{S}}+\operatorname{Diff}_{\overline{S}}^{+}$ is klt [P2, Prop. 2.4]. Thus our construction gives the following correspondence: $$\left\{\begin{array}{l}\text{Exceptional contractions}\\ f\colon X\to Z\text{ as in 1.1}\end{array}\right\}\quad\longrightarrow\quad\left\{\begin{array}{l}\text{Exceptional log del Pezzos}\\ (\overline{S},\Delta=\operatorname{Diff}_{\overline{S}})\end{array}\right\}$$ 1.4.
For exceptional log del Pezzos $(\overline{S},\Delta)$ Shokurov introduced the following invariant: $$\delta=\delta(\overline{S},\Delta)=\text{number of divisors }E\text{ of }{\mathcal{K}}(\overline{S})\text{ with discrepancy }a(E,\Delta)\leq-6/7.$$ He proved that $\delta\leq 2$, classified log surfaces with $\delta=2$ and showed that in the case $\delta=1$ the (unique) divisor $E$ with $a(E,\Delta)\leq-6/7$ is represented by a curve of arithmetic genus $\leq 1$ (see [Sh1], [P3]). The aim of this short note is to exclude the case of Mori contractions with $\delta=1$ and elliptic curve $E$: Theorem 1.5. Notation as in 1.1. Assume that ${\delta}(\overline{S},\operatorname{Diff}_{\overline{S}})=1$. Write $\operatorname{Diff}_{\overline{S}}=\sum\delta_{i}\overline{\Delta}_{i}$, where $\overline{\Delta}_{i}$ are irreducible curves. If $\delta_{i_{0}}\geq 6/7$ for some $i_{0}$, then $p_{a}(\overline{\Delta}_{i_{0}})=0$. The following example shows that Theorem 1.5 cannot be generalized to the klt case. Example 1.6 ([IP]). Let $(Z\ni o)$ be the hypersurface canonical singularity $x_{1}^{2}+x_{2}^{3}+x_{3}^{11}+x_{4}^{12}=0$ and let $f\colon X\to Z$ be the weighted blowup with weights $(66,44,12,11)$. Then $f$ satisfies the conditions of 1.1 and we have case 3.2. It was computed in [IP] that $S$ is the weighted projective plane ${\mathbb{P}}(3,2,1)$ and $\operatorname{Diff}_{S}=\frac{10}{11}C+\frac{1}{2}L$, where $C$ is an elliptic curve. Log del Pezzo surfaces of elliptic type (like $(\overline{S},\operatorname{Diff}_{\overline{S}})$ in the above theorem) were classified by T. Abe [Ab]. Our proof uses different, very easy arguments. 2. Preliminary results In this paper we use the terminology of the Minimal Model Program [KMM], [Ut]. For the definition of complements and their properties we refer to [Sh, Sect. 5], [Ut, Ch. 19] and [P3]. Definition 2.1 ([Sh, Sect. 3], [Ut, Ch. 16]).
Let $X$ be a normal variety and let $S\neq\varnothing$ be an effective reduced divisor on $X$. Let $B$ be a ${\mathbb{Q}}$-divisor on $X$ such that $S$ and $B$ have no common components. Assume that $K_{X}+S$ is lc in codimension two. Then the different of $B$ on $S$ is defined by $$K_{S}+\operatorname{Diff}_{S}{\left(B\right)}\equiv(K_{X}+S+B)|_{S}.$$ Usually we will write simply $\operatorname{Diff}_{S}$ instead of $\operatorname{Diff}_{S}{\left(0\right)}$. Theorem 2.2 (Inversion of Adjunction [Ut, 17.6]). Let $X$ be a normal variety and let $D$ be a boundary on $X$. Write $D=S+B$, where $S=\left\lfloor D\right\rfloor$. Assume that $K_{X}+S+B$ is ${\mathbb{Q}}$-Cartier. Then $(X,S+B)$ is plt near $S$ iff $S$ is normal and $(S,\operatorname{Diff}_{S}{\left(B\right)})$ is klt. Definition 2.3 ([P1]). Let $X$ be a normal variety and let $g\colon Y\to X$ be a birational contraction such that the exceptional locus of $g$ contains exactly one irreducible divisor, say $S$. Assume that $K_{Y}+S$ is plt and $-(K_{Y}+S)$ is $g$-ample. Then $g\colon(Y\supset S)\to X$ is called a plt blowup of $X$. The key point in the proof of Theorem 1.5 is the following proposition. Proposition 2.4. Let $(X\ni P)$ be a three-dimensional terminal singularity and let $g\colon(Y,S)\to X$ be a plt blowup with $g(S)=P$. Write $\operatorname{Diff}_{S}=\sum\delta_{i}\Delta_{i}$, where $\Delta_{i}$ are irreducible curves, and assume that $\delta_{i_{0}}\geq 6/7$ for some $i_{0}$. Further, assume that $S$ is smooth at singular points of $\Delta_{i_{0}}$. Then $p_{a}(\Delta_{i_{0}})=0$. Lemma 2.5 (cf. [P1, Cor. 5]). Let $(X\ni P)$ be a three-dimensional terminal singularity and let $g\colon(Y,S)\to X$ be a plt blowup with $g(S)=P$. Then there is a boundary $\Upsilon\geq\operatorname{Diff}_{S}$ on $S$ such that (i) $\left\lfloor\Upsilon\right\rfloor\neq 0$; (ii) $-(K_{S}+\Upsilon)$ is ample. Moreover, $K_{S}+\operatorname{Diff}_{S}$ has a non-klt $1$, $2$, $3$, $4$, or $6$-complement. Proof.
Regard $(X\ni P)$ as an analytic germ. It was shown in [YPG, Sect. 6.4] that the general element $F\in|-K_{X}|$ has a normal Du Val singularity at $P$. By Inversion of Adjunction 2.2, $K_{X}+F$ is plt. Consider the crepant pull-back $K_{Y}+aS+F_{Y}=g^{*}(K_{X}+F)$, where $F_{Y}$ is the proper transform of $F$ and $a<1$. Since both $K_{Y}+S$ and $g^{*}K_{X}$ are ${\mathbb{Q}}$-Cartier, so are $S$ and $F_{Y}$. Clearly, $-(K_{Y}+S+F_{Y})$ is $g$-ample. Let $\Upsilon^{\prime}:=\operatorname{Diff}_{S}{\left(F_{Y}\right)}$. Then $\left\lfloor\Upsilon^{\prime}\right\rfloor\neq 0$ and $-(K_{S}+\Upsilon^{\prime})$ is ample. Therefore $\Upsilon$ can be taken to be $\Upsilon^{\prime}$. Take $\Delta=\operatorname{Diff}_{S}+\lambda(\Upsilon-\operatorname{Diff}_{S})$ for suitable $0<\lambda\leq 1$ so that $K_{S}+\Delta$ is lc but not klt (and $-(K_{S}+\Delta)$ is ample). By [Sh1, Sect. 2] (see also [P3, 5.4.1]) there exists either a $1$, $2$, $3$, $4$, or $6$-complement of $K_{S}+\Delta$ which is not klt. ∎ A very important problem is to prove the last lemma without using [YPG, Sect. 6.4], i.e. the classification of terminal singularities. This could provide a way to higher-dimensional generalizations. Proof of Proposition 2.4. Put $C:=\Delta_{i_{0}}$ and let $\delta_{i_{0}}=1-1/m$, $m\geq 7$. Assume that $p_{a}(C)\geq 1$. Let $\Upsilon$ be as in Lemma 2.5 and let $K_{S}+\Theta$ be a non-klt $1$, $2$, $3$, $4$, or $6$-complement of $K_{S}+\operatorname{Diff}_{S}$. Using that the coefficients of $\operatorname{Diff}_{S}$ are standard [Sh, Prop. 3.9] it is easy to see that $\Theta\geq\operatorname{Diff}_{S}$ and $\Theta\geq C$ [P3, Sect. 4.7]. In particular, $K_{S}+C$ is lc. Further, $C\not\subset\left\lfloor\Upsilon\right\rfloor$.
Indeed, otherwise by Adjunction we have $$-\deg K_{C}\geq-\deg(K_{C}+\operatorname{Diff}_{C}{\left(\Upsilon-C\right)})=-(K_{S}+\Upsilon)\cdot C>0.$$ This implies $p_{a}(C)=0$, a contradiction. By Lemma 2.6 below, $\Theta=C$, $p_{a}(C)=1$, $S$ is smooth along $C$ and has only Du Val singularities outside. Therefore, $\operatorname{Diff}_{S}=(1-1/m)C$ and $-K_{S}\equiv C\equiv-m(K_{S}+(1-1/m)C)$ is ample (see 2.3). Thus $S$ is a del Pezzo surface with at worst Du Val singularities. Since $C\not\subset\left\lfloor\Upsilon\right\rfloor$, we can write $\Upsilon=\alpha C+L+\Upsilon^{o}$, where $L$ is an irreducible curve, $1>\alpha\geq 1-1/m\geq 6/7$, $C\not\subset\operatorname{Supp}(\Upsilon^{o})$ and $\Upsilon^{o}\geq 0$. Further, $$0<K_{S}\cdot(K_{S}+\Upsilon)=K_{S}\cdot(K_{S}+\alpha C+L+\Upsilon^{o})\leq K_{S}\cdot((1-\alpha)K_{S}+L)\leq\frac{1}{7}K_{S}^{2}+K_{S}\cdot L.$$ (1) Thus $K_{S}^{2}>-7K_{S}\cdot L\geq 7$. Let $S^{\min}\to S$ be the minimal resolution. By Noether’s formula, $K_{S}^{2}+\rho(S^{\min})=K_{S^{\min}}^{2}+\rho(S^{\min})=10$. Thus, $8\leq K_{S}^{2}\leq 9$ and $\rho(S^{\min})\leq 2$. In particular, $S$ either is smooth or has exactly one singular point which is of type $A_{1}$. By (1), $-K_{S}\cdot L=1$. Similarly to (1) we have $$0<-(K_{S}+\Upsilon)\cdot L\leq-(1-\alpha)K_{S}\cdot L-L^{2}.$$ Hence $L^{2}<1-\alpha\leq 1/7$, so $L^{2}\leq 0$. This means that the curve $L$ generates an extremal ray on $S$ and $\rho(S)=2$. Therefore, $S$ is smooth and $K_{S}^{2}=8$. In this case, $S$ is a rational ruled surface (${\mathbb{P}}^{1}\times{\mathbb{P}}^{1}$ or ${\mathbb{F}}_{1}$). Let $\ell$ be a general fiber of the ruling. Then $$0<-(K_{S}+\alpha C+L+\Upsilon^{o})\cdot\ell\leq-(1-\alpha)K_{S}\cdot\ell-L\cdot\ell\leq\frac{2}{7}-L\cdot\ell.$$ Hence $L\cdot\ell=0$, so $L$ is a fiber of $S\to{\mathbb{P}}^{1}$ and $-K_{S}\cdot L=2$. This contradicts (1). ∎ Lemma 2.6 (see [P3, Lemma 8.3.6]).
Let $(S,C+\Xi)$ be a rational projective log surface, where $C$ is a reduced divisor and $\Xi$ an arbitrary boundary. Assume that $K_{S}+C+\Xi$ is lc, $S$ is smooth at singular points of $C$, $K_{S}+C+\Xi\equiv 0$, $C$ is connected and $p_{a}(C)\geq 1$. Then $\Xi=0$, $K_{S}+C\sim 0$, $S$ is smooth along $C$ and has only Du Val singularities outside. Proof. By Adjunction, $K_{C}+\operatorname{Diff}_{C}{\left(\Xi\right)}=0$. Hence $\operatorname{Diff}_{C}{\left(\Xi\right)}=0$. This shows that $C\cap\operatorname{Supp}(\Xi)=\varnothing$ and $S$ has no singularities at points of $C\setminus\operatorname{Sing}(C)$ (see [Ut, Prop. 16.6, Cor. 16.7]). Let $\mu\colon\tilde{S}\to S$ be the minimal resolution and let $\tilde{C}$ be the proper transform of $C$ on $\tilde{S}$. Define $\tilde{\Xi}$ as the crepant pull-back: $K_{\tilde{S}}+\tilde{C}+\tilde{\Xi}=\mu^{*}(K_{S}+C+\Xi)$. It is sufficient to show that $\tilde{\Xi}=0$. Assume the contrary. Replace $(S,C+\Xi)$ with $(\tilde{S},\tilde{C}+\tilde{\Xi})$. It is easy to see that all the assumptions of the lemma hold for this new $(S,C+\Xi)$. Contractions of $(-1)$-curves again preserve the assumptions. Since $C$ and $\operatorname{Supp}(\Xi)$ are disjoint, the whole of $\Xi$ cannot be contracted. Thus we get $S\simeq{\mathbb{P}}^{2}$ or $S\simeq{\mathbb{F}}_{n}$ (a rational ruled surface). In both cases simple computations give us $\Xi=0$. ∎ 3. Construction of a good model Notation as in 1.1. We recall briefly the construction of [P2] (see also [PSh]). Assume that $f\colon X\to Z\ni o$ is exceptional. Let $K_{X}+F$ be a complement which is not klt. There is a divisor $S$ of ${\mathcal{K}}(X)$ such that $a(S,F)=-1$. Since $f$ is exceptional, this divisor is unique. 3.1. First we assume that the center of $S$ on $X$ is a curve or a point. Then $\left\lfloor F\right\rfloor=0$. Let $g\colon Y\to X$ be a minimal log terminal modification of $(X,F)$ [Ut, 17.10], i. e.
$g$ is a birational projective morphism such that $Y$ is ${\mathbb{Q}}$-factorial and $K_{Y}+S+A=g^{*}(K_{X}+F)$ is dlt, where $A$ is the proper transform of $F$. In our situation, $K_{Y}+S+A$ is plt. By [Ut, Prop. 2.17], $K_{Y}+S+(1+\varepsilon)A$ is also plt for sufficiently small positive $\varepsilon$. 3.1.1. Run the $(K+S+(1+\varepsilon)A)$-Minimal Model Program over $Z$: $$\begin{array}{ccc}Y&\dashrightarrow&\overline{Y}\\ {\scriptstyle g}\downarrow&&\downarrow{\scriptstyle q}\\ X&\stackrel{\scriptstyle f}{\longrightarrow}&Z\end{array}$$ Note that $K_{\overline{Y}}+\overline{S}+(1+\varepsilon)\overline{A}\equiv\varepsilon\overline{A}\equiv-\varepsilon(K_{\overline{Y}}+\overline{S})$. In the end we get a so-called good model, i. e. a log pair $(\overline{Y},\overline{S}+\overline{A})$ such that one of the following holds: (3.1.A) $\rho(\overline{Y}/Z)=2$ and $-(K_{\overline{Y}}+\overline{S})$ is nef and big over $Z$; (3.1.B) $\rho(\overline{Y}/Z)=1$ and $-(K_{\overline{Y}}+\overline{S})$ is ample over $Z$. 3.2. Now assume that the center of $S$ on $X$ is of codimension one. Then $S=\left\lfloor F\right\rfloor$. In this case, we put $g=\operatorname{id}$, $Y=X$ and $A=F-S$. If $-(K_{X}+S)$ is nef, then we also put $\overline{Y}=X$, $\overline{S}=S$. Assume $-(K_{X}+S)\equiv A$ is not nef. Since $A$ is effective, $f$ is birational. If $f$ is divisorial, then it must contract a component of $\operatorname{Supp}(A)$. Thus $S$ is not a compact surface, a contradiction (see [P2, Prop. 2.2]). Therefore $f$ is a flipping contraction. In this case, in diagram (3.1.1) the map $X=Y\dashrightarrow\overline{Y}$ is the corresponding flip. Since $f$ is exceptional, in both cases 3.1 and 3.2 we have $f(g(S))=q(\overline{S})=o$ (see [P2, Prop. 2.2]).
Adjunction Formula 2.1 gives us that $(\overline{S},\operatorname{Diff}_{\overline{S}})$ is a klt log del Pezzo surface, i. e. $(\overline{S},\operatorname{Diff}_{\overline{S}})$ is klt and $-(K_{\overline{S}}+\operatorname{Diff}_{\overline{S}})$ is nef and big (see [P2, Lemma 2.4]). Moreover, exceptionality of $f$ implies that the pair $(\overline{S},\operatorname{Diff}_{\overline{S}})$ is exceptional, i. e. any complement $K_{\overline{S}}+\operatorname{Diff}_{\overline{S}}^{+}$ is klt. Proposition 3.3. Let $f\colon X\to Z\ni o$ be as in 1.1. Assume that $f$ is exceptional. Furthermore, (i) if $f$ is divisorial, we assume that the point $(Z\ni o)$ is terminal; (ii) in the case $\dim(Z)=1$, we assume that singularities of $X\setminus f^{-1}(o)$ are canonical. Then case (3.1.B) does not occur. Proof. Assume the contrary. Then $\rho(\overline{Y}/Z)=1$ and $q\colon\overline{Y}\to Z$ is also an exceptional contraction as in 1.1. First, we consider the case when $f$ is divisorial. Then $q$ is a plt blowup of a terminal point $(Z\ni o)$ and $q(\overline{S})=o$ (see [P2, Prop. 2.2]). By Lemma 2.5, $(\overline{S},\operatorname{Diff}_{\overline{S}})$ has a non-klt complement. This contradicts [P2, Prop. 2.4]. Clearly, $f$ cannot be a flipping contraction (because, in this case, the map $Y\dashrightarrow\overline{Y}$ must be an isomorphism in codimension one). If $\dim(Z)=2$, then $q$ is not equidimensional, a contradiction. Finally, we consider the case $\dim(Z)=1$ (and $\overline{S}$ is the central fiber of $q$). Let $\overline{F}$ be a general fiber of $q$ (a del Pezzo surface with at worst Du Val singularities). Consider the exact sequence $$0\longrightarrow{\mathcal{O}}_{\overline{Y}}(-K_{\overline{Y}}-\overline{F})\longrightarrow{\mathcal{O}}_{\overline{Y}}(-K_{\overline{Y}})\longrightarrow{\mathcal{O}}_{\overline{F}}(-K_{\overline{F}})\longrightarrow 0.$$ By Kawamata-Viehweg Vanishing [KMM, Th.
1-2-5], $R^{1}q_{*}{\mathcal{O}}_{\overline{Y}}(-K_{\overline{Y}}-\overline{F})=0$. Hence we have a surjection $$H^{0}(\overline{Y},{\mathcal{O}}_{\overline{Y}}(-K_{\overline{Y}}))\longrightarrow H^{0}(\overline{F},{\mathcal{O}}_{\overline{F}}(-K_{\overline{F}}))\longrightarrow 0.$$ Here $H^{0}(\overline{F},{\mathcal{O}}_{\overline{F}}(-K_{\overline{F}}))\neq 0$ (because $-K_{\overline{F}}$ is Cartier and ample). Therefore, $H^{0}(\overline{Y},{\mathcal{O}}_{\overline{Y}}(-K_{\overline{Y}}))\neq 0$. Let $\overline{G}\in|-K_{\overline{Y}}|$ be any member. Take (positive) $c\in{\mathbb{Q}}$ so that $K_{\overline{Y}}+\overline{S}+c\overline{G}$ is lc and not plt. Clearly $c\leq 1$, so $-(K_{\overline{Y}}+\overline{S}+c\overline{G})\equiv-(1-c)K_{\overline{Y}}$ is $q$-nef. By the Base Point Free Theorem [KMM, Th. 3-1-1] there is a complement $K_{\overline{Y}}+\overline{S}+c\overline{G}+L$, where $nL\in|-n(K_{\overline{Y}}+\overline{S}+c\overline{G})|$ for sufficiently large and divisible $n$, and this complement is not plt, which contradicts exceptionality (see [P2, Prop. 2.4]). ∎ 4. Proof of Theorem 1.5 In this section we use notation and assumptions of 1.1 and Theorem 1.5. If $g=\operatorname{id}$, then $Y=X$ and $\overline{Y}$ have only terminal singularities (see 3.2 and [KMM, 5-1-11]). Then $\operatorname{Diff}_{\overline{S}}=0$, a contradiction. From now on we assume that $g\neq\operatorname{id}$. Denote $\overline{C}:=\overline{\Delta}_{i_{0}}$ and let $\delta_{i_{0}}=1-1/m$. Since $\delta(\overline{S},\operatorname{Diff}_{\overline{S}})=1$, there are no divisors $E\neq\overline{C}$ of ${\mathcal{K}}(\overline{S})$ with $a(E,\operatorname{Diff}_{\overline{S}})\leq-6/7$. This gives us that $\overline{S}$ is smooth at $\operatorname{Sing}(\overline{C})$ whenever $\operatorname{Sing}(\overline{C})\neq\varnothing$ (see [P3, Lemma 9.1.8]). By our assumptions, $\overline{Y}$ is singular along $\overline{C}$.
Moreover, at the general point of $\overline{C}$ we have an analytic isomorphism (see [Ut, 16.6]): $$(\overline{Y},\overline{S},\overline{C})\simeq({\mathbb{C}}^{3},\{x_{3}=0\},\{x_{1}\text{-axis}\})/{\mathbb{Z}}_{m}(0,1,q),\quad(m,q)=1.$$ (2) Lemma 4.1. Notation as above. Assume that $p_{a}(\overline{C})\geq 1$. Then the map $\overline{Y}\dashrightarrow Y$ is an isomorphism at the general point of $\overline{C}$. Moreover, if $\overline{P}\in\overline{C}$ is a singular point, then $\overline{Y}\dashrightarrow Y$ is an isomorphism at $\overline{P}$. In particular, the proper transform $C$ of $\overline{C}$ is a curve with $p_{a}(C)\geq 1$. 4.2. First we show that Lemma 4.1 implies Theorem 1.5. Assume $p_{a}(\overline{C})\geq 1$. Clearly, $C\subset S$. By Lemma 4.1, $p_{a}(C)\geq 1$ and $$\#\operatorname{Sing}(C)\geq\#\operatorname{Sing}(\overline{C}).$$ (3) From (2), we have $\operatorname{Diff}_{S}=(1-1/m)C+(\text{other terms})$. 4.3. Consider the case when $g(S)$ is a point. By Lemma 2.5, as in the proof of Proposition 2.4, one can show that there exists a $1$, $2$, $3$, $4$, or $6$-complement of the form $K_{S}+C+(\text{other terms})$ on $S$. By Adjunction, $p_{a}(C)=1$. Therefore, $p_{a}(\overline{C})=1$ and we have equality in (3). Thus $S$ is smooth at $\operatorname{Sing}(C)$ (whenever $\operatorname{Sing}(C)\neq\varnothing$). We have a contradiction by Proposition 2.4. 4.4. Consider the case when $g(S)$ is a curve. Note that $S$ is rational (because $(\overline{S},\operatorname{Diff}_{\overline{S}})$ is a klt log del Pezzo, see e. g. [P3, Sect. 5.5]) and so is $g(S)$. Consider the restriction $g_{S}\colon S\to g(S)$. Since $p_{a}(C)\geq 1$, $C$ is not a section of $g_{S}$. Let $\ell$ be the general fiber of $g_{S}$. Then $\ell\simeq{\mathbb{P}}^{1}$ and $$2=-K_{S}\cdot\ell>\operatorname{Diff}_{S}\cdot\ell\geq(1-1/m)C\cdot\ell\geq\frac{6}{7}C\cdot\ell.$$ Thus $C\cdot\ell=2$ and $C$ is a $2$-section of $g_{S}$.
Moreover, $$(\operatorname{Diff}_{S}-(1-1/m)C)\cdot\ell<2-2(1-1/m)=2/m<1/2.$$ Hence $\operatorname{Diff}_{S}$ has no horizontal components other than $C$. Let $P:=g(\ell)$, let $X^{\prime}$ be a germ of a general hyperplane section through $P$ and let $Y^{\prime}:=g^{-1}(X^{\prime})$. Consider the induced (birational) contraction $g^{\prime}\colon Y^{\prime}\to X^{\prime}$. Since singularities of $X$ are isolated, $X^{\prime}$ is smooth. By the Bertini Theorem, $K_{Y^{\prime}}+\ell$ is plt. Further, $Y^{\prime}$ has exactly two singular points, the points of $C\cap\ell$, and these points are analytically isomorphic to ${\mathbb{C}}^{2}/{\mathbb{Z}}_{m}(1,q)$ (see (2)). This contradicts the following lemma. Lemma 4.5 (see [P3, Sect. 6]). Let $\phi\colon Y^{\prime}\to X^{\prime}\ni o^{\prime}$ be a birational contraction of surfaces and let $\ell:=\phi^{-1}(o^{\prime})_{\operatorname{red}}$. Assume that $K_{Y^{\prime}}+\ell$ is plt and $X^{\prime}\ni o^{\prime}$ is smooth. Then $Y^{\prime}$ has on $\ell$ at most two singular points. Moreover, if $Y^{\prime}$ has on $\ell$ exactly two singular points, then these are of types $\frac{1}{m_{1}}(1,q_{1})$ and $\frac{1}{m_{2}}(1,q_{2})$, where $\gcd(m_{i},q_{i})=1$ and $\gcd(m_{1},m_{2})=1$. Proof. We need only the second part of the lemma, so we omit the proof of the first part. Let $\operatorname{Sing}(Y^{\prime})=\{P_{1},P_{2}\}$. We use topological arguments. Regard $Y^{\prime}$ as an analytic germ along $\ell$. Since $(X^{\prime}\ni o^{\prime})$ is smooth, $\pi_{1}(Y^{\prime}\setminus\ell)\simeq\pi_{1}(X^{\prime}\setminus\{o^{\prime}\})\simeq\pi_{1}(X^{\prime})=\{1\}$. On the other hand, for a sufficiently small neighborhood $Y^{\prime}\supset U_{i}\ni P_{i}$ the map $\pi_{1}(U_{i}\cap\ell\setminus P_{i})\to\pi_{1}(U_{i}\setminus P_{i})$ is surjective (see [K, Proof of Th. 9.6]).
Using Van Kampen’s Theorem as in [Mo, 0.4.13.3] one can show that $$\pi_{1}(Y^{\prime}\setminus\{P_{1},P_{2}\})\simeq\langle\tau_{1},\tau_{2}\rangle/\{\tau_{1}^{m_{1}}=\tau_{2}^{m_{2}}=1,\ \tau_{1}\tau_{2}=1\}.$$ Since $\tau_{2}=\tau_{1}^{-1}$, this group is cyclic of order $\gcd(m_{1},m_{2})$, hence nontrivial if $\gcd(m_{1},m_{2})\neq 1$, a contradiction. ∎ Proof of Lemma 4.1. The map $Y\dashrightarrow\overline{Y}$ is a composition of log flips: $$\begin{array}{ccccccccccc}Y=Y_{0}&&&&Y_{1}&&&&Y_{2}&\cdots&Y_{N}=\overline{Y}\\ {\scriptstyle g}\downarrow&\searrow&&\swarrow&&\searrow&&\swarrow&&&\\ X&&W_{1}&&&&W_{2}&&&&\end{array}$$ where every contraction $\searrow$ is $(K+S+(1+\varepsilon)A)$-negative and every $\swarrow$ is $(K+S+(1-\varepsilon)A)$-negative. Kawamata-Viehweg Vanishing [KMM, Th. 1-2-5] implies that the exceptional loci of these contractions are trees of smooth rational curves [Mo, Cor. 1.3]. Thus Lemma 4.1 is obvious if the curve $\overline{C}$ is nonrational. From now on we assume that $\overline{C}$ is a (singular) rational curve. Lemma 4.6. Notation as above. Let $S_{i}$ be the proper transform of $S$ on $Y_{i}$. If $f$ is not a flipping contraction, then $-S_{i}$ is ample over $W_{i}$ for $i=1,\dots,N$. In particular, all nontrivial fibers of $Y_{i}\to W_{i}$ are contained in $S_{i}$. Proof. We claim that $-S_{i}$ is not nef over $Z$ for $i=1,\dots,N-1$. Indeed, assume $\operatorname{Exc}(f)\neq f^{-1}(o)$. Take $o^{\prime}\in f(\operatorname{Exc}(f))$, $o^{\prime}\neq o$, and let $\ell\subset g^{-1}(f^{-1}(o^{\prime}))$ be any compact irreducible curve. Clearly, $Y\dashrightarrow Y_{i}$ is an isomorphism along $\ell$. Let $\ell_{i}$ be the proper transform of $\ell$ on $Y_{i}$. Since $S\cdot\ell=0$, we have $S_{i}\cdot\ell_{i}=0$. The curve $\ell_{i}$ cannot generate an extremal ray (because the extremal contractions on $Y_{1},\dots,Y_{N-1}$ are flipping).
If $-S_{i}$ is nef over $Z$, then taking into account that $\rho(Y_{i}/Z)=2$ we obtain $S_{i}\equiv 0$, a contradiction. Thus we may assume that $\operatorname{Exc}(f)=f^{-1}(o)$ is a (prime) divisor. Then the exceptional locus of $Y_{i}\to Z$ is compact. If $-S_{i}$ is nef, this implies $S_{i}=\operatorname{Exc}(Y_{i}\to Z)$. Again we have a contradiction because the proper transform of $\operatorname{Exc}(f)$ does not coincide with $S_{i}$. We prove our lemma by induction on $i$. It is easy to see that $-S$ is ample over $W_{0}:=X$. The Mori cone $\overline{NE}(Y_{i})$ is generated by two extremal rays. Denote them by $R_{i}$ and $Q_{i}$, where $R_{i}$ (resp. $Q_{i}$) corresponds to the contraction $Y_{i}\to W_{i}$ (resp. $Y_{i}\to W_{i+1}$). Suppose that our assertion holds on $Y_{i-1}$, i.e. $S_{i-1}\cdot R_{i-1}<0$. By our claim above, $S_{i-1}\cdot Q_{i-1}>0$, and after the flip $Y_{i-1}\dashrightarrow Y_{i}$ we have $S_{i}\cdot R_{i}<0$. This completes the proof of the lemma. ∎ 4.7. Let us consider the case when $f$ is not a flipping contraction. Let $C^{(i)}$ be the proper transform of $\overline{C}$ on $Y_{i}$. If $p_{a}(C^{(i)})\geq 1$, then $Y_{i}\to W_{i}$ cannot contract $C^{(i)}$. Thus $C^{(i+1)}$ is well defined. Now we need only to show that at each step of (4.4) no flipping curves $\operatorname{Exc}(Y_{i}\to W_{i})$ pass through the singular points of $C^{(i)}$. (Then $Y_{i}\dashrightarrow Y_{i+1}$ is an isomorphism near the singular points of $C^{(i)}$ and we are done.) By Lemma 4.6 all flipping curves $\operatorname{Exc}(Y_{i}\to W_{i})$ are contained in $S_{i}$. Therefore we can reduce the problem to dimension two. The last claim $\operatorname{Exc}(Y_{i}\to W_{i})\cap\operatorname{Sing}(C^{(i)})=\varnothing$ follows easily from the lemma below. Lemma 4.8.
Let $\varphi\colon S\to\widehat{S}\ni\hat{o}$ be a birational contraction of surfaces and let $\Delta=\sum\delta_{i}\Delta_{i}$ be a boundary on $S$ such that $K_{S}+\Delta$ is klt and $-(K_{S}+\Delta)$ is $\varphi$-ample. Put $\Theta:=\sum_{\delta_{i}\geq 6/7}\Delta_{i}$. Assume that $\varphi$ does not contract components of $\Theta$. Then $\Theta$ is smooth on $\varphi^{-1}(\hat{o})\setminus\operatorname{Sing}(S)$. Proof. Assume the contrary and let $P\in\operatorname{Sing}(\Theta)\cap\bigl(\varphi^{-1}(\hat{o})\setminus\operatorname{Sing}(S)\bigr)$. Let $\Gamma$ be a component of $\varphi^{-1}(\hat{o})$ passing through $P$. Then $\Gamma\simeq{\mathbb{P}}^{1}$. There is an $n$-complement $\Delta^{+}=\sum\delta_{i}^{+}\Delta_{i}$ of $K_{S}+\Delta$ near $\varphi^{-1}(\hat{o})$ for $n\in\{1,2,3,4,6\}$ (see [Sh, Th. 5.6], [Ut, Cor. 19.10], or [P3, Sect. 6]). By the definition of complements, $\delta_{i}^{+}\geq\min\{1,\left\lfloor(n+1)\delta_{i}\right\rfloor/n\}$ for all $i$. In particular, $\delta_{i}^{+}=1$ whenever $\delta_{i}\geq 6/7$, i.e. $\Theta\leq\Delta^{+}$. This means that $K_{S}+\Theta$ is lc. Since $P\in\operatorname{Sing}(\Theta)$, $K_{S}+\Theta$ is not plt at $P$. Therefore $\Theta=\Delta^{+}$ near $P$ and $\Gamma$ is not a component of $\Delta^{+}$. We claim that $K_{S}+\Gamma$ is lc. Indeed, $K_{S}+\Gamma$ is lc at $P$ (because both $\Gamma$ and $S$ are smooth at $P$). Assume that $K_{S}+\Gamma$ is not lc at $Q\neq P$. Then $K_{S}+(1-\varepsilon)\Gamma+\Delta^{+}$ is not lc at $P$ and $Q$ for $0<\varepsilon\ll 1$. This contradicts the Connectedness Lemma [Sh, 5.7], [Ut, 17.4]. Thus $K_{S}+\Gamma$ is lc and we can apply Adjunction: $$(K_{S}+\Delta^{+}+\Gamma)|_{\Gamma}\geq(K_{S}+\Theta+\Gamma)|_{\Gamma}=K_{\Gamma}+\operatorname{Diff}_{\Gamma}{\left(\Theta\right)}.$$ Since $K_{S}+\Delta^{+}\equiv 0$ over $\widehat{S}$ and $\Gamma\simeq{\mathbb{P}}^{1}$, we have $\deg\operatorname{Diff}_{\Gamma}{\left(\Theta\right)}<2$.
On the other hand, the coefficient of $\operatorname{Diff}_{\Gamma}{\left(\Theta\right)}$ at $P$ is equal to $(\Theta\cdot\Gamma)_{P}\geq 2$, a contradiction. ∎ 4.9. Finally let us consider the case when $f$ is a flipping contraction. If $-S_{i}$ is ample over $W_{i}$ for $i=1,\dots,N$, then we can use the arguments of 4.7. From now on we assume that $S_{I}$ is nef over $W_{I}$ for some $1\leq I\leq N$. Let $L$ be an effective divisor on $Z$ passing through $o$. Take $c\in{\mathbb{Q}}$ so that $K_{X}+cf^{*}L$ is lc but not klt. By the Base Point Free Theorem [KMM, 3-1-1], there is a member $M\in\left|-n(K_{X}+cf^{*}L)\right|$ for some $n\in{\mathbb{N}}$ such that $K_{X}+cf^{*}L+\frac{1}{n}M$ is lc (but not klt). Thus, we may assume $F=cf^{*}L+\frac{1}{n}M$. Let $K_{Y}+S+B^{\prime}=g^{*}(K_{X}+cf^{*}L)$ be the crepant pull-back. Write $B=B^{\prime}+B^{\prime\prime}$, where $B^{\prime},\ B^{\prime\prime}\geq 0$. Then $-(K_{Y}+S+B^{\prime})$ is nef over $Z$ and trivial on fibers of $g$. Run the $(K_{Y}+S+B^{\prime})$-MMP over $Z$. Since $K_{Y}+S+B^{\prime}\equiv-B^{\prime\prime}\neq 0$, this ${\mathbb{Q}}$-divisor cannot become nef until $S$ is contracted. Therefore after a number of flips we get a divisorial contraction: $$Y\dashrightarrow Y_{1}\dashrightarrow\cdots\dashrightarrow Y_{N}=\overline{Y}\dashrightarrow\cdots Y_{N^{\prime}}\longrightarrow X^{\prime}.$$ (4) Since $\rho(Y_{i}/Z)=2$, the cone $\overline{NE}(Y_{i}/Z)$ has exactly two extremal rays. Hence the sequence (4.4) is contained in (4). Claim 4.10. $S_{j}$ is nef over $W_{j}$ and $-S_{j}$ is ample over $W_{j+1}$ for $I\leq j\leq N^{\prime}$. Proof. Clearly, $-S_{I}$ is ample over $W_{I+1}$ (because $S_{I}$ cannot be nef over $Z$). After the flip $Y_{I}\dashrightarrow Y_{I+1}$ we have that $S_{I+1}$ is ample over $W_{I+1}$. Continuing the process we get our claim. ∎ Further, $X^{\prime}$ has only terminal singularities.
Indeed, $X^{\prime}$ is ${\mathbb{Q}}$-factorial, $\rho(X^{\prime}/Z)=1$ and $X^{\prime}\to Z$ is an isomorphism in codimension one. Therefore, one of the following holds: (i) $-K_{X^{\prime}}$ is ample over $Z$; then $X^{\prime}\simeq X$; (ii) $K_{X^{\prime}}$ is numerically trivial over $Z$; then so is $K_{X}$, a contradiction; (iii) $K_{X^{\prime}}$ is ample over $Z$; then $X\dashrightarrow X^{\prime}$ is a flip and $X^{\prime}$ has only terminal singularities [KMM, 5-1-11]. This also shows that $Y_{N^{\prime}}\to X^{\prime}$ is a plt blowup. Then we can replace $X$ with $X^{\prime}$ and apply the arguments of 4.7. This finishes the proof of Theorem 1.5. ∎ Concluding Remark Shokurov’s classification of exceptional log del Pezzos with $\delta\geq 1$ uses reduction to the case $\rho=1$. More precisely, this method uses the following modifications: $\overline{S}\longleftarrow S^{\bullet}\longrightarrow S^{\circ}$, where $S^{\bullet}\to\overline{S}$ is the blow up of all divisors with discrepancy $a(E,\operatorname{Diff}_{\overline{S}})\geq 6/7$, $S^{\bullet}\to S^{\circ}$ is a sequence of some extremal contractions, and $\rho(S^{\circ})=1$. Then all divisors with discrepancy $\geq 6/7$ are nonexceptional on $S^{\circ}$. In our case, a smooth elliptic curve with coefficient $\geq 6/7$ on $S^{\circ}$ cannot be contracted to a point on $\overline{S}$ (because the singularities of $\overline{S}$ are rational). By Theorem 1.5 this case does not occur. The situation in the case of a singular rational curve with coefficient $\geq 6/7$ on $S^{\circ}$ which is contracted to a point on $\overline{S}$ is more complicated. This case will be discussed elsewhere. Acknowledgements I thank V. V. Shokurov for encouraging me to write up this note. I was working on this problem at the Tokyo Institute of Technology during my stay in 1999-2000. I am very grateful to the staff of the institute and especially to Professor S. Ishii for hospitality. References [Ab] Abe T.
Classification of Exceptional Complements: Elliptic Curve Case, e-print alg-geom/9711029 [HW] Hidaka F., Watanabe K. Normal Gorenstein surfaces with ample anticanonical divisor, Tokyo J. Math. 4 (1981) 319–330 [IP] Ishii S. & Prokhorov Yu. G. Hypersurface exceptional singularities, preprint TITECH-MATH 06-99 (#92) [K] Kawamata Y. The crepant blowing-up of $3$-dimensional canonical singularities and its application to the degeneration of surfaces, Ann. Math. 127 (1988) 93–163 [KMM] Kawamata Y., Matsuda K., Matsuki K. Introduction to the minimal model program, Adv. Stud. Pure Math. 10 (1987) 283–360 [Mo] Mori S. Flip theorem and the existence of minimal models for $3$-folds, J. Amer. Math. Soc. 1 (1988) 117–253 [P1] Prokhorov Yu. G. Blow-ups of canonical singularities, Algebra. Proc. Internat. Algebraic Conf. on the Occasion of the 90th Birthday of A. G. Kurosh, Moscow, Russia, May 25-30, 1998, Yu. Bahturin ed., Walter de Gruyter, Berlin (2000), 301–317 [P2] Prokhorov Yu. G. Boundedness of nonbirational extremal contractions, Internat. J. Math. 11 (2000) no. 3, 393–411 [P3] Prokhorov Yu. G. Lectures on complements on log surfaces, e-print math.AG/9912111 [PSh] Prokhorov Yu. G. & Shokurov V. V. The first main theorem on complements: from global to local, preprint TITECH-MATH 08-99 (#94) [YPG] Reid M. Young person's guide to canonical singularities, in “Algebraic Geometry, Bowdoin, 1985” Proc. Symp. Pure Math. 46 (1987) 345–414 [Sh] Shokurov V. V. $3$-fold log flips, Izv. AN SSSR, Ser. mat. 56 (1992), 105–201 & 57 (1993), 141–175; English transl. Russian Acad. Sci. Izv. Math. 40 (1993), 93–202 & 43 (1994), 527–558 [Sh1] Shokurov V. V. Complements on surfaces, e-print alg-geom/9711024 [Ut] Kollár J. et al. Flips and abundance for algebraic threefolds, A summer seminar at the University of Utah, Salt Lake City, 1991. Astérisque 211 (1992)
The Field Nature of Spin for Electromagnetic Particle A. A. Chernitskii (2006) Abstract The field nature of spin in the framework of the field electromagnetic particle concept is considered. The mathematical character of the fine structure constant is discussed. Three topologically different field models for a charged particle with spin are investigated in the scope of linear electrodynamics. The use of these field configurations as an initial approximation for an appropriate particle solution of nonlinear electrodynamics is discussed. Keywords: spin, elementary particle, unified field theory. PACS: 11.00.00, 12.10.-g 1 Introduction The field electromagnetic particle concept in the framework of a unified nonlinear electrodynamics was discussed in my articles (see, for example, Chernitskii (1999, 2004, 2006a, 2006b)). Here I continue this theme. 2 Electromagnetic particle with spin Let us consider the electromagnetic particle which is a space-localized solution of a nonlinear electrodynamics field model. The field configuration corresponding to the solution is a three-dimensional electromagnetic soliton. It is not unreasonable to consider a field configuration which is more complicated than the simplest spherically symmetrical one with a point singularity. The pure Coulomb field is the known example of such a simplest configuration. We can consider a field configuration with a singly or multiply connected singular region. This singular region can be considered to be small, so that it does not manifest itself explicitly in experiment. But its implicit manifestation is the existence of the spin and the magnetic moment of the particle. Mass, spin, charge, and magnetic moment of the particle appear naturally in the presented approach when the long-range interaction between the particles is considered with the help of a perturbation method Chernitskii (2006b).
The classical equations of motion for an electromagnetic particle in an external electromagnetic field are derived but not postulated. These equations are a manifestation of the nonlinearity of the field model. Charge and magnetic moment in this approach characterize the particle solution at infinity. But mass and spin characterize the particle solution in the localization region and appear as the integrated energy and angular momentum, respectively. Thus we have the following definition for spin: $$\mathtt{\mathbf{s}}=\int\boldsymbol{\mathcal{M}}\,{\rm d}V\;\;,$$ (1) where $\boldsymbol{\mathcal{M}}\doteqdot\mathbf{r}\times\boldsymbol{\mathcal{P}}$ is the angular momentum density (spin density), $\mathbf{r}$ is the position vector, and $\boldsymbol{\mathcal{P}}\doteqdot\,(\mathbf{D}\times\mathbf{B})/4\pi$ is the momentum density (Poynting vector). The angular momentum density can appear in axisymmetric static electromagnetic field configurations with crossed electric and magnetic fields. In this case the crossed electric and magnetic fields generate a momentum (Poynting vector) density which is tangent to a circle with center located on the axis. Because of the axial symmetry, the full angular momentum contains only the appropriate axial component of the angular momentum density. Thus we have the spin density directed along the $z$ axis: $\boldsymbol{\mathcal{M}}_{z}=\boldsymbol{\rho}\times\boldsymbol{\mathcal{P}}$, where $\boldsymbol{\rho}$ is the component of the position vector perpendicular to the $z$ axis. This configuration is shown in Fig. 1. 3 Spin and fine structure constant The value of the electron spin which we have in experiment is $$\mathtt{s}=\frac{\hbar}{2}=\frac{e^{2}}{2\,\alpha}\quad.$$ (2) A particle solution of nonlinear electrodynamics, which can be appropriate to the electron, has a free parameter associated with the electron charge. The existence of this free parameter is connected with a scale invariance of the field model.
Thus we have eleven free parameters for the electromagnetic particle in all, viz. ten parameters of the Poincaré group and one scale parameter. One further known continuous symmetry can be considered. This is the so-called dual symmetry between electric and magnetic fields. An appropriately transformed field will have a magnetic charge. But this continuous transformation of the electromagnetic field is accompanied by the appearance of a vector potential field with an infinite singular line. Because of this, it is reasonable to consider that an appropriate free parameter does not exist for an isolated particle. We can assume that a one-particle solution has only these eleven free parameters. But a separate particle included in a many-particle solution of a nonlinear field model does not have these free parameters. In the case of linear electrodynamics this separate particle has the free parameters because of the known superposition property of the solutions. But in the case of nonlinear electrodynamics only the entire many-particle solution has the free parameters, and an included particle does not. Thus we can assume that the specific value of the electron charge is connected with the nonlinearity of the model, which is the cause of the interaction between particles in the world solution. According to formula (2), the dimensionless constant $2\alpha$ is the ratio of the square of the electron charge $e^{2}$ to the value of the electron spin $\mathtt{s}$. We can consider the electron charge to be a given constant. But the value of the electron spin is calculated in the presented approach by the formula (1). Thus we can consider the fine structure constant as a mathematical one, calculated by the formula $$\alpha=\frac{e^{2}}{2\,\mathtt{s}}\quad,$$ (3) where $\mathtt{s}$ is calculated by the formula (1) for the appropriate particle solution.
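Formula (3) is written in units with the speed of light set to one; restoring the factor of $c$ and inserting the measured spin $\mathtt{s}=\hbar/2$ from (2) gives the familiar $\alpha=e^{2}/(\hbar c)$ in Gaussian units. The following arithmetic sketch (the Gaussian-unit constants are standard CODATA-style values, not taken from the text) checks that the ratio indeed comes out near $1/137$.

```python
# Numerical check of formula (3), alpha = e^2 / (2 s), with the measured
# spin s = hbar/2.  Gaussian (CGS) constants; a factor of c is restored
# to make the ratio dimensionless.
e = 4.80320e-10      # electron charge, esu
hbar = 1.05457e-27   # reduced Planck constant, erg*s
c = 2.99792e10       # speed of light, cm/s

s = hbar / 2                 # electron spin, formula (2)
alpha = e**2 / (2 * s * c)   # formula (3) with c restored

print(1 / alpha)   # ~137.04, the inverse fine structure constant
```

The same check applied to the model configurations below would use the spin computed from formula (1) in place of $\hbar/2$.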
The value of the constant $\alpha$ calculated by the formula (3) for a particle solution in the framework of a field model can be considered as a test of the appropriateness of this field model and particle solution. 4 Three topologically different models for charged particle with spin Let us consider three topologically different models of the electron in the scope of linear electrodynamics. These solutions can be considered as an initial approximation to the appropriate true solution of a nonlinear electrodynamics model. A two-dyon configuration with equal electric and opposite magnetic charges was considered in my article Chernitskii (1999). The spin calculated by formula (1) with the appropriate solution of linear electrodynamics does not depend on the distance between the dyons. It is defined by the values of the electric and magnetic charges. If the absolute value of the ratio between the electric and magnetic charges (which is an additional free parameter) equals the fine structure constant and the full charge equals the electron charge, then the value of the spin equals the electron spin (2). For the details see my article Chernitskii (1999). A configuration with a singular disk is obtained from the Coulomb field by a space shift with a complex number parameter (see, for example, Burinskii (1974)). We have the field in the complex representation for the electromagnetic field by means of the following formal complex shift: $\mathbf{D}+i\mathbf{H}=e\mathbf{r}/r^{3}$, where $x_{1}\to x_{1}$, $x_{2}\to x_{2}$, $x_{3}\to x_{3}+i\rho$. The spin calculated by formula (1) with this field does not depend on the shift parameter $\rho$ (the radius of the disk). But the numerical value of the spin in this case is not equal to the electron spin: $\mathtt{s}\approx 0.027\,{e^{2}}/({2\,\alpha})$. Note that here an additional parameter might be used to obtain the value of spin for the electron.
The magnetic part of the solution must be multiplied by this parameter. But in this case the solution is not obtained by the complex shift from the Coulomb field. A consideration of the static linear electrodynamics equations in toroidal coordinates $(\xi,\eta)$ gives the appropriate solution with toroidal symmetry. This solution can include an electric and a magnetic part. They can be represented with the help of toroidal harmonics, which are the spheroidal harmonics with half-integer index: $P_{n-\frac{1}{2}}^{l}(\cosh\xi)$, where $n$ and $l$ are integers (Morse and Feshbach (1953)). To obtain the right behaviour of the electromagnetic field at infinity for a charged particle with a magnetic moment, we must take the toroidal harmonics $P_{-\frac{1}{2}}^{0}(\cosh\xi)$, $P_{-\frac{3}{2}}^{0}(\cosh\xi)$ for the electric field and $P_{-\frac{1}{2}}^{1}(\cosh\xi)$, $P_{-\frac{3}{2}}^{1}(\cosh\xi)$ for the magnetic one. Because we intend to consider this solution as an initial approximation to a solution of a nonlinear electrodynamics model, it is reasonable to impose the condition that the two electromagnetic invariants vanish near the singular ring. This condition is satisfied when the ratio between the electric and magnetic vector magnitudes tends to unity near the ring. The spin calculated by formula (1) with this toroidal field configuration does not depend on the radius of the singular ring. But the numerical value of the spin in this case is not equal to the electron spin: $\mathtt{s}\approx 0.0006\,{e^{2}}/({2\,\alpha})$. An additional parameter might be used to obtain the value of spin for the electron. The magnetic part of the solution must be multiplied by this parameter. But in this case the solution does not satisfy the condition that the two electromagnetic invariants vanish near the ring.
5 Conclusions and discussions Thus, to obtain the right value of the fine structure constant for the considered field configurations, we must introduce some additional free parameter. But, as mentioned above, for the solution of a nonlinear electrodynamics model we must have only one free parameter, namely the electron charge. We must obtain the right value of the fine structure constant in the case of an appropriate nonlinear electrodynamics model and an appropriate solution. The examined field configurations can be considered as possible initial approximations to the right electron particle solution. It should also be noted that the field configurations considered here may describe charged particles with spin but not neutral or massless ones. However, other particle solutions may describe neutral and massless particles with spin, according to the general approach based on formula (1). These solutions may be more complicated than those considered here. References Chernitskii (1999) A. A. Chernitskii, J. High Energy Phys. 1999, Paper 10, 1–34 (1999), hep-th/9911093. Chernitskii (2004) A. A. Chernitskii, “Born-Infeld equations,” in Encyclopedia of Nonlinear Science, edited by A. Scott, Routledge, New York and London, 2004, pp. 67–69, hep-th/0509087. Chernitskii (2006a) A. A. Chernitskii, Gravitation & Cosmology 12, 130–132 (2006a), hep-th/0609204. Chernitskii (2006b) A. A. Chernitskii, “Mass, spin, charge, and magnetic moment for electromagnetic particle,” in XI Advanced Research Workshop on High Energy Spin Physics (DUBNA-SPIN-05) Proceedings, edited by A. V. Efremov and S. V. Goloskokov, JINR, Dubna, 2006b, pp. 234–239, hep-th/0603040. Burinskii (1974) A. Burinskii, Sov. Phys. JETP 39, 193 (1974). Morse and Feshbach (1953) P. M. Morse and H. Feshbach, Methods of Theoretical Physics, vol. II, McGraw-Hill Book Company, Inc., New York Toronto London, 1953.
Scalar invariants in gravitational and electromagnetic fields Zihua Weng [email protected]. School of Physics and Mechanical & Electrical Engineering, Xiamen University, Xiamen 361005, China Abstract The paper discusses some scalar invariants in the gravitational and electromagnetic fields by means of the characteristics of quaternions. When we emphasize certain definitions of quaternion physical quantities, the speed of light, mass density, energy density, power density, charge density, and spin magnetic moment density, etc., each remain the same in the gravitational and electromagnetic fields under the coordinate transformation. The results explain why there are some relationships among different invariants in the gravitational and electromagnetic fields. mass density; energy density; linear momentum; angular momentum; charge density; spin density; magnetic moment density; quaternion; octonion. pacs: 03.50.-z; 11.80.Cr; 06.30.Dr; 75.10.Hk; 11.30.Er. I INTRODUCTION In the gravitational field, there exist the conservation of mass, the conservation of energy [bender], the mass continuity equation, and the invariable energy density. A. Lavoisier [lavoisier] and H. Cavendish et al. proposed that mass was invariable and conserved. There is not only the conservation of mass but also the mass continuity equation. G. W. Leibniz, J. P. Joule [joule], and O. Heaviside et al. believed that energy was invariable and conserved. A. Einstein [einstein] and M. Planck et al. found the mass-energy equivalence between mass and energy. However, some puzzles still exist about the above conservations. Firstly, the conservation of mass is limited to the case of weak gravitational strength. Is it still valid when the gravitational field cannot be neglected? Secondly, the conservation of mass possesses other clear mathematical formulations in some theories, such as the mass continuity equation.
Does the conservation of energy have similar mathematical formulations in some theories? Thirdly, the energy can be converted into mass, and vice versa. Are there other relations between these two conservations? The quaternion can be used to describe the properties of the electromagnetic field [maxwell]. Similarly, the quaternion can also be used to demonstrate the characteristics of the gravitational field. By means of the scalar invariant of quaternions, we find that there exist some relationships among these conservations and invariants. With the characteristics of the quaternion [adler], we obtain that the speed of light, mass density, and energy density are all invariants in the gravitational field [chu], under the quaternion coordinate transformation [weng]. II Scalar invariants in gravitational field By means of the characteristics of the quaternion, we can obtain some kinds of scalar invariants under the quaternion coordinate transformation in the gravitational field. In this paper, each quaternion definition of a physical quantity possesses one invariant equation. II.1 Quaternion transformation In the quaternion space for the gravitational field, the basis vector is $\mathbb{E}=(\emph{{i}}_{0},\emph{{i}}_{1},\emph{{i}}_{2},\emph{{i}}_{3})$ with $\emph{{i}}_{0}=1$, and the quaternion quantity $\mathbb{D}(d_{0},d_{1},d_{2},d_{3})$ is defined as follows: $$\mathbb{D}=d_{0}+d_{1}\emph{{i}}_{1}+d_{2}\emph{{i}}_{2}+d_{3}\emph{{i}}_{3}$$ (1) where $d_{0}$, $d_{1}$, $d_{2}$, and $d_{3}$ are all real. When we transform one form of the coordinate system into another, the physical quantity $\mathbb{D}$ will be transformed into the quaternion physical quantity $\mathbb{D}^{\prime}(d^{\prime}_{0},d^{\prime}_{1},d^{\prime}_{2},d^{\prime}_{3})$.
$$\mathbb{D}^{\prime}=\mathbb{K}^{*}\circ\mathbb{D}\circ\mathbb{K}$$ (2) where $\circ$ denotes the quaternion multiplication; $*$ indicates the conjugate of the quaternion; $\mathbb{K}$ is a quaternion, with $\mathbb{K}^{*}\circ\mathbb{K}=1$. The scalar part $d_{0}$ satisfies the following equation: $$\displaystyle d_{0}=d^{\prime}_{0}$$ (3) That is, the scalar part remains one and the same while the quaternion coordinate system is being transformed. II.2 Radius vector In the quaternion space, the coordinates are $r_{0},r_{1},r_{2}$, and $r_{3}$. The radius vector is $$\displaystyle\mathbb{R}=r_{0}+\Sigma(r_{j}\emph{{i}}_{j})$$ (4) where the scalar $r_{0}=v_{0}t$; $v_{0}$ is the speed of light, and $t$ denotes the time; $j=1,2,3$; $i=0,1,2,3$. When the coordinate system is transformed into another, we have the quaternion radius vector $\mathbb{R}^{\prime}(r^{\prime}_{0},r^{\prime}_{1},r^{\prime}_{2},r^{\prime}_{3})$, with $r^{\prime}_{0}=v^{\prime}_{0}t^{\prime}$, from Eq.(2). From Eqs.(3) and (4), we obtain $$\displaystyle r_{0}=r^{\prime}_{0}$$ (5) In some special cases, we may substitute the quaternion physical quantity $\mathbb{Z}(z_{0},z_{1},z_{2},z_{3})$ for the radius vector $\mathbb{R}(r_{0},r_{1},r_{2},r_{3})$. The former quantity is defined as $$\mathbb{Z}=\mathbb{R}\circ\mathbb{R}$$ (6) where $z_{0}=(r_{0})^{2}-(r_{1})^{2}-(r_{2})^{2}-(r_{3})^{2}$. When we transform one form of the coordinate system into another, the physical quantity $\mathbb{Z}$ is transformed into the quaternion physical quantity $\mathbb{Z}^{\prime}(z^{\prime}_{0},z^{\prime}_{1},z^{\prime}_{2},z^{\prime}_{3})$ from Eq.(2).
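The invariance (3) is a direct consequence of the cyclic property of the quaternion scalar part: for a unit quaternion $\mathbb{K}$, the scalar part of $\mathbb{K}^{*}\circ\mathbb{D}\circ\mathbb{K}$ equals that of $\mathbb{D}$. A minimal numerical sketch of Eq.(2), with the Hamilton product written out by hand (no particular library API is assumed):

```python
import math

def qmul(a, b):
    # Hamilton product of quaternions a = (a0, a1, a2, a3) and b likewise
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return (a0*b0 - a1*b1 - a2*b2 - a3*b3,
            a0*b1 + a1*b0 + a2*b3 - a3*b2,
            a0*b2 - a1*b3 + a2*b0 + a3*b1,
            a0*b3 + a1*b2 - a2*b1 + a3*b0)

def conj(a):
    # quaternion conjugate, the * operation of Eq. (2)
    return (a[0], -a[1], -a[2], -a[3])

# A unit quaternion K (so K* o K = 1), here a rotation about i_1
t = 0.7
K = (math.cos(t), math.sin(t), 0.0, 0.0)

D = (1.5, -0.2, 0.8, 2.1)          # an arbitrary quaternion quantity
Dp = qmul(qmul(conj(K), D), K)     # Eq. (2): D' = K* o D o K

print(abs(Dp[0] - D[0]) < 1e-12)   # Eq. (3): d0 = d0'  -> True
```

The vector part $(d_{1},d_{2},d_{3})$ is rotated by the transformation, but the scalar part survives unchanged, which is exactly the mechanism the following subsections exploit.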
From Eqs.(3) and (6), we obtain $$\displaystyle(r_{0})^{2}-\Sigma(r_{j})^{2}=(r^{\prime}_{0})^{2}-\Sigma(r^{\prime}_{j})^{2}$$ (7) The above clearly means that we may obtain different invariants from different definitions of physical quantities, by means of the characteristics of the quaternion coordinate transformation in the gravitational field. II.3 Invariable speed of light The quaternion velocity $\mathbb{V}(v_{0},v_{1},v_{2},v_{3})$ is $$\displaystyle\mathbb{V}=v_{0}+\Sigma(v_{j}\emph{{i}}_{j})$$ (8) When the coordinate system is transformed into another, we have the quaternion velocity $\mathbb{V}^{\prime}(v^{\prime}_{0},v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3})$ from Eq.(2). From Eq.(3) and the definition of velocity, we obtain the invariable speed of light. That is, $$\displaystyle v_{0}=v^{\prime}_{0}$$ (9) Choosing the definition combination of the quaternion velocity and radius vector, we find the invariants, Eqs.(5) and (9), and obtain the Galilean transformation: $$\displaystyle r_{0}=r^{\prime}_{0},\quad v_{0}=v^{\prime}_{0}.$$ (10) In some cases, we obtain the Lorentz transformation from the invariant equations, Eqs.(7) and (9): $$\displaystyle(r_{0})^{2}-\Sigma(r_{j})^{2}=(r^{\prime}_{0})^{2}-\Sigma(r^{\prime}_{j})^{2},\quad v_{0}=v^{\prime}_{0}.$$ (11) The above states that we can obtain many kinds of coordinate transformations from the different definition combinations of physical quantities, including the Galilean transformation and the Lorentz transformation, etc. II.4 Potential and strength In the quaternion space, the gravitational potential is $$\displaystyle\mathbb{A}=a_{0}+\Sigma(a_{j}\emph{{i}}_{j})$$ (12) and the gravitational strength $\mathbb{B}(b_{0},b_{1},b_{2},b_{3})$ is as follows: $$\displaystyle\mathbb{B}=\lozenge\circ\mathbb{A}$$ (13) where $\partial_{i}=\partial/\partial r_{i}$; $\lozenge=\Sigma(\emph{{i}}_{i}\partial_{i})$; $b_{0}=\partial_{0}a_{0}-\Sigma(\partial_{j}a_{j})$.
In the above equation, we choose the following gauge condition to simplify the subsequent calculations, $$\displaystyle b_{0}=0$$ (14) and then the gravitational strength is $\mathbb{B}=\textbf{g}/v_{0}+\textbf{b}$: $$\displaystyle\textbf{g}/v_{0}=\partial_{0}\emph{{a}}+\nabla a_{0}$$ (15) $$\displaystyle\textbf{b}=\nabla\times\emph{{a}}$$ (16) where $\nabla=\Sigma(\emph{{i}}_{j}\partial_{j})$; $\emph{{a}}=\Sigma(a_{j}\emph{{i}}_{j})$; $\emph{{b}}=\Sigma(b_{j}\emph{{i}}_{j})$; in Newtonian gravity, $\textbf{a}=0$ and $\textbf{b}=0$. When the coordinate system is transformed into another, we have the quaternion potential $\mathbb{A}^{\prime}(a^{\prime}_{0},a^{\prime}_{1},a^{\prime}_{2},a^{\prime}_{3})$ and strength $\mathbb{B}^{\prime}(b^{\prime}_{0},b^{\prime}_{1},b^{\prime}_{2},b^{\prime}_{3})$, respectively, from Eq.(2). From Eq.(3) and the definition of potential, we obtain the invariable scalar potential, $$\displaystyle a_{0}=a^{\prime}_{0}$$ (17) and, from the definition of strength, the gauge $$\displaystyle b_{0}=b^{\prime}_{0}$$ (18) By means of the quaternion characteristics, the above means that the scalar potential of the gravitational field is invariant under the coordinate transformation in Eq.(2), and so is the scalar part of the gravitational strength. II.5 Conservation of mass There exist two kinds of linear momenta: one of the particle, the other of the gravitational field. For the sake of describing the various linear momenta by one single definition, the source and the linear momentum will both be extended in the gravitational field. The gravitational source $\mathbb{S}$ covers the linear momentum density $\mathbb{S}_{g}$ and an extra part $\mathbb{B}^{*}\circ\mathbb{B}/(v_{0}\mu_{g}^{g})$.
$$\displaystyle\mu\mathbb{S}=-(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{B}=\mu_{g}^{g}\mathbb{S}_{g}-\mathbb{B}^{*}\circ\mathbb{B}/v_{0}$$ (19) where $\mathbb{B}^{*}\circ\mathbb{B}/(2\mu_{g}^{g})$ is the energy density of the gravitational field, and is similar to that of the electromagnetic field; $\mu$ and $\mu_{g}^{g}$ are constants; $\mu_{g}^{g}=4\pi G/v_{0}^{2}$, and $G$ is the gravitational constant. The linear momentum density $\mathbb{P}(p_{0},p_{1},p_{2},p_{3})$ is the extension of $\mathbb{S}_{g}=m\mathbb{V}$: $$\displaystyle\mathbb{P}=\mu\mathbb{S}/\mu_{g}^{g}$$ (20) where $p_{0}=\widehat{m}v_{0}$, $p_{j}=mv_{j}\emph{{i}}_{j}$; $\widehat{m}=m+\triangle m$; $m$ is the inertial mass density, and $\widehat{m}$ is the gravitational mass density; $\triangle m=-\mathbb{B}^{*}\circ\mathbb{B}/(v_{0}^{2}\mu_{g}^{g})$. When the quaternion coordinate system rotates, we have the quaternion linear momentum density $\mathbb{P}^{\prime}(p^{\prime}_{0},p^{\prime}_{1},p^{\prime}_{2},p^{\prime}_{3})$ from Eqs.(3) and (20): $$\displaystyle\widehat{m}v_{0}=\widehat{m}^{\prime}v^{\prime}_{0}$$ (21) By Eqs.(9) and (21), the gravitational mass density $\widehat{m}$ remains unchanged in the gravitational field: $$\displaystyle\widehat{m}=\widehat{m}^{\prime}$$ (22) The above is the conservation of mass. When $b_{j}=0$ and $\triangle m=0$, the above can be reduced as follows: $$\displaystyle m=m^{\prime}$$ (23) The above means that if we emphasize the definitions of the velocity and linear momentum, the gravitational mass density will remain the same under the coordinate transformation in Eq.(2) in the quaternion space. II.6 Mass continuity equation We choose the following definition of applied force to cover the various forces in the gravitational field. The applied force density $\mathbb{F}(f_{0},f_{1},f_{2},f_{3})$ is defined from the linear momentum density $\mathbb{P}$.
$$\displaystyle\mathbb{F}=v_{0}(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{P}$$ (24)
where the applied force density includes the inertial force density and the gravitational force density, etc., while the scalar part is $f_{0}=\partial p_{0}/\partial t+v_{0}\Sigma(\partial p_{j}/\partial r_{j})+\Sigma(b_{j}p_{j})$. When the coordinate system rotates, Eqs.(2), (3) and (24) give the quaternion applied force density $\mathbb{F}^{\prime}(f^{\prime}_{0},f^{\prime}_{1},f^{\prime}_{2},f^{\prime}_{3})$, and then
$$\displaystyle f_{0}=f^{\prime}_{0}$$ (25)
In the equilibrium state, $f^{\prime}_{0}=0$, and we obtain the mass continuity equation from Eqs.(9) and (25),
$$\displaystyle\partial\widehat{m}/\partial t+\Sigma(\partial p_{j}/\partial r_{j})+\Sigma(b_{j}p_{j})/v_{0}=0$$ (26)
Further, if the strength $b_{j}=0$, this reduces to the differential form of the mass continuity equation,
$$\displaystyle\partial m/\partial t+\Sigma(\partial p_{j}/\partial r_{j})=0$$ (27)
This states that the gravitational strength $\mathbb{B}$ has only a small influence on the mass continuity equation, since the term $\Sigma(b_{j}p_{j})/v_{0}$ and the extra mass density $\triangle m$ are both very tiny. The mass continuity equation is then invariant under the quaternion transformation of Eq.(2), provided we choose this combination of definitions of the velocity and the applied force in the gravitational field.

II.7 Angular momentum

In order to incorporate the various energies within a single definition, both the angular momentum and the energy are extended to the gravitational field. The angular momentum density $\mathbb{L}$ is defined from the linear momentum density $\mathbb{P}$, the radius vector $\mathbb{R}$, and the quantity $\mathbb{X}$.
$$\displaystyle\mathbb{L}=(\mathbb{R}+k_{rx}\mathbb{X})\circ\mathbb{P}$$ (28)
where $\mathbb{X}=\Sigma(x_{i}\emph{{i}}_{i})$ is a quaternion and $k_{rx}$ is a coefficient; the angular momentum density $\mathbb{L}$ includes the orbital angular momentum density and the spin angular momentum density in the gravitational field. The angular momentum density is rewritten as
$$\displaystyle\mathbb{L}=l_{0}+\Sigma(l_{j}\emph{{i}}_{j})$$ (29)
where $l_{0}=(r_{0}+k_{rx}x_{0})p_{0}-\Sigma\left\{(r_{j}+k_{rx}x_{j})p_{j}\right\}$. The scalar part $l_{0}$ is presumed to be the density of spin angular momentum [uhlenbeck] in the gravitational field. As a part of $\mathbb{L}$, $l_{0}$ is one kind of angular momentum, namely the scalar along the $\emph{{i}}_{0}$ direction. When $\mathbb{X}$ is neglected and the gravity is weak, the scalar $l_{0}$ reduces to $l_{0}=r_{0}p_{0}-\Sigma(r_{j}p_{j})$. When the coordinate system rotates, Eq.(2) gives the quaternion angular momentum density $\mathbb{L}^{\prime}(l^{\prime}_{0},l^{\prime}_{1},l^{\prime}_{2},l^{\prime}_{3})$. Under the coordinate transformation, the spin angular momentum density remains unchanged by Eq.(3),
$$\displaystyle l_{0}=l^{\prime}_{0}$$ (30)
This means that although the gravitational strength has an influence on the orbital angular momentum and the spin angular momentum, etc., with this definition of the angular momentum the spin angular momentum is invariable in the gravitational field under the quaternion transformation of Eq.(2).

II.8 Conservation of energy

We choose the quaternion definition of the total energy so as to include the various energies in the gravitational field. In the quaternion space, the total energy density $\mathbb{W}$ is defined from the angular momentum density $\mathbb{L}$,
$$\displaystyle\mathbb{W}=v_{0}(\mathbb{B}/v_{0}+\lozenge)\circ\mathbb{L}$$ (31)
where the total energy includes the potential energy, the kinetic energy, the torque, and the work, etc.
The total energy density is
$$\displaystyle\mathbb{W}=w_{0}+\Sigma(w_{j}\emph{{i}}_{j})$$ (32)
where the scalar $w_{0}=\emph{{b}}\cdot\emph{{l}}+v_{0}\partial_{0}l_{0}+v_{0}\nabla\cdot\emph{{l}}$ is the energy density; $\emph{{w}}=\Sigma(w_{j}\emph{{i}}_{j})$; $\emph{{l}}=\Sigma(l_{j}\emph{{i}}_{j})$. When the coordinate system rotates, Eq.(2) gives the quaternion total energy density $\mathbb{W}^{\prime}(w^{\prime}_{0},w^{\prime}_{1},w^{\prime}_{2},w^{\prime}_{3})$, and under the coordinate transformation the conservation of energy follows from the above and Eq.(3),
$$\displaystyle w_{0}=w^{\prime}_{0}$$ (33)
In some special cases $w^{\prime}_{0}=0$, and we obtain the spin continuity equation from Eq.(33),
$$\displaystyle\partial l_{0}/\partial r_{0}+\nabla\cdot\emph{{l}}+\emph{{b}}\cdot\emph{{l}}/v_{0}=0$$ (34)
If the last term is neglected, this reduces to
$$\displaystyle\partial l_{0}/\partial r_{0}+\nabla\cdot\emph{{l}}=0$$ (35)
Further, if the divergence term also vanishes, we have
$$\displaystyle\partial l_{0}/\partial t=0$$ (36)
where, when the time $t$ is the only independent variable, $\partial l_{0}/\partial t$ becomes $dl_{0}/dt$. This means that the energy density $w_{0}$ is invariant under the quaternion transformation of Eq.(2) in the gravitational field. The spin continuity equation is also invariable in the gravitational field, although the gravitational strength has an influence on the spin angular momentum density.

II.9 Energy continuity equation

The external power can be described by a quaternion in the gravitational field. The external power density $\mathbb{N}$ is defined from the total energy density $\mathbb{W}$,
$$\displaystyle\mathbb{N}=v_{0}(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{W}$$ (37)
where the external power density $\mathbb{N}$ includes the power density, etc., in the gravitational field.
The external power density is also
$$\displaystyle\mathbb{N}=n_{0}+\Sigma(n_{j}\emph{{i}}_{j})$$ (38)
where the scalar $n_{0}=\emph{{b}}^{*}\cdot\emph{{w}}+v_{0}\partial_{0}w_{0}+v_{0}\nabla^{*}\cdot\emph{{w}}$ is the power density. When the quaternion coordinate system rotates, Eq.(2) gives the quaternion external power density $\mathbb{N}^{\prime}(n^{\prime}_{0},n^{\prime}_{1},n^{\prime}_{2},n^{\prime}_{3})$. Under the coordinate transformation, the power density remains unchanged by Eq.(3),
$$\displaystyle n_{0}=n^{\prime}_{0}$$ (39)
In the case $n^{\prime}_{0}=0$, we obtain the energy continuity equation,
$$\displaystyle\partial w_{0}/\partial r_{0}+\nabla^{*}\cdot\emph{{w}}+\emph{{b}}^{*}\cdot\emph{{w}}/v_{0}=0$$ (40)
If the last term is neglected, this reduces to
$$\displaystyle\partial w_{0}/\partial r_{0}+\nabla^{*}\cdot\emph{{w}}=0$$ (41)
Further, if the divergence term also vanishes, we have
$$\displaystyle\partial w_{0}/\partial t=0$$ (42)
where, when the time $t$ is the only independent variable, $\partial w_{0}/\partial t$ becomes $dw_{0}/dt$. This means that the gravitational strength $\emph{{b}}$ and the torque density $\emph{{w}}$ have an influence on the energy continuity equation; meanwhile, the power density $n_{0}$ is invariant under the quaternion transformation in the gravitational field. The energy continuity equation is also invariant when we choose this combination of definitions of the velocity, the total energy, and the external power.

In the quaternion spaces, the deduced conclusions about the conservation laws and their relationships depend on the chosen combinations of definitions of physical quantities; this leads us to understand the conservation laws more deeply. By means of the combined definitions of the radius vector and the velocity, we obtain the conclusions of the invariable speed of light, the Galilean transformation, and the Lorentz transformation, etc., in the gravitational field.
Likewise, choosing the combined definitions of the linear momentum, the operator $\lozenge$, and the invariable speed of light, we have the results of the conservation of mass and the mass continuity equation. Moreover, with the combined definitions of the angular momentum, the invariable speed of light, and the operator $\lozenge$, we obtain the inferences of the invariable spin density, the conservation of energy, and the energy continuity equation, etc.

III Mechanical invariants in electromagnetic field

In the electromagnetic field, some scalar invariants will be extended from the gravitational field. They are called the mechanical invariants, and include the conservation of mass, the conservation of spin, the conservation of power, and the conservation of energy [young]. In the electromagnetic field there still exist some puzzles about the conservation laws. Firstly, those conservation laws are limited to the case of weak electromagnetic strength; are they still effective when the electromagnetic strength cannot be neglected? Secondly, the conservation of mass is accompanied by the mass continuity equation; are there similar continuity equations for the other conservation laws? Thirdly, the energy can be converted into mass, and vice versa; are there other relations among these conservation laws? The octonion (Cayley, 1845) can be used to describe the properties of the electromagnetic field and the gravitational field, although these two kinds of fields are quite different. By means of the scalar invariant of the octonion, we find that the strengths of the electromagnetic field and the gravitational field [newton] have an influence on the conservation laws.
With the character of the octonion, we find that the mass density and the energy density are invariants under the octonion transformation in the case of coexisting electromagnetic and gravitational fields, although a few definitions of physical quantities have to be extended in the gravitational field and the electromagnetic field.

III.1 Octonion transformation

In the quaternion space, the basis vector for the gravitational field is $\mathbb{E}_{g}$ = ($1$, $\emph{{i}}_{1}$, $\emph{{i}}_{2}$, $\emph{{i}}_{3}$), and that for the electromagnetic field is $\mathbb{E}_{e}$ = ($\emph{{I}}_{0}$, $\emph{{I}}_{1}$, $\emph{{I}}_{2}$, $\emph{{I}}_{3}$). The $\mathbb{E}_{e}$ is independent of the $\mathbb{E}_{g}$, with $\mathbb{E}_{e}$ = $\mathbb{E}_{g}\circ\emph{{I}}_{0}$. The basis vectors $\mathbb{E}_{g}$ and $\mathbb{E}_{e}$ can be combined to form the basis vector $\mathbb{E}$ of the octonion space.
$$\displaystyle\mathbb{E}=(1,\emph{{i}}_{1},\emph{{i}}_{2},\emph{{i}}_{3},\emph{{I}}_{0},\emph{{I}}_{1},\emph{{I}}_{2},\emph{{I}}_{3})$$ (43)
The octonion quantity $\mathbb{D}(d_{0},d_{1},d_{2},d_{3},D_{0},D_{1},D_{2},D_{3})$ is defined as
$$\displaystyle\mathbb{D}=d_{0}+\Sigma(d_{j}\emph{{i}}_{j})+\Sigma(D_{i}\emph{{I}}_{i})$$ (44)
where $d_{i}$ and $D_{i}$ are all real; $i=0,1,2,3$; $j=1,2,3$. When the coordinate system is transformed into another one, the physical quantity $\mathbb{D}$ becomes the octonion physical quantity $\mathbb{D}^{\prime}(d^{\prime}_{0},d^{\prime}_{1},d^{\prime}_{2},d^{\prime}_{3},D^{\prime}_{0},D^{\prime}_{1},D^{\prime}_{2},D^{\prime}_{3})$,
$$\mathbb{D}^{\prime}=\mathbb{K}^{*}\circ\mathbb{D}\circ\mathbb{K}$$ (45)
where $\mathbb{K}$ is an octonion with $\mathbb{K}^{*}\circ\mathbb{K}=1$; $*$ denotes the octonion conjugate and $\circ$ the octonion multiplication.
Therefore
$$\displaystyle d_{0}=d^{\prime}_{0}$$ (46)
that is, the scalar part stays one and the same while the octonion coordinates are being transformed. Some scalar invariants of the electromagnetic field will be obtained from this characteristic of octonions.

III.2 Radius vector

The radius vector is $\mathbb{R}_{g}$ = ($r_{0}$, $r_{1}$, $r_{2}$, $r_{3}$) in the quaternion space for the gravitational field. For the electromagnetic field, the radius vector is $\mathbb{R}_{e}$ = ($R_{0}$, $R_{1}$, $R_{2}$, $R_{3}$). Their combination is $\mathbb{R}(r_{0},r_{1},r_{2},r_{3},R_{0},R_{1},R_{2},R_{3})$.
$$\displaystyle\mathbb{R}=r_{0}+\Sigma(r_{j}\emph{{i}}_{j})+\Sigma(R_{i}\emph{{I}}_{i})$$ (47)
where $r_{0}=v_{0}t$ and $R_{0}=V_{0}T$; $v_{0}$ is the speed of light and $t$ denotes the time, while $V_{0}$ is the speed of light-like and $T$ is a time-like quantity. When the coordinates are rotated, Eq.(45) gives the octonion radius vector $\mathbb{R}^{\prime}(r^{\prime}_{0},r^{\prime}_{1},r^{\prime}_{2},r^{\prime}_{3},R^{\prime}_{0},R^{\prime}_{1},R^{\prime}_{2},R^{\prime}_{3})$, and then Eqs.(46) and (47) yield
$$\displaystyle r_{0}=r^{\prime}_{0}$$ (48)
Sometimes the radius vector $\mathbb{R}$ can be replaced by the physical quantity $\mathbb{Z}(z_{0},z_{1},z_{2},z_{3},Z_{0},Z_{1},Z_{2},Z_{3})$, with
$$\displaystyle\mathbb{Z}=\mathbb{R}\circ\mathbb{R}=z_{0}+\Sigma(z_{j}\emph{{i}}_{j})+\Sigma(Z_{i}\emph{{I}}_{i})$$ (49)
By Eqs.(46) and (49), we have
$$\displaystyle(r_{0})^{2}-\Sigma(r_{j})^{2}-\Sigma(R_{i})^{2}=(r^{\prime}_{0})^{2}-\Sigma(r^{\prime}_{j})^{2}-\Sigma(R^{\prime}_{i})^{2}$$ (50)
This represents that the spacetime interval $z_{0}$ remains unchanged when the coordinate system rotates in the octonion space.

III.3 Speed of light

In the quaternion space for the gravitational field, the velocity is $\mathbb{V}_{g}$ = ($v_{0}$, $v_{1}$, $v_{2}$, $v_{3}$).
For the electromagnetic field, the velocity is $\mathbb{V}_{e}$ = ($V_{0}$, $V_{1}$, $V_{2}$, $V_{3}$). They can be combined into the octonion velocity $\mathbb{V}(v_{0},v_{1},v_{2},v_{3},V_{0},V_{1},V_{2},V_{3})$.
$$\displaystyle\mathbb{V}=v_{0}[\lozenge\circ\mathbb{R}-\nabla\cdot\left\{\Sigma(r_{j}\emph{{i}}_{j})\right\}]=v_{0}+\Sigma(v_{j}\emph{{i}}_{j})+\Sigma(V_{i}\emph{{I}}_{i})$$ (51)
When the coordinates are rotated, Eq.(45) gives the octonion velocity $\mathbb{V}^{\prime}(v^{\prime}_{0},v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3},V^{\prime}_{0},V^{\prime}_{1},V^{\prime}_{2},V^{\prime}_{3})$, and then Eq.(46) yields the invariable speed of light,
$$\displaystyle v_{0}=v^{\prime}_{0}$$ (52)
and the Galilean transformation as follows.
$$\displaystyle t_{0}=t^{\prime}_{0}$$ (53)
$$\displaystyle\Sigma(r_{j})^{2}+\Sigma(R_{i})^{2}=\Sigma(r^{\prime}_{j})^{2}+\Sigma(R^{\prime}_{i})^{2}$$ (54)
This means that if we emphasize especially the importance of the radius vector Eq.(47) and the velocity Eq.(51), we obtain the Galilean transformation from Eqs.(48) and (52). In the same way, if we choose the definitions of the radius vector Eq.(49) and the velocity Eq.(51), we obtain the Lorentz transformation [lorentz] from Eqs.(50) and (52). In some special cases the $\Sigma(R_{i}\emph{{I}}_{i})$ does not take part in the octonion coordinate transformation, and we have the result $\Sigma(R_{i})^{2}=\Sigma(R^{\prime}_{i})^{2}$ in Eqs.(50) and (54).

III.4 Potential and strength

The gravitational potential is $\mathbb{A}_{g}=(a_{0},a_{1},a_{2},a_{3})$, and the electromagnetic potential is $\mathbb{A}_{e}=(A_{0},A_{1},A_{2},A_{3})$. Together they constitute the potential $\mathbb{A}$,
$$\displaystyle\mathbb{A}=v_{0}\lozenge\circ\mathbb{X}=\mathbb{A}_{g}+k_{eg}\mathbb{A}_{e}$$ (55)
where $k_{eg}$ is a coefficient.
When the coordinates are rotated, Eq.(45) gives the octonion potential $\mathbb{A}^{\prime}(a^{\prime}_{0},a^{\prime}_{1},a^{\prime}_{2},a^{\prime}_{3},A^{\prime}_{0},A^{\prime}_{1},A^{\prime}_{2},A^{\prime}_{3})$. Therefore, by Eq.(46) the scalar potential of the gravitational field is invariable,
$$\displaystyle a_{0}=a^{\prime}_{0}$$ (56)
The octonion strength $\mathbb{B}(b_{0},b_{1},b_{2},b_{3},B_{0},B_{1},B_{2},B_{3})$ consists of the gravitational strength $\mathbb{B}_{g}$ and the electromagnetic strength $\mathbb{B}_{e}$.
$$\displaystyle\mathbb{B}=\lozenge\circ\mathbb{A}=\mathbb{B}_{g}+k_{eg}\mathbb{B}_{e}$$ (57)
where the operator $\lozenge=\partial_{0}+\emph{{i}}_{1}\partial_{1}+\emph{{i}}_{2}\partial_{2}+\emph{{i}}_{3}\partial_{3}$. In the above equation, we choose the following gauge conditions to simplify the succeeding calculations.
$$\displaystyle b_{0}=\partial_{0}a_{0}+\nabla\cdot\textbf{a}=0~{},~{}B_{0}=\partial_{0}A_{0}+\nabla\cdot\textbf{A}=0~{}.$$
where $\textbf{a}=\Sigma(a_{j}\emph{{i}}_{j})$; $\textbf{A}=\Sigma(A_{j}\emph{{i}}_{j})$; $\nabla=\Sigma(\emph{{i}}_{j}\partial_{j})$. The gravitational strength $\mathbb{B}_{g}$ in Eq.(57) includes two components, $\textbf{g}/c=\partial_{0}\emph{{a}}+\nabla a_{0}$ and $\textbf{b}=\nabla\times\emph{{a}}$, while the electromagnetic strength $\mathbb{B}_{e}$ involves two parts, $\textbf{E}/c=(\partial_{0}\emph{{A}}+\nabla A_{0})\circ\emph{{I}}_{0}$ and $\textbf{B}=-(\nabla\times\emph{{A}})\circ\emph{{I}}_{0}$. When the coordinates are rotated, Eq.(45) gives the octonion strength $\mathbb{B}^{\prime}(b^{\prime}_{0},b^{\prime}_{1},b^{\prime}_{2},b^{\prime}_{3},B^{\prime}_{0},B^{\prime}_{1},B^{\prime}_{2},B^{\prime}_{3})$, and then by Eq.(46) the scalar strength of the gravitational field is invariable,
$$\displaystyle b_{0}=b^{\prime}_{0}$$ (58)
This means that the scalar potential and the scalar strength of the gravitational field are invariants in the octonion space.
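As a quick plausibility check of the decomposition in Eq.(57), the two vector parts $\textbf{g}/c=\partial_{0}\emph{{a}}+\nabla a_{0}$ and $\textbf{b}=\nabla\times\emph{{a}}$ can be evaluated numerically. The following is a minimal sketch, not part of the paper: the static test potential $\emph{{a}}(\textbf{r})=(-y,x,0)$ is an arbitrary choice, picked because its curl is the constant field $(0,0,2)$ and its divergence vanishes, so the gauge term $\nabla\cdot\textbf{a}$ of the condition $b_{0}=0$ drops out.

```python
# Minimal numerical sketch (not from the paper): check b = curl(a) for a
# hypothetical test potential a(r) = (-y, x, 0), whose analytic curl is (0, 0, 2).
# Central finite differences stand in for the nabla operator of Eq. (57).

def a(r):
    x, y, z = r
    return (-y, x, 0.0)

def curl(f, r, h=1e-5):
    """Central-difference curl of a vector field f at the point r."""
    def d(comp, axis):
        rp = list(r); rm = list(r)
        rp[axis] += h; rm[axis] -= h
        return (f(rp)[comp] - f(rm)[comp]) / (2 * h)
    return (d(2, 1) - d(1, 2),   # (curl a)_x = da_z/dy - da_y/dz
            d(0, 2) - d(2, 0),   # (curl a)_y = da_x/dz - da_z/dx
            d(1, 0) - d(0, 1))   # (curl a)_z = da_y/dx - da_x/dy

def div(f, r, h=1e-5):
    """Central-difference divergence, i.e. the gauge term of b0."""
    total = 0.0
    for axis in range(3):
        rp = list(r); rm = list(r)
        rp[axis] += h; rm[axis] -= h
        total += (f(rp)[axis] - f(rm)[axis]) / (2 * h)
    return total

b = curl(a, (0.3, -0.7, 1.2))
print(b)                          # close to (0, 0, 2)
print(div(a, (0.3, -0.7, 1.2)))   # close to 0, so the gauge term vanishes here
```

Because the test potential is linear in the coordinates, the central differences reproduce the analytic curl and divergence to machine precision; for a general potential the step $h$ controls the discretization error.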
III.5 Conservation of mass

The linear momentum density $\mathbb{S}_{g}=m\mathbb{V}_{g}$ is one part of the source of the gravitational field, and the electric current density $\mathbb{S}_{e}=q\mathbb{V}_{g}\circ\emph{{I}}_{0}$ is that of the electromagnetic field. They combine to form the source $\mathbb{S}$,
$$\displaystyle\mu\mathbb{S}=-(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{B}=\mu_{g}^{g}\mathbb{S}_{g}+k_{eg}\mu_{e}^{g}\mathbb{S}_{e}-\mathbb{B}^{*}\circ\mathbb{B}/v_{0}$$ (59)
where $k_{eg}^{2}=\mu_{g}^{g}/\mu_{e}^{g}$; $q$ is the electric charge density; $\mu_{e}^{g}$ is a constant; $m$ is the inertial mass density. In some cases the electric charge is combined with the mass to become the electron or proton, etc., and we have the conditions $R_{i}\emph{{I}}_{i}=r_{i}\emph{{i}}_{i}\circ\emph{{I}}_{0}$ and $V_{i}\emph{{I}}_{i}=v_{i}\emph{{i}}_{i}\circ\emph{{I}}_{0}$. The $\mathbb{B}^{*}\circ\mathbb{B}/(2\mu_{g}^{g})$ is the energy density, and includes that of the electromagnetic field,
$$\displaystyle\mathbb{B}^{*}\circ\mathbb{B}/\mu_{g}^{g}=\mathbb{B}_{g}^{*}\circ\mathbb{B}_{g}/\mu_{g}^{g}+\mathbb{B}_{e}^{*}\circ\mathbb{B}_{e}/\mu_{e}^{g}$$ (60)
The octonion linear momentum density is
$$\displaystyle\mathbb{P}=\mu\mathbb{S}/\mu_{g}^{g}=\widehat{m}v_{0}+\Sigma(mv_{j}\emph{{i}}_{j})+\Sigma(MV_{i}\emph{{i}}_{i}\circ\emph{{I}}_{0})$$ (61)
where $\widehat{m}=m-(\mathbb{B}^{*}\circ\mathbb{B}/\mu_{g}^{g})/v_{0}^{2}$; $M=k_{eg}\mu_{e}^{g}q/\mu_{g}^{g}$; $\emph{{i}}_{0}=1$. This means that the gravitational mass density $\widehat{m}$ changes with the strength of the electromagnetic field or the gravitational field. By Eq.(45), when the octonion coordinate system is rotated we have the linear momentum density $\mathbb{P}^{\prime}(\widehat{m}^{\prime}v^{\prime}_{0},m^{\prime}v^{\prime}_{1},m^{\prime}v^{\prime}_{2},m^{\prime}v^{\prime}_{3},M^{\prime}V^{\prime}_{0},M^{\prime}V^{\prime}_{1},M^{\prime}V^{\prime}_{2},M^{\prime}V^{\prime}_{3})$.
$$\displaystyle\widehat{m}v_{0}=\widehat{m}^{\prime}v^{\prime}_{0}$$ (62)
From Eqs.(52) and (62), we obtain the conservation of mass,
$$\displaystyle\widehat{m}=\widehat{m}^{\prime}$$ (63)
This means that if we emphasize the definition of the velocity and Eq.(61), either the inertial mass density or the gravitational mass density remains the same under the octonion transformation in the electromagnetic field and the gravitational field.

III.6 Mass continuity equation

In the octonion space, the applied force density $\mathbb{F}$ is defined from the linear momentum density $\mathbb{P}$ in Eq.(61),
$$\displaystyle\mathbb{F}=v_{0}(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{P}$$ (64)
where the applied force includes the gravity, the inertial force, the Lorentz force, and the interacting force between the magnetic strength and the magnetic moment, etc. The applied force density $\mathbb{F}$ is rewritten as
$$\displaystyle\mathbb{F}=f_{0}+\Sigma(f_{j}\emph{{i}}_{j})+\Sigma(F_{i}\emph{{I}}_{i})$$ (65)
where $f_{0}=\partial p_{0}/\partial t+v_{0}\Sigma(\partial p_{j}/\partial r_{j})+\Sigma(b_{j}p_{j}+B_{j}P_{j})$; $p_{0}=\widehat{m}v_{0}$, $p_{j}=mv_{j}$; $P_{i}=MV_{i}$; $\triangle m=-(\mathbb{B}^{*}\circ\mathbb{B}/\mu_{g}^{g})/v_{0}^{2}$. When the coordinate system rotates, we have the octonion applied force density $\mathbb{F}^{\prime}(f^{\prime}_{0},f^{\prime}_{1},f^{\prime}_{2},f^{\prime}_{3},F^{\prime}_{0},F^{\prime}_{1},F^{\prime}_{2},F^{\prime}_{3})$, and then, by Eq.(46),
$$\displaystyle f_{0}=f^{\prime}_{0}$$ (66)
When the right-hand side vanishes, we have the mass continuity equation in the case of coexisting gravitational and electromagnetic fields,
$$\displaystyle\partial\widehat{m}/\partial t+\Sigma(\partial p_{j}/\partial r_{j})+\Sigma(b_{j}p_{j}+B_{j}P_{j})/v_{0}=0$$ (67)
Further, if the strength vanishes, $b_{j}=B_{j}=0$, this reduces to the following equation.
$$\displaystyle\partial m/\partial t+\Sigma(\partial p_{j}/\partial r_{j})=0$$ (68)
This states that the gravitational strength and the electromagnetic strength have an influence on the mass continuity equation, although the term $\Sigma(b_{j}p_{j}+B_{j}P_{j})/v_{0}$ and the extra mass density $\triangle m$ are both usually very tiny when the fields are weak. When we emphasize the definitions of the applied force and the velocity in the gravitational and electromagnetic fields, the mass continuity equation is an invariant equation under the octonion transformation.

III.7 Spin angular momentum

In the octonion space, the angular momentum density is defined from Eqs.(47) and (61),
$$\displaystyle\mathbb{L}=(\mathbb{R}+k_{rx}\mathbb{X})\circ\mathbb{P}$$ (69)
where $\mathbb{X}=\Sigma(x_{i}\emph{{i}}_{i})+\Sigma(X_{i}\emph{{I}}_{i})$; the angular momentum density includes the orbital angular momentum density and the spin angular momentum density in the gravitational and electromagnetic fields; $k_{rx}=1$ is the coefficient. The angular momentum density is rewritten as
$$\displaystyle\mathbb{L}=l_{0}+\Sigma(l_{j}\emph{{i}}_{j})+\Sigma(L_{i}\emph{{I}}_{i})$$ (70)
where the scalar part of the angular momentum density is
$$\displaystyle l_{0}=(r_{0}+k_{rx}x_{0})p_{0}-\Sigma\left\{(r_{j}+k_{rx}x_{j})p_{j}\right\}-\Sigma\left\{(R_{i}+k_{rx}X_{i})P_{i}\right\}$$ (71)
The $l_{0}$ is considered as the spin angular momentum density in the gravitational and electromagnetic fields. When the coordinate system rotates, we have the octonion angular momentum density $\mathbb{L}^{\prime}=\Sigma(l^{\prime}_{i}\emph{{i}}^{\prime}_{i}+L^{\prime}_{i}\emph{{I}}^{\prime}_{i})$. Under the octonion coordinate transformation, we have the conservation of spin from Eq.(46),
$$\displaystyle l_{0}=l^{\prime}_{0}$$ (72)
This means that the spin angular momentum density $l_{0}$ is invariable under the octonion transformation in the case of coexisting gravitational and electromagnetic fields.
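The invariance statements of this section all rest on the single algebraic fact of Eqs.(45) and (46): the scalar part of $\mathbb{K}^{*}\circ\mathbb{D}\circ\mathbb{K}$ equals that of $\mathbb{D}$ whenever $\mathbb{K}^{*}\circ\mathbb{K}=1$. This can be checked numerically. The sketch below is not from the paper; it represents an octonion as a pair of quaternions and uses one standard Cayley-Dickson multiplication convention, $(a,b)\circ(c,d)=(ac-d^{*}b,\,da+bc^{*})$, which is an assumption about the multiplication table intended in Eq.(45).

```python
# Minimal sketch (not from the paper): the scalar part of K* o D o K equals
# that of D when K* o K = 1 (Eqs. 45-46).  Octonions are built from quaternion
# pairs by the Cayley-Dickson rule (a, b) o (c, d) = (a c - d* b, d a + b c*).
import random

def qmul(p, q):
    """Hamilton product of two quaternions stored as 4-tuples."""
    a, b, c, d = p
    e, f, g, h = q
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def qconj(q):
    return (q[0], -q[1], -q[2], -q[3])

def omul(x, y):
    """Cayley-Dickson product of two octonions stored as quaternion pairs."""
    a, b = x
    c, d = y
    left  = tuple(s - t for s, t in zip(qmul(a, c), qmul(qconj(d), b)))
    right = tuple(s + t for s, t in zip(qmul(d, a), qmul(b, qconj(c))))
    return (left, right)

def oconj(x):
    a, b = x
    return (qconj(a), tuple(-t for t in b))

def onorm(x):
    return sum(t * t for q in x for t in q) ** 0.5

random.seed(2)
D = (tuple(random.uniform(-1, 1) for _ in range(4)),
     tuple(random.uniform(-1, 1) for _ in range(4)))
K = (tuple(random.uniform(-1, 1) for _ in range(4)),
     tuple(random.uniform(-1, 1) for _ in range(4)))
n = onorm(K)
K = tuple(tuple(t / n for t in q) for q in K)   # now K* o K = 1

# Octonions are non-associative, so a bracketing must be fixed; the scalar
# part of the result does not depend on the choice.
Dp = omul(oconj(K), omul(D, K))
print(D[0][0], Dp[0][0])   # the two scalar parts agree
```

The other seven components of $\mathbb{D}$ do change under the transformation; only the scalar part survives, which is exactly what the deductions of Eqs.(62), (72), (75) and (81) use.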
III.8 Conservation of energy

The total energy density $\mathbb{W}$ is defined from the angular momentum density $\mathbb{L}$ in Eq.(69),
$$\displaystyle\mathbb{W}=v_{0}(\mathbb{B}/v_{0}+\lozenge)\circ\mathbb{L}$$ (73)
where the energy includes the electric potential energy, the gravitational potential energy, the interacting energy between the magnetic moment and the magnetic strength, the torque between the electric moment and the electric strength, and the work, etc., in the gravitational field and the electromagnetic field. The above can be rewritten as
$$\displaystyle\mathbb{W}=w_{0}+\Sigma(w_{j}\emph{{i}}_{j})+\Sigma(W_{i}\emph{{I}}_{i})$$ (74)
where the scalar $w_{0}=v_{0}\partial_{0}l_{0}+(v_{0}\nabla+\emph{{h}})\cdot\emph{{j}}+\emph{{H}}\cdot\emph{{J}}$; $\emph{{h}}=\Sigma(b_{j}\emph{{i}}_{j})$; $\emph{{H}}=\Sigma(B_{j}\emph{{I}}_{j})$; $\emph{{j}}=\Sigma(l_{j}\emph{{i}}_{j})$; $\emph{{J}}=\Sigma(L_{j}\emph{{I}}_{j})$. When the coordinate system rotates, we have the octonion energy density $\mathbb{W}^{\prime}=\Sigma(w^{\prime}_{i}\emph{{i}}^{\prime}_{i}+W^{\prime}_{i}\emph{{I}}^{\prime}_{i})$. Under the octonion transformation, the scalar part of the total energy density is the energy density, and we have the conservation of energy,
$$\displaystyle w_{0}=w^{\prime}_{0}$$ (75)
In some special cases the right-hand side is equal to zero, and we obtain the spin continuity equation,
$$\displaystyle\partial l_{0}/\partial r_{0}+\nabla\cdot\emph{{j}}+(\emph{{h}}\cdot\emph{{j}}+\emph{{H}}\cdot\emph{{J}})/v_{0}=0$$ (76)
If the last term is neglected, this reduces to
$$\displaystyle\partial l_{0}/\partial r_{0}+\nabla\cdot\emph{{j}}=0$$ (77)
Further, if the divergence term also vanishes, we have
$$\displaystyle\partial l_{0}/\partial t=0$$ (78)
where, when the time $t$ is the only independent variable, $\partial l_{0}/\partial t$ becomes $dl_{0}/dt$.
This means that the energy density $w_{0}$ is invariable under the octonion transformation in the case of coexisting gravitational and electromagnetic fields.

III.9 Energy continuity equation

In the octonion space, the external power density $\mathbb{N}$ is defined from the total energy density in Eq.(73),
$$\displaystyle\mathbb{N}=v_{0}(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{W}$$ (79)
where the external power density $\mathbb{N}$ includes the power density in the gravitational and electromagnetic fields. The external power density can be rewritten as
$$\displaystyle\mathbb{N}=n_{0}+\Sigma(n_{j}\emph{{i}}_{j})+\Sigma(N_{i}\emph{{I}}_{i})$$ (80)
where the scalar $n_{0}=v_{0}\partial_{0}w_{0}+(v_{0}\nabla+\emph{{h}})^{*}\cdot\emph{{y}}+\emph{{H}}^{*}\cdot\emph{{Y}}$; $\emph{{y}}=\Sigma(w_{j}\emph{{i}}_{j})$; $\emph{{Y}}=\Sigma(W_{j}\emph{{I}}_{j})$. When the coordinate system rotates, we have the octonion external power density $\mathbb{N}^{\prime}=\Sigma(n^{\prime}_{i}\emph{{i}}^{\prime}_{i}+N^{\prime}_{i}\emph{{I}}^{\prime}_{i})$. Under the octonion coordinate transformation, the scalar part of the external power density is the power density and remains unchanged by Eq.(46),
$$\displaystyle n_{0}=n^{\prime}_{0}$$ (81)
In the special case where the right-hand side is equal to zero, we obtain the energy continuity equation,
$$\displaystyle\partial_{0}w_{0}+\nabla^{*}\cdot\emph{{y}}+(\emph{{h}}^{*}\cdot\emph{{y}}+\emph{{H}}^{*}\cdot\emph{{Y}})/v_{0}=0$$ (82)
If the last term is neglected, this reduces to
$$\displaystyle\partial_{0}w_{0}+\nabla^{*}\cdot\emph{{y}}=0$$ (83)
Further, if the divergence term also vanishes, we have
$$\displaystyle\partial w_{0}/\partial t=0$$ (84)
where, when the time $t$ is the only independent variable, $\partial w_{0}/\partial t$ becomes $dw_{0}/dt$.
This means that the power density $n_{0}$ is invariant under the octonion transformation in the case of coexisting gravitational and electromagnetic fields. In other words, both the strength and the torque density have an influence on the energy continuity equation in the gravitational field and the electromagnetic field.

In the octonion space, the deduced results about the conservation laws and the scalar invariants depend on the chosen combinations of definitions in the case of coexisting gravitational and electromagnetic fields. By means of the combined definitions of the radius vector and the velocity, we obtain the results of the invariable speed of light, the Galilean transformation, and the Lorentz transformation. Choosing the combined definitions of the linear momentum, the operator $\lozenge$, and the velocity, we have the conclusions about the conservation of mass and the mass continuity equation in the gravitational and electromagnetic fields. With the combined definitions of the angular momentum, the velocity, and the operator $\lozenge$, we obtain the inferences of the conservation of spin, the spin continuity equation, the conservation of energy, and the energy continuity equation, etc.

IV Electric invariants in electromagnetic field

In the electromagnetic field there are some special invariants which differ from the mechanical ones. They are called the electric invariants, and include the conservation of charge, the charge continuity equation, and the spin magnetic moment, etc. The conservation of charge is similar to the conservation of mass, and has been investigated by many scientists. This conservation law is an important theorem in physics, with numerous applications in quantum mechanics, electromagnetic theory [heaviside], and optics, etc.
With the characteristics of the octonion, we obtain some scalar invariants, conservation laws, and theorems under the octonion coordinate transformation in the case of coexisting electromagnetic and gravitational fields.

IV.1 Electromagnetic and gravitational fields

The invariants of the physical quantities in the gravitational field and the electromagnetic field can be illustrated by quaternions, although their definitions will be extended in some respects. In the quaternion space for the gravitational field, the radius vector is $\mathbb{R}_{g}=\Sigma(r_{i}\emph{{i}}_{i})$, the velocity is $\mathbb{V}_{g}=\Sigma(v_{i}\emph{{i}}_{i})$, and the basis vector is $\mathbb{E}_{g}$ = ($\emph{{i}}_{0}$, $\emph{{i}}_{1}$, $\emph{{i}}_{2}$, $\emph{{i}}_{3}$), with $\emph{{i}}_{0}=1$. In the quaternion space for the electromagnetic field, the radius vector is $\mathbb{R}_{e}=\Sigma(R_{i}\emph{{I}}_{i})$, the velocity is $\mathbb{V}_{e}=\Sigma(V_{i}\emph{{I}}_{i})$, and the basis vector is $\mathbb{E}_{e}$ = ($\emph{{I}}_{0}$, $\emph{{I}}_{1}$, $\emph{{I}}_{2}$, $\emph{{I}}_{3}$), with $\mathbb{E}_{e}$ = $\mathbb{E}_{g}$ $\circ$ $\emph{{I}}_{0}$. The two quaternion spaces combine to form the octonion space. In the octonion space, the radius vector is $\mathbb{R}=\Sigma(r_{i}\emph{{i}}_{i})+\Sigma(R_{i}\emph{{I}}_{i})$, the velocity is $\mathbb{V}=\Sigma(v_{i}\emph{{i}}_{i})+\Sigma(V_{i}\emph{{I}}_{i})$, and the basis vector is $\mathbb{E}=(\emph{{i}}_{0},\emph{{i}}_{1},\emph{{i}}_{2},\emph{{i}}_{3},\emph{{I}}_{0},\emph{{I}}_{1},\emph{{I}}_{2},\emph{{I}}_{3})$. Here $r_{0}=v_{0}t$; $v_{0}$ is the speed of light; $t$ denotes the time. Also $R_{0}=V_{0}T$; $V_{0}$ is the speed of light-like; $T$ is a time-like quantity. The $\circ$ denotes the octonion multiplication. In some special cases, the electric charge is combined with the mass to become the electron or the proton, etc.
And then $R_{i}\emph{{I}}_{i}=r_{i}\emph{{i}}_{i}\circ\emph{{I}}_{0}$ and $V_{i}\emph{{I}}_{i}=v_{i}\emph{{i}}_{i}\circ\emph{{I}}_{0}$. In the octonion space, $\mathbb{A}_{g}=(a_{0},a_{1},a_{2},a_{3})$ is the gravitational potential, and $\mathbb{A}_{e}=(A_{0},A_{1},A_{2},A_{3})$ is the electromagnetic potential. The octonion potential is
$$\displaystyle\mathbb{A}=\mathbb{A}_{g}+k_{eg}\mathbb{A}_{e}$$ (85)
where $k_{eg}$ is a coefficient. The octonion strength $\mathbb{B}=\Sigma(b_{i}\emph{{i}}_{i})+\Sigma(B_{i}\emph{{I}}_{i})$ includes the gravitational strength $\mathbb{B}_{g}$ and the electromagnetic strength $\mathbb{B}_{e}$, with the gauge $b_{0}=0$ and $B_{0}=0$.
$$\displaystyle\mathbb{B}=\lozenge\circ\mathbb{A}=\mathbb{B}_{g}+k_{eg}\mathbb{B}_{e}$$ (86)
where the gravitational strength $\mathbb{B}_{g}$ includes two parts, $\textbf{g}=(g_{01},g_{02},g_{03})$ and $\textbf{b}=(g_{23},g_{31},g_{12})$, while the electromagnetic strength $\mathbb{B}_{e}$ involves two components, $\textbf{E}=(B_{01},B_{02},B_{03})$ and $\textbf{B}=(B_{23},B_{31},B_{12})$. The linear momentum density $\mathbb{S}_{g}=m\mathbb{V}_{g}$ is the source for the gravitational field, and the electric current density $\mathbb{S}_{e}=q\mathbb{V}_{g}\circ\emph{{I}}_{0}$ is that for the electromagnetic field. They can be combined to form the source $\mathbb{S}$,
$$\displaystyle\mu\mathbb{S}=-(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{B}=\mu_{g}^{g}\mathbb{S}_{g}+k_{eg}\mu_{e}^{g}\mathbb{S}_{e}-\mathbb{B}^{*}\circ\mathbb{B}/v_{0}$$ (87)
where $\mu_{e}^{g}$ is a constant and $*$ denotes the octonion conjugate.
And $$\displaystyle\mathbb{B}^{*}\circ\mathbb{B}/\mu_{g}^{g}=\mathbb{B}_{g}^{*}\circ\mathbb{B}_{g}/\mu_{g}^{g}+\mathbb{B}_{e}^{*}\circ\mathbb{B}_{e}/\mu_{e}^{g}$$ (88) In the octonion space, we have the applied force density, $$\displaystyle\mathbb{F}=v_{0}(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{P}$$ (89) and the angular momentum density $$\displaystyle\mathbb{L}=(\mathbb{R}+k_{rx}\mathbb{X})\circ\mathbb{P}$$ (90) where the linear momentum is $\mathbb{P}=\mu\mathbb{S}/\mu_{g}^{g}$, and $\mathbb{X}=\Sigma(x_{i}\emph{{i}}_{i})+\Sigma(X_{i}\emph{{I}}_{i})$. By means of the total energy density, $$\displaystyle\mathbb{W}=v_{0}(\mathbb{B}/v_{0}+\lozenge)\circ\mathbb{L}$$ (91) we obtain the external power density $$\displaystyle\mathbb{N}=v_{0}(\mathbb{B}/v_{0}+\lozenge)^{*}\circ\mathbb{W}$$ (92) which includes the power density in the gravitational and electromagnetic fields. IV.2 Speed of light-like The physical quantity $\mathbb{D}(d_{0},d_{1},d_{2},d_{3},D_{0},D_{1},D_{2},D_{3})$ in the octonion space is defined as $$\displaystyle\mathbb{D}=d_{0}+d_{1}\emph{{i}}_{1}+d_{2}\emph{{i}}_{2}+d_{3}\emph{{i}}_{3}+D_{0}\emph{{I}}_{0}+D_{1}\emph{{I}}_{1}+D_{2}\emph{{I}}_{2}+D_{3}\emph{{I}}_{3}$$ (93) When the coordinate system is transformed into another, the physical quantity $\mathbb{D}$ will be transformed into $\mathbb{D}^{\prime}(d^{\prime}_{0},d^{\prime}_{1},d^{\prime}_{2},d^{\prime}_{3},D^{\prime}_{0},D^{\prime}_{1},D^{\prime}_{2},D^{\prime}_{3})$. $$\mathbb{D}^{\prime}=\mathbb{K}^{*}\circ\mathbb{D}\circ\mathbb{K}$$ (94) where $\mathbb{K}$ is an octonion with $\mathbb{K}^{*}\circ\mathbb{K}=1$. In the above, the scalar part remains one and the same while the octonion coordinate system is being transformed.
Therefore $$\displaystyle d_{0}=d^{\prime}_{0}$$ (95) In the octonion space, the velocity is $$\displaystyle\mathbb{V}=\Sigma(v_{i}\emph{{i}}_{i})+\Sigma(V_{i}\emph{{I}}_{i})$$ (96) and a new physical quantity $\mathbb{V}_{q}$ can be defined from the above, $$\displaystyle\mathbb{V}_{q}=\mathbb{V}\circ\emph{{I}}_{0}^{*}=\Sigma(V_{i}\emph{{i}}_{i})-\Sigma(v_{i}\emph{{I}}_{i})$$ (97) By Eq.(94), we have the octonion physical quantity $\mathbb{V}^{\prime}_{q}(v^{\prime}_{0},v^{\prime}_{1},v^{\prime}_{2},v^{\prime}_{3},V^{\prime}_{0},V^{\prime}_{1},V^{\prime}_{2},V^{\prime}_{3})$ when the coordinate system is rotated. Under the coordinate transformation, the scalar part of $\mathbb{V}_{q}$ remains unchanged. $$\displaystyle V_{0}=V^{\prime}_{0}$$ (98) This means that the speed of light-like, $V_{0}$, remains the same in the electromagnetic and gravitational fields during the octonion coordinate transformation. IV.3 Conservation of charge In the octonion space, the linear momentum density $\mathbb{P}$ is $$\displaystyle\mathbb{P}=\mu\mathbb{S}/\mu_{g}^{g}=\widehat{m}v_{0}+\Sigma(mv_{j}\emph{{i}}_{j})+\Sigma(MV_{i}\emph{{I}}_{i})$$ (99) where the gravitational mass density is $\widehat{m}=m+\triangle m$, with $\triangle m=-(\mathbb{B}^{*}\circ\mathbb{B}/\mu_{g}^{g})/v_{0}^{2}$ and $M=k_{eg}\mu_{e}^{g}q/\mu_{g}^{g}$. Here $q$ is the electric charge density and $m$ is the inertial mass density.
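The scalar-part invariance asserted in Eqs. (94)-(95), which underlies Eqs. (98) and (101) below, can be checked numerically. The following Python sketch is our illustration (no code appears in the original); it builds octonion multiplication from quaternion pairs via the Cayley-Dickson construction, $(a,b)\circ(c,d)=(ac-d^{*}b,\,da+bc^{*})$ with $*$ the quaternion conjugate. Octonion multiplication is non-associative, so one bracketing of $\mathbb{K}^{*}\circ\mathbb{D}\circ\mathbb{K}$ is fixed in the code; the scalar part is insensitive to that choice.

```python
# A numerical sketch (ours, not from the paper) of octonion multiplication
# via the Cayley-Dickson construction, used to spot-check Eqs. (94)-(95):
# the scalar part of D' = K* o D o K is unchanged when K* o K = 1.
import random

def q_mul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def q_conj(a):
    return (a[0], -a[1], -a[2], -a[3])

def o_mul(p, q):
    # Cayley-Dickson doubling: (a, b) o (c, d) = (a c - d* b, d a + b c*)
    a, b = p[:4], p[4:]
    c, d = q[:4], q[4:]
    left = tuple(u - v for u, v in zip(q_mul(a, c), q_mul(q_conj(d), b)))
    right = tuple(u + v for u, v in zip(q_mul(d, a), q_mul(b, q_conj(c))))
    return left + right

def o_conj(p):
    return (p[0],) + tuple(-u for u in p[1:])

def o_norm(p):
    return sum(u*u for u in p) ** 0.5

random.seed(1)
K = tuple(random.uniform(-1.0, 1.0) for _ in range(8))
n0 = o_norm(K)
K = tuple(u / n0 for u in K)          # unit octonion: K* o K = 1
D = tuple(random.uniform(-1.0, 1.0) for _ in range(8))
Dp = o_mul(o_mul(o_conj(K), D), K)    # D' = K* o D o K, one bracketing
print(D[0], Dp[0])                    # the scalar parts d0 and d'0 agree
```

The same few lines also confirm that the norm is multiplicative, $|p\circ q|=|p|\,|q|$, while the product itself fails to associate.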
A new physical quantity $\mathbb{P}_{q}$ can be defined from the above, $$\displaystyle\mathbb{P}_{q}=\mathbb{P}\circ\emph{{I}}_{0}^{*}=MV_{0}+\Sigma(MV_{j}\emph{{i}}_{j})-\left\{\widehat{m}v_{0}\emph{{I}}_{0}+\Sigma(mv_{j}\emph{{I}}_{j})\right\}$$ (100) By Eq.(94), we have the linear momentum density $\mathbb{P}^{\prime}(\widehat{m}^{\prime}v^{\prime}_{0},m^{\prime}v^{\prime}_{1},m^{\prime}v^{\prime}_{2},m^{\prime}v^{\prime}_{3},M^{\prime}V^{\prime}_{0},M^{\prime}V^{\prime}_{1},M^{\prime}V^{\prime}_{2},M^{\prime}V^{\prime}_{3})$ when the octonion coordinate system is rotated. Under the coordinate transformation, the scalar part of $\mathbb{P}_{q}$ remains unchanged. $$\displaystyle MV_{0}=M^{\prime}V^{\prime}_{0}$$ With Eq.(98) and the above, we obtain the conservation of charge as follows, since $M$ is a scalar invariant that is in direct proportion to the electric charge density $q$. $$\displaystyle M=M^{\prime}$$ (101) This means that, with these definitions of the velocity and the linear momentum, the electric charge density remains the same under the octonion coordinate transformation in the electromagnetic and gravitational fields. IV.4 Charge continuity equation In the octonion space, the applied force density is $$\displaystyle\mathbb{F}=f_{0}+\Sigma(f_{j}\emph{{i}}_{j})+\Sigma(F_{i}\emph{{I}}_{i})$$ (102) where $F_{0}=v_{0}\partial P_{0}/\partial r_{0}+v_{0}\Sigma(\partial P_{j}/\partial r_{j})+\Sigma(b_{j}P_{j}-B_{j}p_{j})$, with $p_{0}=\widehat{m}v_{0}$, $p_{j}=mv_{j}$, and $P_{i}=MV_{i}$.
A new physical quantity $\mathbb{F}_{q}$ can be defined from the above, $$\displaystyle\mathbb{F}_{q}=\mathbb{F}\circ\emph{{I}}_{0}^{*}=F_{0}+\Sigma(F_{j}\emph{{i}}_{j})-\Sigma(f_{i}\emph{{I}}_{i})$$ (103) When the coordinate system rotates, we have the octonion applied force density $\mathbb{F}^{\prime}(f^{\prime}_{0},f^{\prime}_{1},f^{\prime}_{2},f^{\prime}_{3},F^{\prime}_{0},F^{\prime}_{1},F^{\prime}_{2},F^{\prime}_{3})$, the radius vector $\mathbb{R}^{\prime}(r^{\prime}_{0},r^{\prime}_{1},r^{\prime}_{2},r^{\prime}_{3},R^{\prime}_{0},R^{\prime}_{1},R^{\prime}_{2},R^{\prime}_{3})$, the momentum density $\mathbb{P}^{\prime}(p^{\prime}_{0},p^{\prime}_{1},p^{\prime}_{2},p^{\prime}_{3},P^{\prime}_{0},P^{\prime}_{1},P^{\prime}_{2},P^{\prime}_{3})$, and the strength $\mathbb{B}^{\prime}(b^{\prime}_{0},b^{\prime}_{1},b^{\prime}_{2},b^{\prime}_{3},B^{\prime}_{0},B^{\prime}_{1},B^{\prime}_{2},B^{\prime}_{3})$, with the gauge $b^{\prime}_{0}=B^{\prime}_{0}=0$. Under the coordinate transformation, the scalar part of $\mathbb{F}_{q}$ remains unchanged. $$\displaystyle F_{0}=F^{\prime}_{0}$$ (104) When the right side is equal to zero in the above, we have the charge continuity equation for the case of coexisting gravitational and electromagnetic fields. $$\displaystyle\partial P_{0}/\partial r_{0}+\Sigma(\partial P_{j}/\partial r_{j})+\Sigma(b_{j}P_{j}-B_{j}p_{j})/v_{0}=0$$ (105) Further, if the last term is neglected, the above equation reduces to $$\displaystyle\partial M/\partial t+\Sigma(\partial P_{j}/\partial r_{j})=0$$ (106) This states that the gravitational strength and electromagnetic strength have only a small influence on the charge continuity equation, since the terms $\Sigma(b_{j}P_{j}-B_{j}p_{j})/v_{0}$ and $\triangle m$ are both usually very tiny when the field is weak.
The charge continuity equation is invariant under the octonion coordinate transformation, etc., while the definition of the linear momentum density has to be extended in the gravitational and electromagnetic fields. Comparing Eq.(65) with Eq.(103), we find that the mass continuity equation Eq.(68) and the charge continuity equation Eq.(106) cannot be effective at the same time. This states that some mechanical invariants and electrical invariants will not be valid simultaneously. IV.5 Conservation of spin magnetic moment In the octonion space, the angular momentum density is $$\displaystyle\mathbb{L}=l_{0}+\Sigma(l_{j}\emph{{i}}_{j})+\Sigma(L_{i}\emph{{I}}_{i})$$ (107) where $L_{0}=(r_{0}+k_{rx}x_{0})P_{0}-\Sigma\left\{(r_{j}+k_{rx}x_{j})P_{j}\right\}+\Sigma\left\{(R_{i}+k_{rx}X_{i})p_{i}\right\}$. A new physical quantity $\mathbb{L}_{q}$ can be defined from the above, $$\displaystyle\mathbb{L}_{q}=\mathbb{L}\circ\emph{{I}}_{0}^{*}=L_{0}+\Sigma(L_{j}\emph{{i}}_{j})-\Sigma(l_{i}\emph{{I}}_{i})$$ (108) When the octonion coordinate system rotates, we have the octonion linear momentum density $\mathbb{P}^{\prime}=\Sigma(p^{\prime}_{i}\emph{{i}}^{\prime}_{i}+P^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, the physical quantity $\mathbb{X}^{\prime}=\Sigma(x^{\prime}_{i}\emph{{i}}^{\prime}_{i}+X^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, the radius vector $\mathbb{R}^{\prime}=\Sigma(r^{\prime}_{i}\emph{{i}}^{\prime}_{i}+R^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, and the angular momentum density $\mathbb{L}^{\prime}=\Sigma(l^{\prime}_{i}\emph{{i}}^{\prime}_{i}+L^{\prime}_{i}\emph{{I}}^{\prime}_{i})$ respectively from Eq.(94). Under the coordinate transformation, the scalar part of $\mathbb{L}_{q}$ yields the conservation of spin magnetic moment.
$$\displaystyle L_{0}=L^{\prime}_{0}$$ (109) This means that the spin magnetic moment density $L_{0}$ is invariable in the case of coexisting gravitational and electromagnetic fields, under the octonion coordinate transformation, etc. IV.6 Continuity equation of spin magnetic moment In the octonion space, the total energy density is $$\displaystyle\mathbb{W}=w_{0}+\Sigma(w_{j}\emph{{i}}_{j})+\Sigma(W_{i}\emph{{I}}_{i})$$ (110) where $W_{0}=v_{0}\partial L_{0}/\partial r_{0}+v_{0}\Sigma(\partial L_{j}/\partial r_{j})+\Sigma(b_{j}L_{j}-B_{j}l_{j})$. A new physical quantity $\mathbb{W}_{q}$ can be defined from the above, $$\displaystyle\mathbb{W}_{q}=\mathbb{W}\circ\emph{{I}}_{0}^{*}=W_{0}+\Sigma(W_{j}\emph{{i}}_{j})-\Sigma(w_{i}\emph{{I}}_{i})$$ (111) When the coordinate system rotates, we have the octonion strength $\mathbb{B}^{\prime}=\Sigma(b^{\prime}_{j}\emph{{i}}^{\prime}_{j}+B^{\prime}_{j}\emph{{I}}^{\prime}_{j})$, the angular momentum density $\mathbb{L}^{\prime}=\Sigma(l^{\prime}_{i}\emph{{i}}^{\prime}_{i}+L^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, the radius vector $\mathbb{R}^{\prime}=\Sigma(r^{\prime}_{i}\emph{{i}}^{\prime}_{i}+R^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, and the energy density $\mathbb{W}^{\prime}=\Sigma(w^{\prime}_{i}\emph{{i}}^{\prime}_{i}+W^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, from Eq.(94) respectively, with the gauge equations $b^{\prime}_{0}=0$ and $B^{\prime}_{0}=0$. Under the coordinate transformation, the scalar part of $\mathbb{W}_{q}$ remains unchanged, and we have the conservation of energy-like. $$\displaystyle W_{0}=W^{\prime}_{0}$$ (112) In some special cases, the right side is equal to zero, and we obtain the continuity equation of spin magnetic moment.
$$\displaystyle\partial L_{0}/\partial r_{0}+\Sigma(\partial L_{j}/\partial r_{j})+\Sigma(b_{j}L_{j}-B_{j}l_{j})/v_{0}=0$$ (113) If the last term is neglected, the above reduces to $$\displaystyle\partial L_{0}/\partial r_{0}+\Sigma(\partial L_{j}/\partial r_{j})=0$$ Further, if the last term is equal to zero, we have $$\displaystyle\partial L_{0}/\partial t=0$$ (114) where, when the time $t$ is the only independent variable, $\partial L_{0}/\partial t$ becomes $dL_{0}/dt$. The above means that the energy-like density $W_{0}$ is invariable in the case of coexisting gravitational and electromagnetic fields, under the octonion coordinate transformation, etc. IV.7 Energy-like continuity equation In the octonion space, the external power density is $$\displaystyle\mathbb{N}=n_{0}+\Sigma(n_{j}\emph{{i}}_{j})+\Sigma(N_{i}\emph{{I}}_{i})$$ (115) where $N_{0}=v_{0}\partial W_{0}/\partial r_{0}+v_{0}\Sigma(\partial W_{j}/\partial r_{j})+\Sigma(b_{j}W_{j}-B_{j}w_{j})$. A new physical quantity $\mathbb{N}_{q}$ can be defined from the above, $$\displaystyle\mathbb{N}_{q}=\mathbb{N}\circ\emph{{I}}_{0}^{*}=N_{0}+\Sigma(N_{j}\emph{{i}}_{j})-\Sigma(n_{i}\emph{{I}}_{i})$$ (116) When the coordinate system rotates, we have the octonion angular momentum density $\mathbb{L}^{\prime}=\Sigma(l^{\prime}_{i}\emph{{i}}^{\prime}_{i}+L^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, the radius vector $\mathbb{R}^{\prime}=\Sigma(r^{\prime}_{i}\emph{{i}}^{\prime}_{i}+R^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, the energy density $\mathbb{W}^{\prime}=\Sigma(w^{\prime}_{i}\emph{{i}}^{\prime}_{i}+W^{\prime}_{i}\emph{{I}}^{\prime}_{i})$, and the strength $\mathbb{B}^{\prime}=\Sigma(b^{\prime}_{j}\emph{{i}}^{\prime}_{j}+B^{\prime}_{j}\emph{{I}}^{\prime}_{j})$ from Eq.(94) respectively, with the gauge $b^{\prime}_{0}=B^{\prime}_{0}=0$. Under the coordinate transformation, the scalar part of $\mathbb{N}_{q}$ remains unchanged by the above.
We then obtain the conservation of power-like as follows. $$\displaystyle N_{0}=N^{\prime}_{0}$$ (117) In a special case, the right side is equal to zero, and we obtain the continuity equation of energy-like. $$\displaystyle\partial W_{0}/\partial r_{0}+\Sigma(\partial W_{j}/\partial r_{j})+\Sigma(b_{j}W_{j}-B_{j}w_{j})/v_{0}=0$$ (118) If the last term is neglected, the above reduces to $$\displaystyle\partial W_{0}/\partial r_{0}+\Sigma(\partial W_{j}/\partial r_{j})=0$$ Further, if the last term is equal to zero, we have $$\displaystyle\partial W_{0}/\partial t=0$$ (119) where, when the time $t$ is the only independent variable, $\partial W_{0}/\partial t$ becomes $dW_{0}/dt$. The above means that the power-like density $N_{0}$ is an invariant in the case of coexisting gravitational and electromagnetic fields, under the Galilean transformation or the Lorentz transformation, etc. V CONCLUSIONS In the quaternion spaces, choosing the definitions of the $\mathbb{R}$, $\mathbb{V}$, $\mathbb{A}$, $\mathbb{B}$, $\mathbb{P}$, $\mathbb{X}$, $\mathbb{L}$, $\mathbb{W}$, and $\mathbb{N}$ in the electromagnetic field, we obtain the characteristics of some invariants, including the charge continuity equation and the conservation of charge. The charge density and the spin magnetic moment density, etc., are invariants under the octonion coordinate transformation. The conclusions in the electromagnetic field can be extended from the quaternion space to the octonion space, and the definitions of the mass continuity equation, etc., will be extended in the octonion space. It should be noted that the study of the conservation of mass and the conservation of charge, etc., examined only some simple cases under the octonion coordinate transformation. Despite its preliminary character, this study can clearly indicate that the mass density, charge density, and spin density, etc.
are scalar invariants, respectively; they are only some simple inferences due to the weak strength of the electromagnetic field. Future studies will concentrate on some predictions about the mass continuity equation, charge continuity equation, and energy continuity equation, etc., under strong potentials and strengths of the electromagnetic field. Acknowledgements. This project was supported partially by the National Natural Science Foundation of China under grant No.60677039.
Neutron Stars for Undergraduates Richard R. Silbar and Sanjay Reddy Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545 Abstract Calculating the structure of white dwarf and neutron stars would be a suitable topic for an undergraduate thesis or an advanced special topics or independent study course. The subject is rich in many different areas of physics accessible to a junior or senior physics major, ranging from thermodynamics to quantum statistics to nuclear physics to special and general relativity. The computations for solving the coupled structure differential equations (both Newtonian and general relativistic) can be done using a symbolic computational package, such as Mathematica. In doing so, the student will develop computational skills and learn how to deal with dimensions. Along the way he or she will also have learned some of the physics of equations of state and of degenerate stars. pacs: 01.40.-d, 26.60.+c, 97.10.Cv, 97.20.Rp, 97.60.Jd I Introduction In 1967 Jocelyn Bell, a graduate student, along with her thesis advisor, Anthony Hewish, discovered the first pulsar, something from outer space that emits very regular pulses of radio energy. After recognizing that these pulse trains were so unvarying that they would not support an origin from LGM’s (Little Green Men), it soon became generally accepted that the pulsar was due to radio emission from a rapidly rotating neutron star Landau endowed with a very strong magnetic field. By now more than a thousand pulsars have been catalogued Princeton . Pulsars are by themselves quite interesting Jodrell , but perhaps more so is the structure of the underlying neutron star. This paper discusses a student project dealing with that structure. While still at MIT before coming to Los Alamos, one of us (Reddy) had the pleasure of acting as mentor for a bright British high school student, Aiden J. Parker. Ms. 
Parker was spending the summer of 2002 at MIT as a participant in a special research program (RSI). With minimal guidance she was able to write a Fortran program for solving the Tolman-Oppenheimer-Volkov (TOV) equations TOV to calculate masses and radii of neutron stars (!). In discussing this impressive performance after Reddy’s arrival at LANL, the question came up of whether it would have been possible (and easier) for her to have done the computation using Mathematica (or some other symbolic and numerical manipulation package). This was taken up as a challenge by the other of us, who also figured it would be a good opportunity to learn how these kinds of stellar structure calculations are actually done. (Silbar’s only previous experience in this field of physics consisted of having read, with some care, the chapter on stellar equilibrium and collapse in Weinberg’s treatise on gravitation and cosmology Wberg .) In the process of meeting the challenge, it became clear to us that this subject would be an excellent topic for a junior or senior physics major’s project or thesis. After all, if a British high school student could do it… There is much more physics in the problem than simply integrating a pair of coupled non-linear differential equations. In addition to the physics (and even some astronomy), the student must think about the sizes of things he or she is calculating, that is, believing and understanding the answers one gets. Another side benefit is that the student learns about the stability of numerical solutions and how to deal with singularities. In the process he or she also learns the inner mechanics of the calculational package (e.g., Mathematica) being used. The student should begin with a derivation of the (Newtonian) coupled equations, and, presumably, be spoon-fed the general relativistic (GR) corrections.
Before trying to solve these equations, one needs to work out the relation between the energy density and pressure of the matter that constitutes the stellar interior, i.e., an equation of state (EoS). The first EoS’s to try can be derived from the non-interacting Fermi gas model, which brings in quantum statistics (the Pauli exclusion principle) and special relativity. It is necessary to keep careful track of dimensions, and converting to dimensionless quantities is helpful in working these EoS’s out. As a warm-up problem the student can, at this point, integrate the Newtonian equations and learn about white dwarf stars. Putting in the GR corrections, one can then proceed in the same way to work out the structure of pure neutron stars (i.e., reproducing the results of Oppenheimer and Volkov TOV ). It is interesting at this point to compare and see how important the GR corrections are, i.e., how different a neutron star is from that which would be given by classical Newtonian mechanics. Realistic neutron stars, of course, also contain some protons and electrons. As a first approximation one can treat this multi-component system within the non-interacting Fermi gas model. In the process one learns about chemical potentials. To improve upon this treatment we must include nuclear interactions in addition to the degeneracy pressure from the Pauli exclusion principle that is used in the Fermi gas model. The nucleon-nucleon interaction is not something we would expect an undergraduate to tackle, but there is a simple model (which we learned about from Prakash Prakash ) for the nuclear matter EoS. It has parameters which are fit to quantities such as the binding energy per nucleon in symmetric nuclear matter, the so-called nuclear symmetry energy (it is really an asymmetry), and the (not so well known) nuclear compressibility. Working this out is also an excellent exercise, which even touches on the speed of sound (in nuclear matter).
With these nuclear interactions in addition to the Fermi gas energy in the EoS, one finds (pure) neutron star masses and radii which are quite different from those using the Fermi gas EoS. The above three paragraphs provide the outline of what follows in this paper. In the presentation we will also indicate possible “gotcha’s” that the student might encounter and possible side-trips that might be taken. Of course, the project we outline here can (and probably should) be augmented by the faculty mentor webpage with suggestions for byways that might lead to publishable results, if that is desired. We should point out that there is a similar discussion of this subject matter in this journal by Balian and Blaizot BalianBlaizot . They, however, used this material (and other, related materials) to form the basis for a full-year course they taught at the Ecole Polytechnique in France. Our emphasis is, in contrast, more toward nudging the student into a research frame of mind involving numerical calculation. We also note that much of the material we discuss here is covered in the textbook by Shapiro and Teukolsky Shapiro . However, as the reader will notice, the emphasis here is on students learning through computation. One of our intentions is to establish here a framework for the student to interact with his or her own computer program, and in the process learn about the physical scales involved in the structure of compact degenerate stars. 
II The Tolman-Oppenheimer-Volkov Equation II.1 Newtonian Formulation A nice first exercise for the student is to derive the following structure equations from classical Newtonian mechanics, $$\frac{dp}{dr}=-\frac{G\rho(r){\cal M}(r)}{r^{2}}=-\frac{G\epsilon(r){\cal M}(r)}{c^{2}r^{2}}$$ (1) $$\frac{d{\cal M}}{dr}=4\pi r^{2}\rho(r)=\frac{4\pi r^{2}\epsilon(r)}{c^{2}}$$ (2) $${\cal M}(r)=4\pi\int_{0}^{r}r^{\prime\,2}dr^{\prime}\rho(r^{\prime})=4\pi\int_{0}^{r}r^{\prime\,2}dr^{\prime}\epsilon(r^{\prime})/c^{2}\,.$$ (3) Here $G=6.673\times 10^{-8}$ dyne-cm${}^{2}$/g${}^{2}$ is Newton’s gravitational constant, $\rho(r)$ is the mass density at the distance $r$ (in g/cm${}^{3}$), and $\epsilon$ is the corresponding energy density (in erg/cm${}^{3}$) CGS . The quantity ${\cal M}(r)$ is the total mass inside the sphere of radius $r$. A sufficient hint for the derivation is shown in Fig. 1. (Challenge question: the above equations actually hold for any value of $r$, not just the large-$r$ situation depicted in the figure. Can the student also do the derivation in spherical coordinates where the box becomes a cut-off wedge?) Note that, in the second halves of these equations, we have departed slightly from Newtonian physics, defining the energy density $\epsilon(r)$ in terms of the mass density $\rho(r)$ according to the (almost) famous Einstein equation from special relativity, $$\epsilon(r)=\rho(r)c^{2}\,.$$ (4) This allows Eq. (1) to be used when one takes into account contributions of the interaction energy between the particles making up the star. In what follows, we may inadvertently set $c=1$ so that $\rho$ and $\epsilon$ become indistinguishable. We’ll try not to do that here so students following the equations in this presentation can keep checking dimensions as they proceed.
However, they might as well get used to this often-used physicists’ trick of setting $c=1$ (as well as $\hbar=1$). To solve this set of equations for $p(r)$ and ${\cal M}(r)$ one can integrate outwards from the origin ($r=0$) to the point $r=R$ where the pressure goes to zero. This point defines $R$ as the radius of the star. One needs an initial value of the pressure at $r=0$, call it $p_{0}$, to do this, and $R$ and the total mass of the star, ${\cal M}(R)\equiv M$, will depend on the value of $p_{0}$. Of course, to be able to perform the integration, one also needs to know the energy density $\epsilon(r)$ in terms of the pressure $p(r)$. This relationship is the equation of state (EoS) for the matter making up the star. Thus, a lot of the student’s effort in this project will necessarily be directed to developing an appropriate EoS for the problem at hand. II.2 General Relativistic Corrections The Newtonian formulation presented above works well in regimes where the mass of the star is not so large that it significantly “warps” space-time. That is, integrating Eqs. (1) and (2) will work well in cases when general relativistic (GR) effects are not important, such as for the compact stars known as white dwarfs. By creating a quantity using $G$ that has dimensions of length, the student can see when it becomes important to include GR effects. (This happens when $GM/c^{2}R$ becomes non-negligible.) As the student will see, this is the case for typical neutron stars. It is probably not to be expected that an undergraduate physics major derive the GR corrections to the above equations. For that, one can look at various textbook derivations of the Tolman-Oppenheimer-Volkov (TOV) equation Wberg Shapiro . It should suffice to simply state the corrections to Eq. 
(1) in terms of three additional (dimensionless) factors, $$\frac{dp}{dr}=-\frac{G\epsilon(r){\cal M}(r)}{c^{2}r^{2}}\left[1+\frac{p(r)}{\epsilon(r)}\right]\left[1+\frac{4\pi r^{3}p(r)}{{\cal M}(r)c^{2}}\right]\left[1-\frac{2G{\cal M}(r)}{c^{2}r}\right]^{-1}\ .$$ (5) The differential equation for ${\cal M}(r)$ remains unchanged. The first two factors in square brackets represent special relativity corrections of order $v^{2}/c^{2}$. This can be seen in that the pressure $p$ goes, in the non-relativistic limit, like $k_{F}^{2}/2m=mv^{2}/2$ (see Eq. (14) below) while $\epsilon$ and ${\cal M}c^{2}$ go like $mc^{2}$. That is, these factors reduce to 1 in the non-relativistic limit. (The student should have, by now, realized that $p$ and $\epsilon$ have the same dimensions.) The last factor is a GR correction, and the size of $G{\cal M}/c^{2}r$, as we emphasized above, determines whether it is important (or not). Note that the correction factors are all positive definite. It is as if Newtonian gravity becomes stronger at every $r$. That is, special and general relativity strengthen the relentless pull of gravity! The coupled non-linear equations for $p(r)$ and ${\cal M}(r)$ can also in this case be integrated from $r=0$ for a starting value of $p_{0}$ to the point where $p(R)=0$, to determine the star mass $M={\cal M}(R)$ and radius $R$ for this value of $p_{0}$. These equations invoke a balance between gravitational forces and the internal pressure. The pressure is a function of the EoS, and for certain conditions it may not be sufficient to withstand the gravitational attraction. Thus the structure equations imply there is a maximum mass that a star can have. III White Dwarf Stars III.1 A Few Facts Let us violate (in words only) the second law of thermodynamics by warming up on cold compact stars called white dwarfs. For these stellar objects, it suffices to solve the Newtonian structure equations, Eqs. (1)-(3) Koonin .
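As a concrete preview of that integration (our sketch in Python, rather than the Mathematica the paper has in mind), the snippet below integrates Eqs. (1)-(2) outward from the center with a fourth-order Runge-Kutta step for a toy polytrope $p=K\rho^{2}$, in units where $G=c=K=1$. This EoS is an assumption chosen purely for checkability: it is the $n=1$ Lane-Emden polytrope, whose exact radius $R=\pi\sqrt{K/(2\pi G)}$ is independent of the central pressure, so the numerics can be tested against a known answer.

```python
# A sketch (not the paper's code) of integrating the Newtonian structure
# equations (1)-(2) for an assumed toy polytrope p = K*rho^2 (the n = 1
# Lane-Emden case), in units G = c = K = 1.  The exact surface radius for
# this EoS is R = pi*sqrt(K/(2*pi*G)) = sqrt(pi/2) here, for any p0.
import math

G, K = 1.0, 1.0

def rho_of_p(p):
    # invert the toy EoS p = K*rho^2 (guard against tiny negative overshoot)
    return math.sqrt(max(p, 0.0) / K)

def rhs(r, p, M):
    rho = rho_of_p(p)
    return (-G * rho * M / r**2, 4.0 * math.pi * r**2 * rho)

def integrate(p0, h=1e-3):
    r = h                                 # start just off the r = 0 singularity
    p, M = p0, 4.0 / 3.0 * math.pi * r**3 * rho_of_p(p0)
    while p > 1e-10 * p0 and r < 5.0:
        # one classical fourth-order Runge-Kutta step
        k1 = rhs(r, p, M)
        k2 = rhs(r + h/2, p + h/2 * k1[0], M + h/2 * k1[1])
        k3 = rhs(r + h/2, p + h/2 * k2[0], M + h/2 * k2[1])
        k4 = rhs(r + h, p + h * k3[0], M + h * k3[1])
        p += h / 6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        M += h / 6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        r += h
    return r, M                           # stellar radius R and mass M(R)

R, M = integrate(p0=1.0)
print(R, math.sqrt(math.pi / 2.0))        # numerical R vs the Lane-Emden value
```

Changing $p_{0}$ changes the total mass $M$ but, for this particular EoS, not the radius, which is itself a nice check that the integrator is working.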
White dwarf stars NASA were first observed in 1844 by Friedrich Bessel (the same person who invented the special functions bearing that name). He noticed that the bright star Sirius wobbled back and forth and then deduced that the visible star was being orbited by some unseen object, i.e., it is a binary system. The object itself was resolved optically some 20 years later and thus earned the name of “white dwarf.” Since then numerous other white (and the smaller brown) dwarf stars have been observed (or detected). A white dwarf star is a low- or medium-mass star near the end of its lifetime, having burned up, through nuclear processes, most of its hydrogen and helium, forming carbon, silicon or (perhaps) iron. They typically have a mass less than 1.4 times that of our Sun, ${\rm M}_{\odot}=1.989\times 10^{33}$ g Chandra . They are also much smaller than our Sun, with radii of the order of $10^{4}$ km (to be compared with $R_{\odot}=6.96\times 10^{5}$ km). These values can be worked out from the period of the wobble for the dwarf-normal star binary in the usual Keplerian way. As a result (and as is also the case for neutron stars), the natural dimensions for discussing white dwarfs are for masses to be in units of solar mass, ${\rm M}_{\odot}$, and distances to be in km. Using these numbers the student should be able to make a quick estimate of the (average) densities of our Sun and of a white dwarf, to get a feel for the numbers that he or she will be encountering. Since $GM/c^{2}R\approx 10^{-4}$ for such a typical white dwarf, we can concentrate here on solving the non-relativistic structure equations of Sec. 2.1. (Question: why is it a good approximation to drop the special relativistic corrections for these dwarfs?) The reason a dwarf star is small is because, having burned up all the nuclear fuel it can, there is no longer enough thermal pressure to prevent its gravity from crushing it down.
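The quick estimates invited above take only a few lines. The numbers below are the ones quoted in the text, with $10^{4}$ km taken as the representative dwarf radius and one solar mass as the dwarf mass (an assumption for illustration):

```python
# Back-of-the-envelope numbers: mean densities of the Sun and of a
# representative white dwarf, plus the compactness GM/(c^2 R) that decides
# whether GR corrections matter.  CGS throughout; the one-solar-mass,
# 10^4 km dwarf is our illustrative assumption.
import math

G     = 6.673e-8      # dyne cm^2 / g^2
c     = 2.998e10      # cm / s
M_sun = 1.989e33      # g
R_sun = 6.96e10       # cm
R_wd  = 1.0e9         # cm (10^4 km)

def mean_density(M, R):
    return M / (4.0 / 3.0 * math.pi * R**3)

def compactness(M, R):
    return G * M / (c**2 * R)

rho_sun = mean_density(M_sun, R_sun)
rho_wd = mean_density(M_sun, R_wd)
print(rho_sun, rho_wd, compactness(M_sun, R_wd))
```

The mean solar density comes out near 1.4 g/cm³, the dwarf density near $5\times 10^{5}$ g/cm³, and the dwarf compactness near $1.5\times 10^{-4}$, consistent with the $10^{-4}$ figure used in the text.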
As the density increases, the electrons in the atoms are pushed closer together, and they fall into the lowest energy levels available to them. (The star begins to get colder.) Eventually the Pauli principle takes over, and the electron degeneracy pressure (to be discussed next) provides the means for stabilizing the star against its gravitational attraction Chandra ; Shapiro . This is the physics behind the EoS which one needs to integrate the Newtonian structure equations above, Eqs. (1) and (2). III.2 Fermi Gas Model for Electrons For free electrons the number of states $dn$ available at momentum $k$ per unit volume is $$dn=\frac{d^{3}k}{(2\pi\hbar)^{3}}=\frac{4\pi k^{2}dk}{(2\pi\hbar)^{3}}\,.$$ (6) (This is a result from their modern physics course that students should review if they don’t remember it.) Integrating, one gets the electron number density $$n=\frac{8\pi}{(2\pi\hbar)^{3}}\int_{0}^{k_{F}}k^{2}dk=\frac{k_{F}^{3}}{3\pi^{2}\hbar^{3}}\,.$$ (7) The additional factor of two comes in because there are two spin states for each electron energy level. Here $k_{F}c$, the Fermi energy, is the maximum energy electrons can have in the star under consideration. It is a parameter which varies according to the star’s total mass and its history, but which the student is free to set in the calculations he or she is about to make. Each electron is neutralized by a proton, which in turn is accompanied in its atomic nucleus by a neutron (or perhaps a few more, as in the case of a nucleus like ${}^{56}{\rm Fe}_{26}$). Thus, neglecting the electron mass $m_{e}$ with respect to the nucleon mass $m_{N}$, the mass density of the star is essentially given by $$\rho=nm_{N}A/Z\,,$$ (8) where $A/Z$ is the number of nucleons per electron. For ${}^{12}{\rm C}$, $A/Z=2.00$, while for ${}^{56}{\rm Fe}$, $A/Z=2.15$. Note that, since $n$ is a function of $k_{F}$, so is $\rho$.
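To get a feel for where on the $x=k_{F}/m_{e}c$ axis a white dwarf interior sits, one can invert Eq. (7) for an assumed interior density (the $10^{6}$ g/cm³ below is our illustrative choice, not a value from the text):

```python
# Where on the x = k_F/(m_e c) axis does a white dwarf interior sit?
# The density 10^6 g/cm^3 is our illustrative assumption; Eqs. (7)-(8)
# then fix the electron Fermi momentum.
import math

hbar = 1.054572e-27     # erg s
c    = 2.99792458e10    # cm/s
m_e  = 9.10938e-28      # g
m_N  = 1.674927e-24     # g, nucleon mass

rho = 1.0e6             # g/cm^3, assumed interior density
A_over_Z = 2.0          # e.g. carbon

n = rho / (A_over_Z * m_N)                           # electron density, Eq. (8)
k_F = hbar * (3.0 * math.pi**2 * n) ** (1.0 / 3.0)   # inverting Eq. (7)
x = k_F / (m_e * c)
print(n, x)
```

With these inputs $x$ comes out close to 0.8, so the electrons are already mildly relativistic, which is why the full expression for the energy density is needed in what follows.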
Conversely, given a value of $\rho$, $$k_{F}=\hbar\left(\frac{3\pi^{2}\rho}{m_{N}}\frac{Z}{A}\right)^{1/3}\,.$$ (9) The energy density of this star is also dominated by the nucleon masses, i.e., $\epsilon\approx\rho c^{2}$. The contribution to the energy density from the electrons (including their rest masses) is $$\displaystyle\epsilon_{\rm elec}(k_{F})$$ $$\displaystyle=$$ $$\displaystyle\frac{8\pi}{(2\pi\hbar)^{3}}\int_{0}^{k_{F}}(k^{2}c^{2}+m_{e}^{2}c^{4})^{1/2}k^{2}dk$$ (10) $$\displaystyle=$$ $$\displaystyle\epsilon_{0}\int_{0}^{k_{F}/m_{e}c}(u^{2}+1)^{1/2}u^{2}du$$ $$\displaystyle=$$ $$\displaystyle\frac{\epsilon_{0}}{8}\left[(2x^{3}+x)(1+x^{2})^{1/2}-\sinh^{-1}(x)\right]\,,$$ where $$\epsilon_{0}=\frac{m_{e}^{4}c^{5}}{\pi^{2}\hbar^{3}}$$ (11) carries the desired dimensions of energy per volume and $x=k_{F}/m_{e}c$. The total energy density is then $$\epsilon=nm_{N}c^{2}A/Z+\epsilon_{\rm elec}(k_{F})\,.$$ (12) One should check that the first term here is much larger than the second. To get our desired EoS, we need an expression for the pressure. The following presents a problem (!) that the student should work through. From the first law of thermodynamics, $dU=dQ-pdV$, at fixed temperature $T=0$ (where $dQ=T\,dS=0$) $$p=-\left.\frac{\partial U}{\partial V}\right]_{T=0}=n^{2}\frac{d(\epsilon/n)}{dn}=n\frac{d\epsilon}{dn}-\epsilon=n\mu-\epsilon\,,$$ (13) where the energy density here is the total one given by Eq. (12). The quantity $\mu_{i}=d\epsilon/dn_{i}$ defined in the last equality is known as the chemical potential of the electrons. This is a concept which will be especially useful in Section 5 where we consider an equilibrium mix of neutrons, protons and electrons. Utilizing Eq. (10), Eq. (13) yields the pressure (another problem!)
$$\displaystyle p(k_{F})$$ $$\displaystyle=$$ $$\displaystyle\frac{8\pi}{3(2\pi\hbar)^{3}}\int_{0}^{k_{F}}(k^{2}c^{2}+m_{e}^{2}c^{4})^{-1/2}k^{4}dk$$ (14) $$\displaystyle=$$ $$\displaystyle\frac{\epsilon_{0}}{3}\int_{0}^{k_{F}/m_{e}c}(u^{2}+1)^{-1/2}u^{4}du$$ $$\displaystyle=$$ $$\displaystyle\frac{\epsilon_{0}}{24}\left[(2x^{3}-3x)(1+x^{2})^{1/2}+3\sinh^{-1}(x)\right]\,.$$ (Hint: use the $n^{2}d(\epsilon/n)/dn$ form and remember to integrate by parts.) Using Mathematica Mma the student can show that the constant $\epsilon_{0}=1.42\times 10^{24}$ in units that, at this point, are erg/cm${}^{3}$. (Yet another problem: verify that the units of $\epsilon_{0}$ are as claimed workouteqns .) One also finds that Mathematica can perform the integrals analytically. (We quoted the results already in the equations above.) They are a bit messy, however, as they both involve an inverse hyperbolic sine function, and thus are not terribly enlightening. It is useful, however, for the student to make a plot of $\epsilon$ versus $p$ (such as shown in Fig. 2) for values of the parameter $0\leq k_{F}\leq 2m_{e}$. This curve has a shape much like $\epsilon^{4/3}$ (the student should compare with this), and there is a good reason for that. Consider the (relativistic) case when $k_{F}\gg m_{e}$. Then Eq. (14) simplifies to $$\displaystyle p(k_{F})$$ $$\displaystyle=$$ $$\displaystyle\frac{\epsilon_{0}}{3}\int_{0}^{k_{F}/m_{e}c}u^{3}du=\frac{\epsilon_{0}}{12}(k_{F}/m_{e}c)^{4}=\frac{\hbar c}{12\pi^{2}}\left(\frac{3\pi^{2}Z\rho}{m_{N}A}\right)^{4/3}$$ (15) $$\displaystyle\approx$$ $$\displaystyle K_{\rm rel}\,\epsilon^{4/3}\,,$$ where $$K_{\rm rel}=\frac{\hbar c}{12\pi^{2}}\left(\frac{3\pi^{2}Z}{Am_{N}c^{2}}\right)^{4/3}\,.$$ (16) A star having a simple EoS like $p=K\epsilon^{\gamma}$ is called a “polytrope”, and we therefore see that the relativistic electron Fermi gas gives a polytropic EoS with $\gamma=4/3$.
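The closed forms of Eqs. (10) and (14) are easy to code up, and coding them makes the thermodynamic identity $p=n\mu-\epsilon$ of Eq. (13) checkable term by term; a sketch in Python (function names ours, everything in units of $\epsilon_{0}$):

```python
import math

def eps_bar(x):
    """Dimensionless electron energy density of Eq. (10): eps_elec/eps0,
    with x = kF/(m_e c)."""
    s = math.sqrt(1.0 + x * x)
    return ((2.0 * x**3 + x) * s - math.asinh(x)) / 8.0

def p_bar(x):
    """Dimensionless electron pressure of Eq. (14): p/eps0."""
    s = math.sqrt(1.0 + x * x)
    return ((2.0 * x**3 - 3.0 * x) * s + 3.0 * math.asinh(x)) / 24.0

def n_mu_bar(x):
    """n*mu/eps0 = (x^3/3) sqrt(1+x^2): number density times the electron
    chemical potential, in the same units."""
    return x**3 * math.sqrt(1.0 + x * x) / 3.0
```

One can then verify numerically that $p=n\mu-\epsilon_{\rm elec}$ holds exactly, and that $\bar{p}$ approaches the relativistic limit $x^{4}/12$ of Eq. (15) for large $x$.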
As will be seen in the next subsection, a polytropic EoS allows one to solve the structure equations (numerically) in a relatively straightforward way LaneEmden . There is another polytropic EoS for the non-interacting electron Fermi gas model corresponding to the non-relativistic limit, where $k_{F}\ll m_{e}$. In a way similar to the derivation of Eq. (15), one finds $$p=K_{\rm non-rel}\epsilon^{5/3}\,,\quad{\rm where}\ \ K_{\rm non-rel}=\frac{\hbar^{2}}{15\pi^{2}m_{e}}\left(\frac{3\pi^{2}Z}{Am_{N}c^{2}}\right)^{5/3}\,.$$ (17) [Question: what are the units of $K_{\rm rel}$ and $K_{\rm non-rel}$? Task: confirm that in the appropriate limits, Eqs. (10) and (14) reduce to those in Eqs. (15) and (17).] III.3 The Structure Equations for a Polytrope As mentioned earlier, we want to express our results in units of km and ${\rm M}_{\odot}$. Thus it is useful to define $\bar{\cal M}(r)={\cal M}(r)/{\rm M}_{\odot}$. The first Newtonian structure equation, Eq. (1), then becomes $$\frac{dp(r)}{dr}=-R_{0}\frac{\epsilon(r)\bar{\cal M}(r)}{r^{2}}\,,$$ (18) where the constant $R_{0}=G{\rm M}_{\odot}/c^{2}=1.47$ km. (That is, for those who already know, $R_{0}$ is one half the Schwarzschild radius of our sun.) In this equation $p$ and $\epsilon$ still carry dimensions of, say, ergs/cm${}^{3}$. Therefore, let us define dimensionless energy density and pressure, $\bar{\epsilon}$ and $\bar{p}$, by $$p=\epsilon_{0}\bar{p}\,,\quad\epsilon=\epsilon_{0}\bar{\epsilon}\,$$ (19) where $\epsilon_{0}$ has dimensions of energy density. This $\epsilon_{0}$ is not the same as defined in Eq. (11). Its numerical choice here is arbitrary, and a suitable strategy is to make that choice based on the dimensionful numbers that define the problem at hand. We’ll employ this strategy to fix it below. For a polytrope, we can write $$\bar{p}=\bar{K}\bar{\epsilon}^{\,\gamma}\,,\quad{\rm where}\ \ \bar{K}=K\epsilon_{0}^{\gamma-1}\,{\rm\ is\ dimensionless.}$$ (20) It is easier to solve Eq.
(18) for $\bar{p}$, so we should express $\bar{\epsilon}$ in terms of it, $$\bar{\epsilon}=(\bar{p}/\bar{K})^{1/\gamma}\,.$$ (21) Equation (18) can now be recast in the form $$\frac{d\bar{p}(r)}{dr}=-\frac{\alpha\,\bar{p}(r)^{1/\gamma}\bar{\cal M}(r)}{r^{2}}\,,$$ (22) where the constant $\alpha$ is $$\alpha=R_{0}/\bar{K}^{1/\gamma}=R_{0}/(K\epsilon_{0}^{\gamma-1})^{1/\gamma}\,.$$ (23) Equation (22) has dimensions of 1/km, with $\alpha$ in km (since $R_{0}$ is). That is, it is to be integrated with respect to $r$, with $r$ also in km. We can choose any convenient value for $\alpha$ since $\epsilon_{0}$ is still free. For a given value of $\alpha$, $\epsilon_{0}$ is then fixed at $$\epsilon_{0}=\left[\frac{1}{K}\left(\frac{R_{0}}{\alpha}\right)^{\gamma}\right]^{1/(\gamma-1)}\,.$$ (24) We also need to cast the other coupled equation, Eq. (2), in terms of dimensionless $\bar{p}$ and $\bar{\cal M}$, $$\frac{d\bar{\cal M}(r)}{dr}=\beta r^{2}\bar{p}(r)^{1/\gamma}\,,$$ (25) where notCoul $$\beta=\frac{4\pi\epsilon_{0}}{{\rm M}_{\odot}c^{2}\bar{K}^{1/\gamma}}=\frac{4\pi\epsilon_{0}}{{\rm M}_{\odot}c^{2}(K\epsilon_{0}^{\gamma-1})^{1/\gamma}}\,.$$ (26) Equation (25) also carries dimensions of 1/km, the constant $\beta$ having dimensions 1/km${}^{3}$. Note that, in integrating out from $r=0$, the initial value of $\bar{\cal M}(0)=0$. III.4 Integrating the Polytrope Numerically Our task is to integrate the coupled first-order differential equations (DE), Eqs. (22) and (25), out from the origin, $r=0$, to the point $R$ where the pressure falls to zero, $\bar{p}(R)=0$ monofall . To do this we need two initial values, $\bar{p}(0)$ (which must be positive) and $\bar{\cal M}(0)$ (which we already know must be 0). The star’s radius, $R$, and its mass $M=\bar{\cal M}(R)$ in units of ${\rm M}_{\odot}$ will vary, depending on the choice for $\bar{p}(0)$. For purposes of numerical stability in solving Eqs.
(22) and (25), we want the constants $\alpha$ and $\beta$ to be not much different from each other (and probably not much different from 1). We will see below that this can be arranged for both of the two polytropic EoS’s discussed above for white dwarfs. Our coupled DE’s are quite non-linear. In fact, because of the $\bar{p}^{1/\gamma}$ factors, the solution will become complex when $\bar{p}(r)<0$, i.e., when $r>R$. Thus we will want to recognize when this happens. How can this be programmed? Mathematica and similar symbolic/numerical packages have built-in first-order DE solvers. Perhaps the solver is as simple as a fixed, equal-step Runge-Kutta routine (as in MathCad 7 Standard), but there are often more sophisticated solvers in more recent versions. These packages also allow for program control constructs such as do-loops, whiles and the like. Thus, consider a do-loop on a variable $\bar{r}$ running in appropriately small steps over a range that is sure to contain the expected value of $R$. Call the DE solver inside this loop, integrating the coupled DE’s from $r=0$ to $\bar{r}$. When the solver routine exits, check to see if the last value of $\bar{p}$, i.e., $\bar{p}(\bar{r})$, has a real part which has gone negative. If so, then break out of the loop, calling $R=\bar{r}$. If not, go on to the next larger value of $\bar{r}$ and call the DE solver again. More discussion of how to program the integration of the DE’s is inappropriate here, since we want the student to learn, by programming it, how the symbolic/numerical package is used. III.5 The Relativistic Case $k_{F}\gg m_{e}$ This is the regime for white dwarfs with the largest mass. A larger mass needs a greater central pressure to support it. However, large central pressures mean the squeezed electrons become relativistic.
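The loop-and-check logic described above does not actually require a symbolic package; a minimal fixed-step Runge-Kutta sketch in Python (our own names and step sizes; the pressure is clamped at zero inside the derivative so the fractional power never produces a complex number, and the loop exits once $\bar{p}$ turns negative):

```python
def integrate_star(p0, alpha, beta, gamma, dr=5.0, r_max=40000.0):
    """Integrate Eqs. (22) and (25) outward by fixed-step RK4 until the
    pressure drops to zero.  Returns (R [km], M [solar masses]).
    p0 is the central pressure p_bar(0); alpha, beta, gamma as in the text."""
    inv_g = 1.0 / gamma

    def rhs(r, p, m):
        pg = max(p, 0.0) ** inv_g      # clamp: avoid complex p**(1/gamma)
        return (-alpha * pg * m / r**2, beta * r**2 * pg)

    r, p, m = 1e-6, p0, 0.0            # start just off the origin, M(0) = 0
    while r < r_max and p > 0.0:
        dp1, dm1 = rhs(r, p, m)
        dp2, dm2 = rhs(r + dr/2, p + dr/2*dp1, m + dr/2*dm1)
        dp3, dm3 = rhs(r + dr/2, p + dr/2*dp2, m + dr/2*dm2)
        dp4, dm4 = rhs(r + dr,   p + dr*dp3,   m + dr*dm3)
        p += dr/6 * (dp1 + 2*dp2 + 2*dp3 + dp4)
        m += dr/6 * (dm1 + 2*dm2 + 2*dm3 + dm4)
        r += dr
    return r, m
```

Fed the $\gamma=4/3$ constants quoted in this subsection, central pressures of $10^{-15}$ and $10^{-16}$ give stars of quite different radii but nearly identical masses, anticipating the discussion of Table 1 below.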
Recall that the polytrope exponent $\gamma=4/3$ for this case and the equation of state is given by $p=K_{\rm rel}\epsilon^{\gamma}$ with $K_{\rm rel}$ given by Eq. (16). After some trial and error, we chose in our program (the student may want to try something else) $$\alpha=R_{0}=1.473\ {\rm km}\quad\quad\quad[k_{F}\gg m_{e}]\,,$$ (27) which in turn fixes, from Eq. (24), $$\epsilon_{0}=7.463\times 10^{39}{\rm ergs/cm}^{3}=4.17\ {\rm M}_{\odot}c^{2}/{\rm km}^{3}\quad\quad\quad[k_{F}\gg m_{e}]\,.$$ (28) The first question the student should ask, in checking this number, is whether such a large number is physically reasonable. Continuing with the $k_{F}\gg m_{e}$ numerics, Eqs. (16) and (26) give $$\beta=52.46\,/{\rm km}^{3}\quad\quad\quad[k_{F}\gg m_{e}]\,,$$ (29) which is about 30 times larger than $\alpha$, but probably manageable from the standpoint of performing the numerical integration. In our first attempt to integrate the coupled DE’s for this case (using a do-loop as described above) we chose $\bar{p}(0)=1.0$. This gives us a white dwarf of radius $R\approx 2$ km, which is minuscule compared with the expected radius of $\approx 10^{4}$ km! Why? What went wrong? The student who also makes this kind of mistake will eventually realize that our choice of scale, $\epsilon_{0}=4.17{\rm M}_{\odot}c^{2}/{\rm km}^{3}$, represents a huge energy density. One can simply estimate the average energy density of a star with a $10^{4}$ km radius and a mass of one solar mass by the ratio of its rest mass energy to its volume, $$\left<\epsilon\right>\approx\frac{{\rm M}_{\odot}c^{2}}{R^{3}}=10^{-12}\ {\rm M}_{\odot}c^{2}/{\rm km}^{3}\,,$$ (30) which is much, much smaller than the $\epsilon_{0}$ here. In addition, the pressure $p$ is about 2000 times smaller than the energy density $\epsilon$ (see Fig. 2). Thus, choosing a starting value of $\bar{p}(0)\sim 10^{-15}$ would probably be more physical. Doing so does give much more reasonable results.
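The quoted numbers can be checked directly from Eqs. (16), (24) and (26); a sketch with standard CGS constants, taking $A/Z=2.15$ (iron) as quoted earlier — with carbon's $A/Z=2.00$ the derived $\epsilon_{0}$ and $\beta$ come out noticeably different:

```python
import math

# Verify Eqs. (27)-(29) from Eqs. (16), (24) and (26).  CGS throughout.
hbar, c = 1.0546e-27, 2.998e10        # erg s, cm/s
m_N     = 1.6726e-24                  # g
Msun_c2 = 1.989e33 * c**2             # erg
R0      = 1.473                       # km
A_over_Z, gamma, alpha = 2.15, 4.0 / 3.0, 1.473   # alpha = R0, Eq. (27)

# Eq. (16)
K_rel = (hbar * c / (12.0 * math.pi**2)) \
        * (3.0 * math.pi**2 / (A_over_Z * m_N * c**2)) ** gamma
# Eq. (24); with alpha = R0 this reduces to K_rel**(-3).  erg/cm^3
eps0 = ((1.0 / K_rel) * (R0 / alpha) ** gamma) ** (1.0 / (gamma - 1.0))
# Eq. (26), with eps0 converted to erg/km^3; result in 1/km^3
beta = 4.0 * math.pi * (eps0 * 1e15) * alpha / (Msun_c2 * R0)
```

Both $\epsilon_{0}$ and $\beta$ should land within a couple of percent of the values in Eqs. (28) and (29).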
Table 1 shows our program’s results for $R$ and $M$ and how they depend on $\bar{p}(0)$. The surprise here is that, within the numerical error expected, all these cases have the same mass! Increasing the central pressure doesn’t allow the star to be more massive, just more compact. It turns out that this result is correct, i.e., that the white dwarf mass is independent of the choice of the central pressure. It is not easy to see this, however, from the numerical integration we have done here. The discussion in terms of Lane-Emden functions LaneEmden shows why, though the mathematics here might be a bit steep for many undergraduates. For this reason, we quote without proof the analytic results. For the case of a polytropic equation of state $p=K\epsilon^{\gamma}$, the mass $$M=4\pi\epsilon^{3(\gamma-\frac{4}{3})/2}~{}\left(\frac{K\gamma}{4\pi G(\gamma-1)}\right)^{3/2}~{}\zeta_{1}^{2}~{}|\theta^{\prime}(\zeta_{1})|\,,$$ (31) and the radius $$R=\sqrt{\frac{K\gamma}{4\pi G(\gamma-1)}}~{}\zeta_{1}~{}\epsilon^{(\gamma-2)/2}\,.$$ (32) In these solutions, $\epsilon$ is the central energy density, and $\zeta_{1}$ and $\theta^{\prime}(\zeta_{1})$ are numerical constants that depend on the polytropic index $\gamma$. By examining Eq.  (31), we see that for $\gamma=4/3$ the mass is independent of the central energy density, and hence also the central pressure $p_{0}$. Also, note that from Eq.  (32), the radius decreases with increasing central pressure as $R\propto p_{0}^{(\gamma-2)/2\gamma}=p_{0}^{-1/4}$. In any case, the student should notice this point and use it as a check of the numerical results obtained. Figure 3 shows the dependence of $\bar{p}(r)$ and $\bar{\cal M}(r)$ on distance for the case $\bar{p}(0)=10^{-16}$. It is interesting that $\bar{p}(r)$ becomes small and essentially flat around 8000 km before finally going through zero at $R=15,080$ km. The results and graphs shown here were generated with a Mathematica 4.0 program, but we were able to reproduce them using MathCad 7 Standard.
In that case, however, programming a loop is difficult, so we searched by hand for the endpoint (where the real part of $\bar{p}(r)$ goes negative). More recent versions of MathCad have more complete program constructs, such as while-loops, so this process could undoubtedly be automated. (Alternatively, the student might try to solve for a root of ${\bar{p}}(r)=0$.) III.6 The Non-Relativistic Case, $k_{F}\ll m_{e}$ Eventually, as the central pressure $\bar{p}(0)$ gets smaller, the electron gas is no longer relativistic. Also as the pressure gets smaller, it can support less mass. This moves us in the direction of the less massive white dwarfs, and, as it turns out, these dwarfs are larger (in radius) than the ones in the last section. In the extreme case, when $k_{F}\ll m_{e}$, we can integrate the structure equations for the other polytropic EoS, where $\gamma=5/3$. The programming for this is very much the same as in the 4/3 case, but the numbers involved are quite different (as are the results). Inserting the values of the physical constants in Eq. (17), we find $$K_{\rm non-rel}=3.309\times 10^{-23}\ \frac{{\rm cm}^{2}}{{\rm ergs}^{2/3}}\,.$$ (33) This time, however, and after some experimentation, we chose the constant $$\alpha=0.05\ {\rm km}\quad\quad\quad[k_{F}\ll m_{e}]\,,$$ (34) which then fixes $$\epsilon_{0}=2.488\times 10^{37}{\rm ergs/cm}^{3}=0.01392\ {\rm M}_{\odot}c^{2}/{\rm km}^{3}\quad\quad\quad[k_{F}\ll m_{e}]\,.$$ (35) Note that this $\epsilon_{0}$ is much smaller than our choice for the relativistic case. The other constant we need, from Eq. (26), is $$\beta=0.005924\ /{\rm km}^{3}\quad\quad\quad[k_{F}\ll m_{e}]\,,$$ (36) which, unlike the relativistic case, is not larger than $\alpha$ but smaller. When we first ran our Mathematica code for this case, we (inadvertently) tried a value of $\bar{p}(0)=10^{-12}$. This gave us a star with radius $R$ = 5310 km and mass $M$ = 3.131.
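The same kind of check works for the non-relativistic constants, now starting from Eq. (17) with $\gamma=5/3$ (again a sketch; $A/Z=2.15$ assumed as before):

```python
import math

# Verify Eqs. (33), (35) and (36).  CGS constants throughout.
hbar, c  = 1.0546e-27, 2.998e10    # erg s, cm/s
m_e, m_N = 9.109e-28, 1.6726e-24   # g
Msun_c2  = 1.989e33 * c**2         # erg
R0       = 1.473                   # km
A_over_Z, gamma = 2.15, 5.0 / 3.0

# Eq. (17): K_non-rel in cm^2/erg^(2/3)
K_nr = (hbar**2 / (15.0 * math.pi**2 * m_e)) \
       * (3.0 * math.pi**2 / (A_over_Z * m_N * c**2)) ** gamma

alpha = 0.05   # km, the choice of Eq. (34)
# Eq. (24): erg/cm^3
eps0 = ((1.0 / K_nr) * (R0 / alpha) ** gamma) ** (1.0 / (gamma - 1.0))
# Eq. (26): 1/km^3
beta = 4.0 * math.pi * (eps0 * 1e15) * alpha / (Msun_c2 * R0)
```

All three numbers should come out within a couple of percent of the quoted values.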
Oops! That mass is bigger than the largest mass of 1.243 that we found for the relativistic EoS! What did we do wrong? What happened (and the student can set up her program so this trap can be avoided) is that the choice $\bar{p}(0)=10^{-12}$ violates the assumption that $k_{F}\ll m_{e}$. One really needs values for $\bar{p}(0)<4\times 10^{-15}$. This says, in fact, that the relativistic $\bar{p}(0)=10^{-16}$ case that we plotted in Fig. 3 is not really relativistic. Results for the non-relativistic case for the last two values of $\bar{p}(0)$ in Table 1 are shown in Table 2. It is quite instructive to compare the differences in the two tables. The masses are, of course, smaller, as expected, and now they vary with $\bar{p}(0)$. Somewhat surprising is that the non-relativistic radius is bigger for $\bar{p}(0)=10^{-15}$ but smaller for $\bar{p}(0)=10^{-16}$. Figure 4 shows the pressure distribution for the latter case, to be compared with the corresponding graph in Fig. 3. Note that this pressure curve does not have the peculiar, long flat tail found using the relativistic EoS. In fact, by this time the student should have realized that neither of these polytropes is very physical, at least not for all cases. The non-relativistic assumption certainly does not work for central pressures $\bar{p}(0)>10^{-14}$, i.e., for the more massive (and more common) white dwarfs. On the other hand, the relativistic EoS certainly should not work when the pressure becomes small, i.e., in the outer regions of the star (where it eventually goes to zero at the star’s radius). So, can one find an EoS to cover the whole range of pressures? We haven’t actually done this for white dwarfs, but the program would be much like that discussed below for the full neutron star. Given the transcendental expressions for energy and pressure that generate the curve shown in Fig.
2, it should be possible to find a fit (using, for example, the built-in fitting function of Mathematica) like $$\bar{\epsilon}(\bar{p})=A_{\rm NR}\bar{p}^{3/5}+A_{\rm R}\bar{p}^{3/4}\,.$$ (37) The second term dominates at high pressures (the relativistic case), but the first term takes over for low pressures when the $k_{F}\gg m_{e}$ assumption does not hold. (Setting the two terms equal and solving for $\bar{p}$, as Chandrasekhar and Fowler did, gives the value of $\bar{p}$ when special relativity starts to be important.) This expression for $\bar{\epsilon}(\bar{p})$ could then be used in place of the $\bar{p}^{1/\gamma}$ factors on the right hand sides of the structure equations. Proceed to solve numerically as before. We leave this as an exercise for the interested student. IV Pure Neutron Star, Fermi Gas EoS Having warmed up by now, the student can tackle neutron stars. Here one must include the general relativistic (GR) contributions represented by the three dimensionless factors in the TOV equation, Eq. (5). One of the first things that comes to mind is how one deals numerically with the (apparent) singularities in these factors at $r=0$ hint . Also, as in the case of the white dwarfs, there is a question of what to use for the EoS. In this section we show what can be done for pure neutron stars, once again using a Fermi gas model for, now, a neutron gas instead of an electron gas. Such a model, however, is unrealistic for two reasons. First, a real neutron star consists not just of neutrons but contains a small fraction of protons and electrons (to inhibit the neutrons from decaying into protons and electrons by their weak interactions). Second, the Fermi gas model ignores the strong nucleon-nucleon interactions, which give important contributions to the energy density. Each of these points will be dealt with in sections below.
IV.1 The Non-Relativistic Case, $k_{F}\ll m_{n}$ For a pure neutron star Fermi gas EoS one can proceed much as in the white dwarf case, substituting the neutron mass $m_{n}$ for the electron mass $m_{e}$ in the equations found in Sec. 3. When $k_{F}\ll m_{n}$ one finds, again, a polytrope with $\gamma=5/3$. (More exercises for the student.) The $K$ corresponding to that in Eq. (17) is $$K_{\rm non-rel}=\frac{\hbar^{2}}{15\pi^{2}m_{n}}\left(\frac{3\pi^{2}Z}{Am_{n}c^{2}}\right)^{5/3}=6.483\times 10^{-26}\ \frac{{\rm cm}^{2}}{{\rm ergs}^{2/3}}\,.$$ (38) This time, choosing $\alpha=1$ km, one finds the scaling factor from Eq. (24) to be $$\epsilon_{0}=1.603\times 10^{38}\ {\rm ergs/cm}^{3}=0.08969\ {\rm M}_{\odot}c^{2}/{\rm km}^{3}\,.$$ (39) Further, from Eqs. (20) and (26), $$\bar{K}=1.914\quad{\rm and}\ \beta=0.7636\ /{\rm km}^{3}\,.$$ (40) Note that, in this case, the constants $\alpha$ and $\beta$ are of similar size. Making an estimate of the average energy density of a typical neutron star (mass = ${\rm M}_{\odot}$, $R$ = 10 km), one expects a good starting value for the central pressure $\bar{p}(0)$ to be of order $10^{-4}$ or less. Our program for this situation is essentially the same as the one for non-relativistic white dwarfs but with appropriate changes of the distance scale. It gives the results shown in Table 3. Note that the GR effects are small, but not negligible, for this non-relativistic EoS. As in the white dwarf case, these are the smaller mass stars. One sees that as the mass gets smaller, the gravitational attraction is less and thus the star extends out to larger radii. IV.2 The Relativistic Case, $k_{F}\gg m_{n}$ Here there is again a polytropic EoS, but with $\gamma=1$. In fact, $p=\epsilon/3$, a well-known result for a relativistic gas. The conversion to dimensionless quantities becomes very simple in this case with relationships like $K=\bar{K}=1/3$.
It is still useful to factor out an $\epsilon_{0}$, which in our program we took to have a value $1.6\times 10^{38}$ erg/cm${}^{3}$, as suggested by the value in the previous sub-section. Then, if we choose this time $$\alpha=3R_{0}=4.428\ {\rm km}$$ (41) we find $$\beta=3.374\ /{\rm km}^{3}\,.$$ (42) We expect central pressures $\bar{p}(0)$ in this case to be greater than $10^{-4}$. Other than these changes, we wrote a similar program to the one above, taking care to avoid exponents like $1/(\gamma-1)$. Running that code gives, at first glance, enormous radii, values of $R$ greater than 50 km! We can imagine the student looking frantically for a program bug that isn’t there. In fact, what really happens is that, for this EoS, the loop on $\bar{r}$ runs through its whole range, since the pressure $\bar{p}(r)$ never passes through zero. (A plot of $\bar{p}(r)$ looks quite similar, but for distance scale, to that shown in Fig. 3, where $\gamma=4/3$.) It only falls monotonically toward zero, getting ever smaller. By the time the student recognizes this, she will probably also have realized that the relativistic gas EoS is inappropriate for such small pressures. Something better should be done (as in the next sub-section). It turns out that the structure equations for $\gamma=1$ are sufficiently simple that an analytic solution for $p(r)$ can be found, which corroborates the above remarks about not having a zero at a finite $R$. A suggestion for the student is to try a power-law Ansatz. IV.3 The Fermi Gas EoS for Arbitrary Relativity In order to avoid the trap of the relativistic gas, one should find an EoS for the non-interacting neutron Fermi gas which works for all values of the relativity parameter $x=k_{F}/m_{n}c$.
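Returning for a moment to the power-law Ansatz suggested above for the $\gamma=1$ case: trying $\bar{p}(r)=a/r^{2}$ in Eq. (25) gives $$\frac{d\bar{\cal M}}{dr}=\beta r^{2}\bar{p}=\beta a\,,\quad{\rm so}\ \ \bar{\cal M}(r)=\beta ar\,,$$ and substituting both into Eq. (22) (where $\bar{p}^{1/\gamma}=\bar{p}$) gives $$-\frac{2a}{r^{3}}=-\frac{\alpha\,(a/r^{2})(\beta ar)}{r^{2}}=-\frac{\alpha\beta a^{2}}{r^{3}}\,,$$ which is satisfied for $a=2/(\alpha\beta)$. The solution $\bar{p}(r)=2/(\alpha\beta r^{2})$ is positive for all $r$ and never crosses zero, corroborating what the numerics show.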
Taking a hint from the two polytropes, one can try to fit the energy density as a function of pressure, each given as a transcendental function of $k_{F}$, with the form $$\bar{\epsilon}(\bar{p})=A_{\rm NR}\bar{p}^{3/5}+A_{\rm R}\bar{p}\,.$$ (43) For low pressures the non-relativistic first term dominates over the second. (The power in the relativistic term is changed from that in Eq. (37).) It is again useful to factor out an $\epsilon_{0}$ from both $\epsilon$ and $p$. In this case, it is more natural to define it as $$\epsilon_{0}=\frac{m_{n}^{4}c^{5}}{3\pi^{2}\hbar^{3}}=5.346\times 10^{36}\ \frac{{\rm ergs}}{{\rm cm}^{3}}=0.003006\ \frac{{\rm M}_{\odot}c^{2}}{{\rm km}^{3}}\,.$$ (44) Mathematica can easily create a table of exact $\bar{\epsilon}$ and $\bar{p}$ values as a function of $k_{F}$. The dimensionless $A$-values can then be fit using its built-in fitting function. From our efforts we found, to an accuracy of better than 1% over most of the range of $k_{F}$ lowkF , $$A_{\rm NR}=2.4216\ ,\quad A_{\rm R}=2.8663\,.$$ (45) We used the fitted functional form for $\bar{\epsilon}$ of Eq. (43) in a Mathematica program similar to that for the neutron star based on the non-relativistic EoS. With the $\epsilon_{0}$ of Eq. (44) and choosing $\alpha=R_{0}=1.476$ km, we obtain $\beta=0.03778$. Our results for a starting value of $\bar{p}(0)=0.01$, clearly in the relativistic regime, are $$\displaystyle R$$ $$\displaystyle=$$ $$\displaystyle 15.0\ ,\quad M=1.037\,,\quad{\rm Newtonian\ equations}$$ (46) $$\displaystyle R$$ $$\displaystyle=$$ $$\displaystyle 13.4\ ,\quad M=0.717\,,\quad{\rm full\ TOV\ equation}\,.$$ (47) For this more massive star, the GR effects are significant (as should be expected from the size of $GM/c^{2}R$, about 10% in this case). Figure 5 displays the differences. It is now instructive to make a long run of calculations for a range of $\bar{p}(0)$ values. We display in Fig.
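Because Eq. (43) is linear in $A_{\rm NR}$ and $A_{\rm R}$, the fit does not even need a fitting library: it reduces to solving $2\times 2$ normal equations. A sketch (our own sampling range and relative weighting, so the coefficients land near, but not exactly on, the values in Eq. (45)):

```python
import math

def eps_bar(x):
    """Exact dimensionless neutron energy density, in units of the eps0
    of Eq. (44); x = kF/(m_n c)."""
    s = math.sqrt(1.0 + x * x)
    return 0.375 * ((2 * x**3 + x) * s - math.asinh(x))

def p_bar(x):
    """Exact dimensionless neutron pressure, same units."""
    s = math.sqrt(1.0 + x * x)
    return 0.125 * ((2 * x**3 - 3 * x) * s + 3 * math.asinh(x))

# Sample the exact EoS, then least-squares fit eps = A_NR p^(3/5) + A_R p.
# Each point is weighted by 1/eps so small and large pressures count equally.
xs = [0.1 * i for i in range(2, 41)]           # x from 0.2 to 4.0
pts = [(p_bar(x), eps_bar(x)) for x in xs]
rows = [((p**0.6) / e, p / e, 1.0) for p, e in pts]
S11 = sum(f1 * f1 for f1, f2, y in rows)
S12 = sum(f1 * f2 for f1, f2, y in rows)
S22 = sum(f2 * f2 for f1, f2, y in rows)
b1 = sum(f1 * y for f1, f2, y in rows)
b2 = sum(f2 * y for f1, f2, y in rows)
det = S11 * S22 - S12 * S12
A_NR = (b1 * S22 - b2 * S12) / det
A_R = (b2 * S11 - b1 * S12) / det
```

The two limits bracket the answer: for $x\to 0$ one finds $\bar{\epsilon}\to 5^{3/5}\bar{p}^{3/5}\approx 2.63\,\bar{p}^{3/5}$, while for $x\to\infty$, $\bar{\epsilon}\to 3\bar{p}$, so coefficients of roughly this size are expected.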
6 a (parametric) plot of $M$ and $R$ as they depend on the central pressure. The low-mass/large-radius stars are to the right in the graph and correspond to small starting values of $\bar{p}(0)$. As the central pressure increases, the total mass that the star can support increases. And, the bigger the star mass, the bigger the gravitational attraction which draws in the periphery of the star, making stars with smaller radii. That is, increasing $\bar{p}(0)$ corresponds to “climbing the hill,” moving upward and to the left in the diagram. At about $\bar{p}(0)$ = 0.03 one gets to the top of the hill, achieving a maximum mass of about 0.8 ${\rm M}_{\odot}$ at a radius $R\approx 11$ km. That maximum $M$ and its $R$ agree with Oppenheimer and Volkoff’s seminal 1939 result for a Fermi gas EoS. What about the solutions in Fig. 6 that are “over the hill,” i.e., to the left of the maximum? It turns out that these stars are unstable against gravitational collapse into a black hole. The question of stability, however, is a complicated issue WbrgStable , perhaps too difficult for a student at this level. The fact that things collapse to the left of the maximum, however, means that one probably shouldn’t worry too much about the peculiar curlicue tail to the $M$-$R$ curve in the figure. It appears to be an artifact for very large values of $\bar{p}(0)$, also seen in other calculations, even though it is especially prominent for this Fermi gas EoS. IV.4 Why Is There a Maximum Mass? On general grounds one can argue that cold compact objects such as white dwarfs and neutron stars must possess a limiting mass beyond which stable hydrostatic configurations are not possible. This limiting mass is often called the maximum mass of the object and was briefly mentioned in the discussion at the end of Sec. 2.2 and that relating to Fig. 6. In what follows, we outline the general argument. The thermal component of the pressure in cold stars is by definition negligible.
Thus, variations in both the energy density and pressure are only caused by changes in the density. Given this simple observation, let us examine why we expect a maximum mass in the Newtonian case. Here, an increase in the density results in a proportional increase in the energy density. This results in a corresponding increase in the gravitational attraction. To balance this, we require that the increment in pressure is large enough. However, the rate of change of pressure with respect to energy density is related to the speed of sound (see Sec. 6.3). In a purely Newtonian world, this is in principle unbounded. However, the speed of all propagating signals cannot exceed the speed of light. This then puts a bound on the pressure increment associated with changes in density. Once we accept this bound, we can safely conclude that all cold compact objects will eventually run into the situation in which any increase in density will result in an additional gravitational attraction that cannot be compensated for by the corresponding increment in pressure. This leads naturally to the existence of a limiting mass for the star. When we include general relativistic corrections, as discussed in Sec. 2.2 earlier, they act to “amplify” gravity. Thus we can expect the maximum mass to occur at a somewhat lower mass than in the Newtonian case. V Neutron Stars with Protons and Electrons, Fermi Gas EoS As mentioned at the beginning of the last section, neutron stars are not made only of neutrons. There must also be a small fraction of protons and electrons present. The reason for this is that a free neutron will undergo a weak decay, $$n\rightarrow p+e^{-}+\bar{\nu}_{e}\ ,$$ (48) with a lifetime of about 15 minutes. So, there must be something that prevents this decay in the case of the star, and that is the presence of the protons and electrons. 
The decay products have low energies ($m_{n}-m_{p}-m_{e}$ = 0.782 MeV), with most of that energy being carried away by the light electron and (nearly massless) neutrino nucooling . If all the available low-energy levels for the emitted proton are already filled by the protons already present, then the Pauli exclusion principle takes over and prevents the decay from taking place. The same might be said about the presence of the electrons, but in any case the electrons must be present within the star to cancel the positive charge of the protons. A neutron star is electrically neutral. We saw earlier that the number density of a particle species is fixed in terms of that particle’s Fermi momentum [see Eq. (7)]. Thus equal numbers of electrons and protons imply that $$k_{F,p}=k_{F,e}\,.$$ (49) In addition to charge neutrality, we also require weak interaction equilibrium, i.e., as many neutron decays [Eq. (48)] taking place as electron capture reactions, $p+e^{-}\rightarrow n+\nu_{e}$. This equilibrium can be expressed in terms of the chemical potentials for the three particle species, $$\mu_{n}=\mu_{p}+\mu_{e}\,.$$ (50) We already defined the chemical potential of a particle species in Sec. 3.2 after Eq. (13), $$\mu_{i}(k_{F,i})=\frac{d\epsilon}{dn}=(k_{F,i}^{2}+m_{i}^{2})^{1/2}\,,\quad i=n,p,e\,,$$ (51) where, for the time being, we have set $c=1$ to simplify the equations somewhat. (The student is urged to prove the right-hand equality.) From Eqs.
(49), (50), and (51) we can find a constraint determining $k_{F,p}$ for a given $k_{F,n}$, $$(k_{F,n}^{2}+m_{n}^{2})^{1/2}-(k_{F,p}^{2}+m_{p}^{2})^{1/2}-(k_{F,p}^{2}+m_{e}^{2})^{1/2}=0\,.$$ (52) While an ambitious algebraist can probably solve this equation for $k_{F,p}$ as a function of $k_{F,n}$, we were somewhat lazy and let Mathematica do it, finding $$\displaystyle k_{F,p}(k_{F,n})$$ $$\displaystyle=$$ $$\displaystyle\frac{[(k_{F,n}^{2}+m_{n}^{2}-m_{e}^{2})^{2}-2m_{p}^{2}(k_{F,n}^{2}+m_{n}^{2}+m_{e}^{2})+m_{p}^{4}]^{1/2}}{2(k_{F,n}^{2}+m_{n}^{2})^{1/2}}$$ (53) $$\displaystyle\approx$$ $$\displaystyle\frac{k_{F,n}^{2}+m_{n}^{2}-m_{p}^{2}}{2(k_{F,n}^{2}+m_{n}^{2})^{1/2}}\ {\rm\ \ for\ }\frac{m_{e}}{k_{F,n}}\rightarrow 0\,.$$ (54) The total energy density is the sum of the individual energy densities, $$\epsilon_{\rm tot}=\sum_{i=n,p,e}\epsilon_{i}\,,$$ (55) where $$\epsilon_{i}(k_{F,i})=\frac{1}{\pi^{2}\hbar^{3}}\int_{0}^{k_{F,i}}(k^{2}+m_{i}^{2})^{1/2}k^{2}dk=\epsilon_{0}\,\bar{\epsilon}_{i}(x_{i},y_{i})\,,$$ (56) and, as before putbackcs , $$\displaystyle\epsilon_{0}$$ $$\displaystyle=$$ $$\displaystyle m_{n}^{4}/3\pi^{2}\hbar^{3}\,,$$ (57) $$\displaystyle\bar{\epsilon}_{i}(x_{i},y_{i})$$ $$\displaystyle=$$ $$\displaystyle 3\int_{0}^{x_{i}}(u^{2}+y_{i}^{2})^{1/2}u^{2}du\,,$$ (58) $$\displaystyle x_{i}$$ $$\displaystyle=$$ $$\displaystyle k_{F,i}/m_{n}\,,\ y_{i}=m_{i}/m_{n}\,.$$ (59) The corresponding total pressure is $$\displaystyle p_{\rm tot}$$ $$\displaystyle=$$ $$\displaystyle\sum_{i=n,p,e}p_{i}\,,$$ (60) $$\displaystyle p_{i}(k_{F,i})$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{3\pi^{2}\hbar^{3}}\int_{0}^{k_{F,i}}(k^{2}+m_{i}^{2})^{-1/2}k^{4}dk=\epsilon_{0}\,\bar{p}_{i}(x_{i},y_{i})\,,$$ (61) $$\displaystyle\bar{p}_{i}(x_{i},y_{i})$$ $$\displaystyle=$$ $$\displaystyle\int_{0}^{x_{i}}(u^{2}+y_{i}^{2})^{-1/2}u^{4}du\,.$$ (62) Using Mathematica the (dimensionless) integrals can be expressed in terms of log and $\sinh^{-1}$ functions of $x_{i}$ and $y_{i}$.
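Equation (53) is easy to check numerically: plugging the closed form back into Eq. (52), the residual should vanish to machine precision. A sketch (masses in MeV with $c=1$, as in the text; function names ours):

```python
import math

# Particle masses in MeV (c = 1).
m_n, m_p, m_e = 939.565, 938.272, 0.511

def kFp_of_kFn(kFn):
    """Proton (= electron) Fermi momentum from Eq. (53), in MeV."""
    En2 = kFn**2 + m_n**2
    num = (En2 - m_e**2)**2 - 2.0 * m_p**2 * (En2 + m_e**2) + m_p**4
    return math.sqrt(num) / (2.0 * math.sqrt(En2))

def equilibrium_residual(kFn):
    """mu_n - mu_p - mu_e of Eq. (52); should vanish at the kFp above."""
    kFp = kFp_of_kFn(kFn)
    return (math.sqrt(kFn**2 + m_n**2)
            - math.sqrt(kFp**2 + m_p**2)
            - math.sqrt(kFp**2 + m_e**2))
```

Note how small the implied proton fraction is: at $k_{F,n}=300$ MeV the proton Fermi momentum comes out below 50 MeV.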
One can then generate a table of $\bar{\epsilon}_{\rm tot}$ versus $\bar{p}_{\rm tot}$ values for an appropriate range of $k_{F,n}$’s. This, in turn, can be fitted to the same form of two terms as before in Eq. (43). We found, this time, the coefficients to be $$A_{\rm NR}=2.572\ ,\quad A_{\rm R}=2.891\,.$$ (63) These coefficients are not much changed from those in Eq. (45) for the pure neutron star. Therefore, we expect that the $M$ versus $R$ diagram for this more realistic Fermi gas model would not be much different from that in Fig. 6. VI Introducing Nuclear Interactions Nucleon-nucleon interactions can be included in the EoS (they are important) by constructing a simple model for the nuclear potential that reproduces the general features of (normal) nuclear matter. In doing so we were much guided by the lectures of Prakash Prakash . We will use MeV and fm ($10^{-13}$ cm) as our energy and distance units for much of this section, converting back to ${\rm M}_{\odot}$ and km later. We will also continue setting $c=1$ for now. In this regard, the important number to remember for making conversions is $\hbar c=197.3$ MeV-fm. We will also neglect the mass difference between protons and neutrons, labeling their masses as $m_{N}$. The von Weizsäcker mass formula massformula for nuclides with $Z$ protons and $N$ neutrons gives, for normal symmetric nuclear matter ($A=N+Z$ with $N=Z$), an equilibrium number density $n_{0}$ of 0.16 nucleons/fm${}^{3}$. For this value of $n_{0}$ the Fermi momentum is $k_{F}^{0}=263$ MeV/$c$ [see Eq. (7)]. This momentum is small enough compared with $m_{N}=939$ MeV/c${}^{2}$ to allow a non-relativistic treatment of normal nuclear matter. At this density, the average binding energy per nucleon, $BE=E/A-m_{N}$, is $-16$ MeV. These are two physical quantities we definitely want our nuclear potential to respect, but there are two more that we’ll need to fix the parameters of the model.
We chose one of these as the nuclear compressibility, $K_{0}$, to be defined below. This is a quantity which is not all that well established but is in the range of 200 to 400 MeV. The other is the so-called symmetry energy term, which, when $Z=0$, contributes about 30 MeV of energy above the symmetric matter minimum at $n_{0}$. (This quantity might really be better described as an asymmetry parameter, since it accounts for the energy that comes in when $N\neq Z$.) VI.1 Symmetric Nuclear Matter We defer the case when $N\neq Z$, which is our main interest in this paper, to the next sub-section. Here we concentrate on getting a good (enough) EoS for nuclear matter when $N=Z$, or, equivalently, when the proton and neutron number densities are equal, $n_{n}=n_{p}$. The total nucleon density is $n=n_{n}+n_{p}$. We need to relate the first three nuclear quantities, $n_{0}$, $BE$, and $K_{0}$, to the energy density for symmetric nuclear matter, $\epsilon(n)$. Here $n=n(k_{F})$ is the nuclear density (at and away from $n_{0}$). We will not worry in this section about the electrons that are present, since, as was seen in the last section, their contribution is small. The energy density now will include the nuclear potential, $V(n)$, which we will model below in terms of two simple functions with three parameters that are fitted to reproduce the above three nuclear quantities. [The fourth quantity, the symmetry energy, will be used in the next sub-section to fix a term in the potential which is proportional to $(N-Z)/A$.] First, the average energy per nucleon, $E/A$, for symmetric nuclear matter is related to $\epsilon$ by $$E(n)/A=\epsilon(n)/n\,,$$ (64) which includes the rest mass energy $m_{N}$ and has dimensions of MeV. As a function of $n$, $E(n)/A-m_{N}$ has a minimum at $n=n_{0}$ with a depth $BE=-16$ MeV.
This minimum occurs when $$\frac{d}{dn}\left(\frac{E(n)}{A}\right)=\frac{d}{dn}\left(\frac{\epsilon(n)}{n}\right)=0{\ \rm\ at\ }n=n_{0}\,.$$ (65) This is one constraint on the parameters of $V(n)$. Another, of course, is the binding energy, $$\frac{\epsilon(n)}{n}-m_{N}=BE{\ \rm\ at\ }n=n_{0}\,.$$ (66) The positive curvature at the bottom of this valley is related to the nuclear (in)compressibility by [25] $$K(n)=9\frac{dp(n)}{dn}=9\left[n^{2}\frac{d^{2}}{dn^{2}}\left(\frac{\epsilon}{n}\right)+2n\frac{d}{dn}\left(\frac{\epsilon}{n}\right)\right]\,,$$ (67) using Eq. (13), which defines the pressure in terms of the energy density. At $n=n_{0}$ this quantity equals $K_{0}$. (The factor of 9 is a historical artifact from the convention originally defining $K_{0}$.) (Question: why does one not have to calculate the pressure at $n=n_{0}$?) The $N=Z$ potential in $\epsilon(n)$ we will model, following Prakash [6], as $$\frac{\epsilon(n)}{n}=m_{N}+\frac{3}{5}\frac{\hbar^{2}k_{F}^{2}}{2m_{N}}+\frac{A}{2}u+\frac{B}{\sigma+1}u^{\sigma}\,,$$ (68) where $u=n/n_{0}$ and $\sigma$ are dimensionless and $A$ and $B$ have dimensions of MeV. The first term represents the rest mass energy and the second the average kinetic energy per nucleon. [These two terms are leading in the non-relativistic limit of the nucleonic version of Eq. (10).] For $k_{F}(n_{0})=k_{F}^{0}$ we will abbreviate the kinetic energy term as $\left<E_{F}^{0}\right>$, which evaluates to 22.1 MeV. The kinetic energy term in Eq. (68) can then be written more compactly as $\left<E_{F}^{0}\right>u^{2/3}$. From the above three constraints, Eqs.
(65)-(67), and noting that $u=1$ at $n=n_{0}$, we get three equations for the parameters $A$, $B$, and $\sigma$: $$\left<E_{F}^{0}\right>+\frac{A}{2}+\frac{B}{\sigma+1}=BE\,,$$ (69) $$\frac{2}{3}\left<E_{F}^{0}\right>+\frac{A}{2}+\frac{B\sigma}{\sigma+1}=0\,,$$ (70) $$\frac{10}{9}\left<E_{F}^{0}\right>+A+B\sigma=\frac{K_{0}}{9}\,.$$ (71) Solving these equations (which we found easier to do by hand than with Mathematica), one finds $$\sigma=\frac{K_{0}+2\left<E_{F}^{0}\right>}{3\left<E_{F}^{0}\right>-9BE}\,,$$ (72) $$B=\frac{\sigma+1}{\sigma-1}\left[\frac{1}{3}\left<E_{F}^{0}\right>-BE\right]\,,$$ (73) $$A=BE-\frac{5}{3}\left<E_{F}^{0}\right>-B\,.$$ (74) Numerically, for $K_{0}=400$ MeV (which is perhaps a high value), $$A=-122.2{\ \rm MeV},\quad B=65.39{\ \rm MeV},\quad\sigma=2.112\,.$$ (75) Note that $\sigma>1$, a point we will come back to below, since it leads to a violation of a basic principle of physics called “causality.” The student can try other values of $K_{0}$ to see how the parameters $A$, $B$, and $\sigma$ change. More interesting is to see how the interplay between the $A$- and $B$-terms gives the valley at $n=n_{0}$. Figure 7 shows $E/A-m_{N}$ as a function of $n$ using the parameters of Eq. (75). We would hope the student notices the funny little positive bump in this plot near $n=0$ and sorts out the reason for its occurrence. Given $\epsilon(n)$ from Eq. (68) one can find the pressure using Eq. (13), $$p(n)=n^{2}\frac{d}{dn}\left(\frac{\epsilon}{n}\right)=n_{0}\left[\frac{2}{3}\left<E_{F}^{0}\right>u^{5/3}+\frac{A}{2}u^{2}+\frac{B\sigma}{\sigma+1}u^{\sigma+1}\right]\,.$$ (76) For the parameters of Eq. (75) its dependence on $n$ is shown in Fig. 8.
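The algebra in Eqs. (72)–(76) can be checked in a few lines of Python. The snippet below computes $\sigma$, $B$, and $A$ from the three nuclear inputs and then verifies that the valley of $E/A-m_{N}$ has its minimum at $u=1$ with depth $BE$, and that the pressure (76) vanishes there, as it must at the equilibrium density where $d(E/A)/du=0$:

```python
E_F0_avg = 22.1   # <E_F^0> in MeV (3/5 of the Fermi kinetic energy at n0)
BE = -16.0        # binding energy per nucleon, MeV
K0 = 400.0        # (in)compressibility, MeV

# Eqs. (72)-(74)
sigma = (K0 + 2*E_F0_avg) / (3*E_F0_avg - 9*BE)
B = (sigma + 1) / (sigma - 1) * (E_F0_avg/3 - BE)
A = BE - 5.0/3.0 * E_F0_avg - B

def E_per_A(u):
    """E/A - m_N for symmetric matter, Eq. (68), in MeV."""
    return E_F0_avg * u**(2/3) + A/2 * u + B/(sigma + 1) * u**sigma

def pressure_over_n0(u):
    """p/n0 from Eq. (76), in MeV."""
    return 2/3 * E_F0_avg * u**(5/3) + A/2 * u**2 + B*sigma/(sigma + 1) * u**(sigma + 1)

assert abs(sigma - 2.112) < 5e-3            # matches Eq. (75)
assert abs(B - 65.39) < 0.1                 # matches Eq. (75)
assert abs(E_per_A(1.0) - BE) < 1e-6        # valley depth, Eq. (69)
assert E_per_A(0.9) > BE and E_per_A(1.1) > BE   # u = 1 is a local minimum
assert abs(pressure_over_n0(1.0)) < 1e-6    # p(n0) = 0
assert pressure_over_n0(0.5) < 0.0          # negative pressure below n0
```

The constraints (69) and (70) hold to machine precision by construction, since Eqs. (72)–(74) are their exact solution.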
On first seeing this graph, the student should wonder why $p(u=1)=p(n_{0})=0$. And, what is the meaning of the negative values for pressure below $u=1$? (Hint: what is “cavitation”?) So, if this $N=Z$ case were all we had for the nuclear EoS, a plot of $\epsilon(n)$ versus $p(n)$ would only make sense for $n\geq n_{0}$. Such a plot looks much like a parabola opening to the right for the range $0<u<3$. At very large values of $u$, however, $p\approx\epsilon/3$, as it should be for a relativistic nucleon gas (see Section 4.2). We don’t pursue this symmetric nuclear matter EoS further since our interest is in the case when $N\gg Z$ [26]. VI.2 Non-Symmetric Nuclear Matter We continue following Prakash’s notes [6] closely. Let us represent the neutron and proton densities in terms of a parameter $\alpha$ as $$n_{n}=\frac{1+\alpha}{2}\,n\,,\quad n_{p}=\frac{1-\alpha}{2}\,n\,.$$ (77) This $\alpha$ is not to be confused with the constant defined in Eq. (23). For pure neutron matter $\alpha=1$. Note that $$\alpha=\frac{n_{n}-n_{p}}{n}=\frac{N-Z}{A}\,,$$ (78) so we can expect that the isospin-symmetry-breaking interaction is proportional to $\alpha$ (or some power of it). An alternative notation is in terms of the fraction of protons in the star, $$x=\frac{n_{p}}{n}=\frac{1-\alpha}{2}\,.$$ (79) We now consider how the energy density changes from the symmetric case discussed above, where $\alpha=0$ (or $x=1/2$).
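The bookkeeping in Eqs. (77)–(79) is easy to get wrong when switching between $\alpha$ and $x$; a few assertions make the relations concrete:

```python
def densities(n, alpha):
    """Neutron and proton number densities, Eq. (77)."""
    return (1 + alpha)/2 * n, (1 - alpha)/2 * n   # (n_n, n_p)

def asym(n_n, n_p):
    """Asymmetry parameter alpha, Eq. (78)."""
    return (n_n - n_p) / (n_n + n_p)

def proton_fraction(alpha):
    """Proton fraction x, Eq. (79)."""
    return (1 - alpha) / 2

n0 = 0.16  # nucleons/fm^3
for alpha in (0.0, 0.5, 1.0):
    n_n, n_p = densities(n0, alpha)
    assert abs(n_n + n_p - n0) < 1e-12            # the densities add up to n
    assert abs(asym(n_n, n_p) - alpha) < 1e-12    # Eq. (78) inverts Eq. (77)
assert proton_fraction(1.0) == 0.0                # pure neutron matter
assert proton_fraction(0.0) == 0.5                # symmetric matter
```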
First, there are contributions to the kinetic energy part of $\epsilon$ from both neutrons and protons, $$\epsilon_{KE}(n,\alpha)=\frac{3}{5}\frac{\hbar^{2}k_{F,n}^{2}}{2m_{N}}\,n_{n}+\frac{3}{5}\frac{\hbar^{2}k_{F,p}^{2}}{2m_{N}}\,n_{p}=n\left<E_{F}\right>\frac{1}{2}\left[\left(1+\alpha\right)^{5/3}+\left(1-\alpha\right)^{5/3}\right]\,,$$ (80) where $$\left<E_{F}\right>=\frac{3}{5}\frac{\hbar^{2}}{2m_{N}}\left(\frac{3\pi^{2}n}{2}\right)^{2/3}\,$$ (81) is the mean kinetic energy of symmetric nuclear matter at density $n$. For $n=n_{0}$ we note that $\left<E_{F}\right>=\left<E_{F}^{0}\right>$, the kinetic energy term of Eq. (68) at $u=1$; more generally, $\left<E_{F}\right>=\left<E_{F}^{0}\right>u^{2/3}$. For non-symmetric matter, $\alpha\neq 0$, the excess kinetic energy is $$\Delta\epsilon_{KE}(n,\alpha)=\epsilon_{KE}(n,\alpha)-\epsilon_{KE}(n,0)=n\left<E_{F}\right>\left\{\frac{1}{2}\left[\left(1+\alpha\right)^{5/3}+\left(1-\alpha\right)^{5/3}\right]-1\right\}=n\left<E_{F}\right>\left\{2^{2/3}\left[\left(1-x\right)^{5/3}+x^{5/3}\right]-1\right\}\,.$$ (82) For pure neutron matter, $\alpha=1$, $$\Delta\epsilon_{KE}(n,\alpha)=n\left<E_{F}\right>\left(2^{2/3}-1\right)\,.$$ (83) It is also useful to expand to leading order in $\alpha$, $$\Delta\epsilon_{KE}(n,\alpha)=n\left<E_{F}\right>\frac{5}{9}\alpha^{2}\left(1+\frac{\alpha^{2}}{27}+\cdots\right)$$ (84) $$=n\,E_{F}\,\frac{\alpha^{2}}{3}\,\left(1+\frac{\alpha^{2}}{27}+\cdots\right)\,,$$ (85) where $E_{F}=\frac{5}{3}\left<E_{F}\right>$ is the Fermi energy. Keeping terms to order $\alpha^{2}$ is evidently good enough for most purposes. For pure neutron matter, the energy per particle (which, recall, is $\epsilon/n$) at normal density is $\Delta\epsilon_{KE}(n_{0},1)/n_{0}\approx 13$ MeV, more than a third of the total bulk symmetry energy of 30 MeV, our fourth nuclear parameter.
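The quadratic expansion (84) can be checked directly against the exact kinetic-energy factor in Eq. (82):

```python
def exact(alpha):
    """The braced factor in Eq. (82)."""
    return 0.5 * ((1 + alpha)**(5/3) + (1 - alpha)**(5/3)) - 1.0

def series(alpha):
    """Leading terms of the expansion, Eq. (84)."""
    return 5/9 * alpha**2 * (1 + alpha**2/27)

# Pure neutron matter recovers Eq. (83).
assert abs(exact(1.0) - (2**(2/3) - 1)) < 1e-12
# The O(alpha^2) expansion is good enough even at sizable asymmetries.
for a in (0.1, 0.3, 0.6):
    assert abs(series(a) - exact(a)) < 2e-3
# Per-particle excess at n0 for pure neutron matter, with <E_F> = 22.1 MeV there:
assert abs(22.1 * exact(1.0) - 13.0) < 0.2   # the "about 13 MeV" quoted above
```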
Thus the potential energy contribution to the bulk symmetry energy must be 20 MeV or so. Let us assume the quadratic approximation in $\alpha$ also works well enough for this potential contribution and write the total energy per particle as $$E(n,\alpha)=E(n,0)+\alpha^{2}S(n)\,.$$ (86) The isospin-symmetry breaking term is proportional to $\alpha^{2}$, which reflects (roughly) the pair-wise nature of the nuclear interactions. We will assume $S(u)$, $u=n/n_{0}$, has the form $$S(u)=(2^{2/3}-1)\left<E_{F}^{0}\right>\left(u^{2/3}-F(u)\right)+S_{0}F(u)\,.$$ (87) Here $S_{0}=30$ MeV is the bulk symmetry energy parameter. The function $F(u)$ must satisfy $F(1)=1$ [so that $S(u=1)=S_{0}$] and $F(0)=0$ [so that $S(u=0)=0$; no matter means no energy]. Besides these two constraints there is, from what we presently know, a lot of freedom in what one chooses for $F(u)$. We will make the simplest possible choice here, namely, $$F(u)=u\,,$$ (88) but we encourage the student to try other forms satisfying the conditions on $F(u)$, such as $\sqrt{u}$, to see what difference it makes. Figure 9 shows the energy per particle for pure neutron matter, $E(n,1)-m_{N}$, as a function of $u$ for the parameters of Eq. (75) and $S_{0}=30$ MeV. In contrast with the $\alpha=0$ plot in Fig. 7, $E(n,1)-m_{N}\geq 0$ and is monotonically increasing. The plot looks almost quadratic as a function of $u$. The dominant term at large $u$ goes like $u^{\sigma}$, and $\sigma=2.112$ (for this case). However, one might have expected a linear increase instead. We will return to this point in Sec. VI.3. Given the energy density, $\epsilon(n,\alpha)=n_{0}uE(n,\alpha)$, the corresponding pressure is, from Eq.
(13), $$p(n,\alpha)=u\frac{d}{du}\epsilon(n,\alpha)-\epsilon(n,\alpha)=p(n,0)+n_{0}\alpha^{2}\left[\frac{2^{2/3}-1}{3}\left<E_{F}^{0}\right>\left(2u^{5/3}-3u^{2}\right)+S_{0}u^{2}\right]\,,$$ (89) where $p(n,0)$ is defined by Eq. (76). Figure 10 shows the dependence of the pure neutron $p(n,1)$ and $\epsilon(n,1)$ on $u=n/n_{0}$, ranging from 0 to 10 times normal nuclear density. Both functions increase smoothly and monotonically from $u=0$. We hope the student would wonder why the pressure becomes greater than the energy density around $u=6$. Why doesn’t it go like a relativistic nucleon gas, $p=\epsilon/3$? (Hint: check the assumptions.) One can now look at the EoS, i.e., the dependence of $p$ on $\epsilon$ (the points in Fig. 11). The pressure is smooth, non-negative, and monotonically increasing as a function of $\epsilon$. In fact it looks almost quadratic over this energy range ($0\leq u\leq 5$). This suggests that it might be reasonable to see if one can make a simple, polytropic fit. If we try that using a form $$p(\epsilon)=\kappa_{0}\,\epsilon^{\gamma}\,,$$ (90) we find the fit shown in Fig. 11 as the solid curve with $$\kappa_{0}=3.548\times 10^{-4}\,,\quad\gamma=2.1\,,$$ (91) where $\kappa_{0}$ has appropriate units so that $p$ and $\epsilon$ are in MeV/fm${}^{3}$. (We simply guessed and set $\gamma$ to that value.) This polytrope can now be used in solving the TOV equation for a pure neutron star with nuclear interactions. Alternatively, one might solve for the structure by using the functional forms from Eq. (86), multiplied by $n$, and Eq. (89) directly. We defer that for a bit, since it would be a good idea to first find an EoS which doesn’t violate causality, a basic tenet of special relativity. VI.3 Does the Speed of Sound Exceed That of Light? What is the speed of sound in nuclear matter?
Starting from the elementary formula for the square of the speed of sound in terms of the bulk modulus [27], one can show that $$\left(\frac{c_{s}}{c}\right)^{2}=\frac{B}{\rho c^{2}}=\frac{dp}{d\epsilon}=\frac{dp/dn}{d\epsilon/dn}\,.$$ (92) To satisfy relativistic causality we must require that the sound speed not exceed that of light. A violation can happen when the density becomes very large, i.e., when $u\rightarrow\infty$. For the simple model of nuclear interactions presented in the last section, the dominant terms at large $u$ in $p$ and $\epsilon$ are those going like $u^{\sigma+1}$. Thus, from Eq. (86), multiplied by $n$, and Eq. (89), we see that $$\left(\frac{c_{s}}{c}\right)^{2}=\frac{dp/dn}{d\epsilon/dn}\rightarrow\sigma=2.11\,$$ (93) for the parameters of Eq. (75), and indeed for any set of parameters with $K_{0}$ greater than about 180 MeV. One can recover causality (i.e., speeds of sound less than light) by assuring that both $\epsilon(u)$ and $p(u)$ grow no faster than $u^{2}$. There must still be an interplay between the $A$- and $B$-terms in the nuclear potential, but one simple way of doing this is to modify the $B$-term by introducing a fourth parameter $C$ so that, for symmetric nuclear matter ($\alpha=0$), $$V_{\rm Nuc}(u,0)=\frac{A}{2}u+\frac{B}{\sigma+1}\ \frac{u^{\sigma}}{1+Cu^{\sigma-1}}\,.$$ (94) One can choose $C$ small enough that the effect of the denominator only becomes appreciable for very large $u$. The presence of the denominator would modify and complicate the constraint equations for $A$, $B$, and $\sigma$ from those given in Eqs. (69)–(71). However, for small $C$, which can be chosen as one wishes, the values for the other parameters should not be much changed from those, say, in Eq. (75).
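The limit (93), and the way the modified potential (94) restores causality, can be checked numerically with finite differences. The sketch below reuses, for illustration only, the $K_{0}=400$ MeV parameters obtained from Eqs. (72)–(74) in both versions of the potential (rather than re-fitting), with $C=0.2$:

```python
# Symmetric-matter inputs, as in the text (MeV).
E_F0_avg, BE, K0 = 22.1, -16.0, 400.0
sigma = (K0 + 2*E_F0_avg) / (3*E_F0_avg - 9*BE)   # Eq. (72)
B = (sigma + 1) / (sigma - 1) * (E_F0_avg/3 - BE) # Eq. (73)
A = BE - 5/3 * E_F0_avg - B                       # Eq. (74)
m_N, C = 939.0, 0.2

def E_per_A(u, modified):
    """E/A for symmetric matter, Eq. (68); optionally with the B-term of Eq. (94)."""
    b_term = B/(sigma + 1) * u**sigma
    if modified:
        b_term /= 1 + C * u**(sigma - 1)
    return m_N + E_F0_avg * u**(2/3) + A/2 * u + b_term

def sound_speed_sq(u, modified, h=1e-4):
    """(c_s/c)^2 = (dp/dn)/(d eps/dn), Eq. (92), by central differences."""
    eps = lambda v: v * E_per_A(v, modified)                    # eps/n0
    p = lambda v: v * (eps(v + h) - eps(v - h))/(2*h) - eps(v)  # p/n0 = u deps/du - eps
    return (p(u + h) - p(u - h)) / (eps(u + h) - eps(u - h))

# Unmodified potential: the sound speed tends to sigma > 1 at large density.
assert sound_speed_sq(100.0, modified=False) > 1.0
assert abs(sound_speed_sq(100.0, modified=False) - sigma) < 0.3
# With the denominator of Eq. (94), the high-density sound speed drops below c.
assert sound_speed_sq(100.0, modified=True) < 1.0
```

The common factors of $2h$ cancel in the last ratio, so it is exactly the ratio of the two finite-difference derivatives in Eq. (92).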
Thus, with a little bit of trial and error, one can simply readjust the $A$, $B$, and $\sigma$ values to put the minimum of $E/A-m_{N}$ at the right position ($n_{0}$) and depth ($BE$), hoping that the resulting value of the (poorly known) compressibility $K_{0}$ remains sensible. In our calculations we chose $C=0.2$ and started the hand search with the $K_{0}=400$ MeV parameters in Eq. (75). We found that, by fiddling only with $B$ and $\sigma$, we could re-fit $n_{0}$ and $BE$ with only small changes, $$B=65.39\rightarrow 83.8{\rm\ MeV}\,,\quad\sigma=2.11\rightarrow 2.37\,,$$ (95) somewhat larger than before. For these new values of $B$ and $\sigma$, $A$ changes from $-122.2$ MeV to $-136.7$ MeV, and $K_{0}$ from 400 to 363.2 MeV. That is, it remains a reasonable nuclear model. One can now proceed as in the last section to get $\epsilon(n,\alpha)$, $p(n,\alpha)$, and the EoS, $p(\epsilon,\alpha)$. The results are not much different from those shown in the figures of the previous sub-section. This time we decided to live with a quadratic fit for the EoS for pure neutron matter, finding $$p(\epsilon,1)=\kappa_{0}\epsilon^{2}\,,\quad\kappa_{0}=4.012\times 10^{-4}\,.$$ (96) This is not much different from before, Eq. (91). Somewhat more useful for solving the TOV equation is to express $\epsilon$ in terms of $p$, $$\epsilon(p)=\left(p/\kappa_{0}\right)^{1/2}\,.$$ (97) VI.4 Pure Neutron Star with Nuclear Interactions Having laid all this groundwork, the student can now proceed to solve the TOV equations as before for a pure neutron star, using the fit for $\epsilon(p)$ found in the previous sub-section. It is, once again, useful to convert from the units of MeV/fm${}^{3}$ to ergs/cm${}^{3}$ to ${\rm M}_{\odot}$/km${}^{3}$ and dimensionless $\bar{p}$ and $\bar{\epsilon}$. By now the student has undoubtedly grown quite accustomed to that procedure.
$$\bar{\epsilon}(\bar{p})=(\kappa_{0}\epsilon_{0})^{-1/2}\bar{p}^{1/2}=A_{0}\bar{p}^{1/2}\,,\quad A_{0}=0.8642\,,$$ (98) where this time we defined $$\epsilon_{0}=\frac{m_{n}^{4}c^{5}}{3\pi^{2}\hbar^{3}}\,.$$ (99) With this, the constant $\alpha$ that occurs on the right-hand side of the TOV equation, Eq. (22), is $\alpha=A_{0}R_{0}=1.276$ km. The constant for the mass equation, Eq. (25), is $\beta=0.03265$, again in units of 1/km${}^{3}$. Now proceeding as before, one can solve the coupled TOV equations for $\bar{p}(r)$ and $\bar{\cal M}(r)$ for various initial central pressures, $\bar{p}(0)$. We don’t exhibit here plots of the solutions, as they look very similar to those for the Fermi gas EoS, Fig. 5. More interesting is to solve for a range of initial $\bar{p}(0)$’s, generating, as before, a mass $M$ versus radius $R$ plot which now includes nucleon-nucleon interactions (Fig. 12). The effect of the nuclear potential is enormous, as one sees on comparing with the Fermi gas model predictions for $M$ vs. $R$ shown in Fig. 6. The maximum star mass this time is about 2.3 ${\rm M}_{\odot}$, rather than 0.8 ${\rm M}_{\odot}$. The radius for this maximum-mass star is about 13.5 km, somewhat larger than the Fermi gas model radius of 11 km. The large value of the maximum $M$ is a reflection of the large value of the nuclear (in)compressibility, $K_{0}=363$ MeV. The more incompressible something is, the more mass it can support. Had we fit to a smaller value of $K_{0}$ we would have gotten a smaller maximum mass. VI.5 What About a Cosmological Constant? We do not know (either) if there is one, but there are definite indications that a great part of the make-up of our universe is something called “Dark Energy” [28]. This conclusion comes about because we have recently learned that something, at the present time, is causing the expansion of the universe to accelerate, instead of slowing down (as would be expected after the Big Bang).
One way (of several) to interpret this dark energy is as Einstein’s cosmological constant, which contributes a term $\Lambda g_{\mu\nu}$ to the right-hand side of Einstein’s field equation, the basic equation of general relativity. The most natural value for $\Lambda$ would be zero, but that may not be the way the world is. If $\Lambda$ is non-zero, it is nonetheless surprisingly small. What would the effect of a non-zero cosmological constant be for the structure of a neutron star? It turns out that the only modification to the TOV equation [29] is in the correction factor $$\left[1+\frac{4\pi r^{3}p(r)}{{\cal M}(r)c^{2}}\right]\rightarrow\left[1+\frac{4\pi r^{3}p(r)}{{\cal M}(r)c^{2}}-\frac{\Lambda r^{3}}{2G{\cal M}(r)}\right]\,.$$ (100) So, we encourage the student to, first, understand the units of $\Lambda$ and then to see what values for it might affect the structure of a typical neutron star. VII Conclusions The materials we have described in this paper would be quite suitable for an undergraduate thesis or a special topics course accessible to a junior or senior physics major. It is a topic rich in the subjects the student will have covered in his or her courses, ranging from thermodynamics to quantum statistics to nuclear physics. The major emphasis in such a project is on constructing a (simple) equation of state, which is needed to be able to solve the non-linear structure equations. Solving those equations numerically, of course, develops the student’s computational skills. Along the way, however, he or she will also learn some of the lore regarding degenerate stars, e.g., white dwarfs and neutron stars. And, in the latter case, the student will also come to appreciate the relative importance of special and general relativity. VIII Acknowledgments We thank M. Prakash, T. Buervenich, and Y.-Z. Qian for their helpful comments and for reading drafts of this paper. We would also like to acknowledge comments and suggestions made by an anonymous referee.
This research is supported in part by the Department of Energy under contract W-7405-ENG-36. References (1) It is widely believed that neutron stars were proposed by Lev Landau in 1932, very soon after the neutron was discovered (although we are not aware of any documented proof of this). In 1934 Fritz Zwicky and Walter Baade speculated that they might be formed in Type II supernova explosions, which is now generally accepted as true. (2) An on-line catalog of pulsars can be found at http://pulsar.princeton.edu. (3) An intermediate-level on-line tutorial on the physics of pulsars can be found at http://www.jb.man.ac.uk/research/pulsar. This tutorial follows the book by Andrew G. Lyne and Francis Graham-Smith, Pulsar Astronomy, 2nd ed., Cambridge University Press, 1998. (4) R. C. Tolman, Phys. Rev. 55, 364 (1939); J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939). (5) Steven Weinberg, Gravitation and Cosmology, John Wiley & Sons, Inc., New York, 1972, Chapter 11. (6) M. Prakash, lectures delivered at the Winter School on “The Equation of State of Nuclear Matter,” Puri, India, January 1994, esp. Chapter 3, Equation of State. These notes are published in “The Nuclear Equation of State,” ed. by A. Ausari and L. Satpathy, World Scientific Publishing Co., Singapore, 1996. (7) If you are a mentor for such a student, you may want to see some of the Mathematica and MathCad files we developed along our path of discovery. Send e-mail to the first author convincing him that you are such a mentor, and he will direct you to a web page from which they can be downloaded. The idea behind this misdirection is that the student will learn more by doing the programming himself. (8) R. Balian and J.-P. Blaizot, “Stars and Statistical Physics: A Teaching Experience,” Am. J. Phys. 67, (12) 1189 (1999). (9) S. L. Shapiro and S. A. Teukolsky, Black Holes, White Dwarfs and Neutron Stars: The Physics of Compact Objects, Wiley-Interscience, 1983.
(10) We apologize to readers who are enthusiasts of SI units, but the first author was raised on CGS units. Actually, we strongly feel that by the time a physics student is at this level, he or she ought to be comfortable in switching from one system of units to another. (11) A discussion of how to solve these equations (using conventional programming languages) is given in S. Koonin, Computational Physics, Benjamin-Cummings Publishing, 1986. (12) For more details on white dwarfs, NASA provides a useful web page at http://imagine.gsfc.nasa.gov/docs/science/know_l1/dwarfs.html. (13) This maximum mass of 1.4 ${\rm M}_{\odot}$ is usually referred to as the Chandrasekhar limit. See S. Chandrasekhar’s 1983 Nobel Prize lecture, http://www.nobel.se/physics/laureates/1983/. For more detail see his treatise, An Introduction to the Study of Stellar Structure, Dover Publications, New York, 1939. (14) Mathematica is a software product of Wolfram Research, Inc., (see web page at http://www.wolfram.com), and its use is described by S. Wolfram in The Mathematica Book, Fourth Ed., Cambridge University Press, Cambridge, England, 1999. However, whenever we use the phrase “using Mathematica,” we really mean using whatever package one has available or is familiar with, be it Maple, MathCad, or whatever. We did almost all of the numerical/symbolic work that we describe in this paper in Mathematica, but some of its notebooks were duplicated in MathCad, just to be sure it could be done there as well. (15) Enough of these explicit flags! Most of the equations from here on present challenges for the student to work through. (16) For the Newtonian case, a polytropic EoS also allows for a somewhat more analytic solution in terms of Lane-Emden functions. See Weinberg, op. cit., Sec. 11.3, or C. Flynn, Lectures on Stellar Physics, especially lectures 4 and 5, at http://www.astro.utu.fi/ cflynn/Stars/. 
(17) Despite the appearance of the $4\pi\epsilon_{0}$, the astute student will not be lulled into thinking that this factor has anything to do with a Coulomb potential or the dielectric constant of the vacuum. (18) Note that the right-hand side of Eq. (22) is negative (for positive $\bar{p}$), so $\bar{p}(r)$ must fall monotonically from $\bar{p}(0)$. (19) We leave this for the student to figure out, except for the following hint: Use an if statement if necessary. (20) This fit is least accurate ($\approx 2\%$) at very low values of $k_{F}$. However, this is where the pure neutron approximation itself is least accurate. The surface of a neutron star is likely made of elements like iron. A fictional account of what life might be like on such a surface can be found in Robert Forward’s Dragon’s Egg, first published in 1981 by Del Rey Publishing, republished in 2000. (21) See Weinberg, op. cit., Sec. 11.2. (22) Because it is almost non-interacting with nuclear matter, a neutrino tends to escape from the neutron star. This is the major cooling mechanism as the neutron star is being formed in a supernova explosion. George Gamow named this the URCA process, after a Brazilian casino where people lost a lot of money. (23) Does the student know how to put all the factors of $c$ back into $\epsilon_{0}$ so as to re-write this for CGS units? (24) See, e.g., J. M. Blatt and V. F. Weisskopf, Theoretical Nuclear Physics, John Wiley & Sons, 1952, Chap. 6, Sec. 2. (25) The reason for the “(in)” is because a materials physicist might rather define compressibility as $\chi=-(1/V)(\partial V/\partial p)=-(1/n)(dp/dn)^{-1}$. (26) Folks interested in RHIC physics might want to, however. (RHIC stands for “Relativistic Heavy Ion Collider,” an accelerator at the Brookhaven National Laboratory which is studying reactions like Au nuclei striking each other at center of mass energies around 200 GeV/nucleon.) 
(27) See, e.g., Hugh Young, University Physics, 8th ed., Addison-Wesley, Reading MA, 1992, Sec. 19-5, Speed of a Longitudinal Wave. (28) See, e.g., P. J. E. Peebles and Bharat Ratra, Rev. Mod. Phys. 75, 599 (2003) (29) W. Y. Pauchy Hwang, private communication.
Automata Guided Hierarchical Reinforcement Learning for Zero-Shot Skill Composition Xiao Li, Yao Ma and Calin Belta Abstract An obstacle that prevents the wide adoption of (deep) reinforcement learning (RL) in control systems is its need for a large amount of interactions with the environment in order to master a skill. The learned skill usually generalizes poorly across domains, and re-training is often necessary when the agent is presented with a new task. We present a framework that combines methods from formal methods with hierarchical reinforcement learning (HRL). The set of techniques we provide allows for convenient specification of tasks with complex logic, learning of hierarchical policies (a meta-controller and low-level controllers) with well-defined intrinsic rewards using any RL method, and construction of new skills from existing ones without additional learning. We evaluate the proposed methods in a simple grid world simulation as well as in a simulation of a Baxter robot. 1 Introduction Reinforcement learning has received much attention in recent years because of its achievements in games Mnih et al. (2015)Silver et al. (2016), robotics Jang et al. Levine et al. (2016)Gu et al. (2016) and autonomous driving Isele et al. (2017)Madrigal (2017). However, training a policy that sufficiently masters a skill requires an enormous amount of interactions with the environment, and acquiring such experience can be difficult on physical systems. Moreover, most learned policies are tailored to mastering one skill (by maximizing the reward) and are hardly reusable on a new skill. Skill composition is the idea of constructing new skills out of existing skills (and hence their policies) with little to no additional learning. In stochastic optimal control, this idea has been adopted by authors of Todorov (2009) and Da Silva et al.
(2009) to construct provably optimal control laws based on linearly solvable Markov decision processes. Authors of Haarnoja et al. (2017), Tang & Haarnoja have shown in simulated manipulation tasks that approximately optimal policies can result from adding the Q-functions of the existing policies. Hierarchical reinforcement learning is an effective means of achieving transfer among tasks. The goal is to obtain task-invariant low-level policies; by re-training only the meta-policy that schedules over the low-level policies, different skills can be obtained with fewer samples than training from scratch. Authors of Heess et al. (2016) have adopted this idea in learning locomotor controllers and have shown successful transfer among simulated locomotion tasks. Authors of Oh et al. (2017) have utilized a deep hierarchical architecture for multi-task learning using natural language instructions. Temporal logic (TL) is a formal language commonly used in software and digital circuit verification Baier & Katoen (2008) as well as in formal synthesis Belta et al. (2017). It allows for convenient expression of complex behaviors and causal relationships. TL has been used by Sadraddini & Belta (2015), Leahy et al. (2015) to synthesize provably correct control policies. Authors of Aksaray et al. (2016) have also combined TL with Q-learning to learn policies that satisfy given specifications in discrete state and action spaces. In this work, we focus on hierarchical skill acquisition and zero-shot skill composition. Once a set of skills is acquired, we provide a technique that can synthesize new skills without the need to further interact with the environment (given that the state space, action space, and transition function remain the same). We adopt temporal logic as the task specification language. Compared to most heuristic reward structures used in the RL literature to specify tasks, a formal specification language excels in its semantic rigor and in the interpretability of the specified behaviors.
Our main contributions are: • We take advantage of the transformation between TL formulas and finite state automata (FSA) to construct deterministic meta-controllers directly from the task specification, without the need for additional learning. We show that by adding one discrete dimension to the original state space, structurally simple parameterized policies such as feed-forward neural networks can be used to learn tasks that require complex temporal reasoning. • Intrinsic motivation has been shown to help RL agents learn complicated behaviors with fewer interactions with the environment Singh et al. (2004)Kulkarni et al. (2016)Jaderberg et al. (2016). However, designing a well-behaved intrinsic reward that aligns with the extrinsic reward takes effort and experience. In our work, we construct intrinsic rewards directly from the input alphabet of the FSA (a component of the automaton), which guarantees that maximizing each intrinsic reward makes positive progress towards satisfying the entire task specification. From a user’s perspective, the intrinsic rewards are constructed automatically from the TL formula. • In our framework, each FSA represents a hierarchical policy whose low-level controllers can be re-modulated to achieve different tasks. Skill composition is achieved by manipulating, in a deterministic fashion, the FSAs that result from the TL specifications of the skills. Instead of interpolating/extrapolating among existing skills, we present a simple policy-switching scheme based on graph manipulation of the FSA. Therefore, the compositional outcome is much more transparent. We also introduce a method that allows learning of such hierarchical policies with any non-hierarchical RL algorithm. Compared with previous work on skill composition, we impose no constraints on the policy representation or the problem class. 2 Preliminaries 2.1 The Options Framework in Hierarchical Reinforcement Learning In this section, we briefly introduce the options framework Sutton et al.
(1998), especially the terminologies that we will inherit in later sections. We start with the definition of a Markov Decision Process. Definition 1. An MDP is defined as a tuple $\mathcal{M}=\langle T,S,A,p(\cdot|\cdot,\cdot),R(\cdot,\cdot,\cdot)\rangle$, where $T$ is the length of a fixed time horizon; $S\subseteq{\rm I\!R}^{n}$ is the state space; $A\subseteq{\rm I\!R}^{m}$ is the action space ($S$ and $A$ can also be discrete sets); $p:S\times A\times S\to[0,1]$ is the transition function with $p(s^{\prime}|s,a)$ being the conditional probability density of taking action $a\in A$ at state $s\in S$ and ending up in state $s^{\prime}\in S$; $R:S\times A\times S\to{\rm I\!R}$ is the reward function. The goal is to find a policy $\pi^{\star}:S\to A$ (or $\pi^{\star}:S\times A\to[0,1]$ for stochastic policies) that maximizes the expected return, i.e. $$\pi^{\star}=\underset{\pi}{\arg\max}\mathbb{E}^{\pi}[R(\tau_{T})]$$ (1) where $\tau_{T}=(s_{0},a_{0},\ldots,s_{T})$ denotes the state-action trajectory from time $0$ to $T$. The options framework exploits temporal abstractions over the action space. An option is defined as a tuple $o=\langle\mathcal{I},\pi^{o},\beta\rangle$ where $\mathcal{I}$ is the set of states in which option $o$ can be initiated (here we let $\mathcal{I}=S$ for all options), $\pi^{o}:S\to A$ is an options policy and $\beta:S\to[0,1]$ is the termination condition for the option. In addition, there is a policy over options $\pi^{h}:S\to O$ (where $O$ is a set of available options) that schedules among options. At a given time step $t$, an option $o$ is chosen according to $\pi^{h}(s_{t})$ and the options policy $\pi^{o}$ is followed until $\beta(s)>threshold$ at some time $t+k$; the next option is then chosen by $\pi^{h}(s_{t+k})$. 2.2 scTLTL And Automata We consider tasks specified with Truncated Linear Temporal Logic (TLTL). 
We restrict the set of allowed operators to be $$\begin{split}\displaystyle\phi:=&\displaystyle\top\,\,|\,\,f(s)<c\,\,|\,\,\neg\phi\,\,|\,\,\phi\wedge\psi\,\,|\,\,\phi\vee\psi\,\,|\\ &\displaystyle\diamondsuit\phi\,\,|\,\,\phi\,\mathcal{U}\,\psi\,\,|\,\,\phi\,\mathcal{T}\,\psi\,\,|\,\,\bigcirc\phi\,\,|\,\,\phi\Rightarrow\psi\end{split}$$ (2) where $f(s)<c$ is a predicate, $\neg$ (negation/not), $\wedge$ (conjunction/and), and $\vee$ (disjunction/or) are Boolean connectives, $\diamondsuit$ (eventually), $\mathcal{U}$ (until), $\mathcal{T}$ (then), and $\bigcirc$ (next) are temporal operators, and $\Rightarrow$ denotes implication. Essentially we exclude the Always operator ($\Box$), for reasons similar to Kupferman & Vardi (2001). We refer to this restricted TLTL as syntactically co-safe TLTL (scTLTL). There exists a real-valued function $\rho(s_{0:T},\phi)$ called robustness degree that measures the level of satisfaction of trajectory $s_{0:T}$ with respect to $\phi$: $\rho(s_{0:T},\phi)>0$ indicates that $s_{0:T}$ satisfies $\phi$ and vice versa. Due to space constraints, we refer readers to Li et al. (2016) for definitions of TLTL and the robustness degree. Any scTLTL formula can be translated into a finite state automaton (FSA) with the following definition: Definition 2. 
An FSA is defined as a tuple $\mathcal{A}_{\phi}=\langle Q_{\phi},\Psi_{\phi},q^{0},p_{\phi}(\cdot|\cdot),\mathcal{F}_{\phi}\rangle$, where $Q_{\phi}$ is a set of automaton states; $\Psi_{\phi}$ is an input alphabet, and we denote by $\psi_{q_{i},q_{j}}\in\Psi_{\phi}$ the predicate guarding the transition from $q_{i}$ to $q_{j}$ (as illustrated in Figure 1); $q^{0}\in Q_{\phi}$ is the initial state; $p_{\phi}:Q_{\phi}\times Q_{\phi}\rightarrow[0,1]$ is a conditional probability defined as $$p_{\phi}(q_{j}|q_{i})=\begin{cases}1&\psi_{q_{i},q_{j}}\textrm{ is true}\\ 0&\text{otherwise}.\end{cases}$$ (3) In addition, given an MDP state $s$, we can calculate the transition in automaton states at $s$ by $$p_{\phi}(q_{j}|q_{i},s)=\begin{cases}1&\rho(s,\psi_{q_{i},q_{j}})>0\\ 0&\text{otherwise}.\end{cases}$$ (4) We abuse the notation $p_{\phi}$ to represent both kinds of transitions when the context is clear. $\mathcal{F}_{\phi}$ is a set of final automaton states. The translation from a TLTL formula to an FSA can be done automatically with available packages like Lomap Ulusoy (2017). Example 1. Figure 1 (left) illustrates the FSA resulting from formula $\phi=\neg b\,\,\mathcal{U}\,\,a$. In English, $\phi$ entails that during a run, $b$ cannot be true until $a$ is true, and $a$ needs to be true at least once. The FSA has three automaton states $Q_{\phi}=\{q_{0},q_{f},trap\}$ with $q_{0}$ being the input (initial) state (here $q_{i}$ serves to track the progress in satisfying $\phi$). The input alphabet is defined as $\Psi_{\phi}=\{\neg a\wedge\neg b,\neg a\wedge b,a\wedge\neg b,a\wedge b\}$. Shorthands are used in the figure, for example $a=(a\wedge b)\vee(a\wedge\neg b)$. $\Psi_{\phi}$ represents the power set of $\{a,b\}$, i.e. $\Psi_{\phi}=2^{\{a,b\}}$. During execution, the FSA always starts from state $q_{0}$ and transitions according to Equation (3) or (4). The specification is satisfied when $q_{f}$ is reached and violated when $trap$ is reached. 
In this example, $q_{f}$ is reached only when $a$ becomes true before $b$ becomes true. 3 Problem Formulation And Approach We start with the following problem definition: Problem 1. Given an MDP in Definition 1 with unknown transition dynamics $p(s^{\prime}|s,a)$ and a scTLTL specification $\phi$ over state predicates (along with its FSA $\mathcal{A}_{\phi}$) as in Definition 2, find a policy $\pi^{\star}_{\phi}$ such that $$\pi^{\star}_{\phi}=\underset{\pi_{\phi}}{\arg\max}\mathbb{E}^{\pi_{\phi}}[\mathbbm{1}(\rho(s_{0:T},\phi)>0)],$$ (5) where $\mathbbm{1}(\rho(s_{0:T},\phi)>0)$ is an indicator function with value $1$ if $\rho(s_{0:T},\phi)>0$ and $0$ otherwise. Problem 1 defines a policy search problem where the trajectories resulting from following the optimal policy should satisfy the given scTLTL formula in expectation. Problem 2. Given two scTLTL formulas $\phi_{1}$ and $\phi_{2}$, along with a policy $\pi_{\phi_{1}}$ that satisfies $\phi_{1}$ and a policy $\pi_{\phi_{2}}$ that satisfies $\phi_{2}$, obtain a policy $\pi_{\phi}$ that satisfies $\phi=\phi_{1}\wedge\phi_{2}$. Problem 2 defines the problem of task composition: given two policies, each satisfying a scTLTL specification, construct the policy that satisfies the conjunction of the given specifications. Solving this problem is useful when we want to break a complex task into simple and manageable components, learn a policy that satisfies each component, and “stitch” all the components together so that the original task is satisfied. It can also be the case that, as the scope of the task grows with time, the original task specification is amended with new items. Instead of having to re-learn the task from scratch, we need only learn a policy that satisfies the new items and combine it with the old policy. We propose to solve Problem 1 by constructing a product MDP from the given MDP and FSA that can be solved using any state-of-the-art RL algorithm. 
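As a concrete sketch (with hypothetical helper names), the FSA of Example 1 and the state-dependent transition of Equation (4) can be written in a few lines. Predicates are encoded as robustness functions, with $\rho>0$ meaning "true"; the region definitions for $a$ and $b$ are borrowed from the grid world of Section 6.1.

```python
# Sketch of the FSA for phi = (not b) U a from Example 1. Predicates are
# robustness functions of the MDP state s; regions a: -3 < s < -1 and
# b: 2 < s < 4 follow Section 6.1. Names here are illustrative only.

def rho_a(s):  # robustness of predicate a (positive inside the region)
    return min(s - (-3.0), (-1.0) - s)

def rho_b(s):  # robustness of predicate b
    return min(s - 2.0, 4.0 - s)

# Edge guards psi_{q_i,q_j} as robustness functions. From q0: reaching a
# before b accepts (q_f); reaching b first violates the spec (trap).
EDGES = {
    "q0": {
        "qf":   lambda s: min(rho_a(s), -rho_b(s)),   # a and not b
        "trap": lambda s: rho_b(s),                   # b
        "q0":   lambda s: min(-rho_a(s), -rho_b(s)),  # neither a nor b
    },
    "qf": {}, "trap": {},  # terminal states have no outgoing edges
}

def fsa_step(q, s):
    """Equation (4): follow the outgoing edge whose guard is satisfied."""
    for q_next, guard in EDGES[q].items():
        if guard(s) > 0:
            return q_next
    return q  # no guard strictly satisfied (e.g. on a boundary): stay
```

Stepping the automaton alongside the environment then only requires calling `fsa_step` on each new MDP state.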
The idea of using a product automaton for control synthesis has been adopted in the literature Leahy et al. (2015), Chen et al. (2012). However, the methods proposed in these works are restricted to discrete state and action spaces. We extend this idea to continuous state-action spaces and show its applicability on robotic systems. For Problem 2, we propose a policy switching scheme that satisfies the compositional task specification. The switching policy takes advantage of the characteristics of the FSA and uses a robustness comparison at each step for decision making. 4 FSA Augmented MDP Problem 1 can be solved with any episode-based RL algorithm. However, doing so, the agent suffers from sparse feedback because a reward signal can only be obtained at the end of each episode. To address this problem, as well as to set up the ground for automata guided HRL, we introduce the FSA augmented MDP. Definition 3. An FSA augmented MDP corresponding to scTLTL formula $\phi$ is defined as $\mathcal{M}_{\phi}=\langle T,\tilde{S},A,\tilde{p}(\cdot|\cdot,\cdot),\tilde{R}(\cdot,\cdot)\rangle$ where $T$ is the length of a fixed time horizon, $\tilde{S}\subseteq S\times Q_{\phi}$, and $A$ is the same as in the original MDP. $\tilde{p}(\tilde{s}^{\prime}|\tilde{s},a)$ is the probability of transitioning to $\tilde{s}^{\prime}$ given $\tilde{s}$ and $a$, in particular $$\begin{split}\displaystyle\tilde{p}(\tilde{s}^{\prime}|\tilde{s},a)=p\big{(}(s^{\prime},q^{\prime})|(s,q),a\big{)}\\ \displaystyle=\begin{cases}p(s^{\prime}|s,a)&p_{\phi}(q^{\prime}|q,s)=1\\ 0&\text{otherwise}.\end{cases}\end{split}$$ (6) Here $p_{\phi}$ is defined in Equation (4). $\tilde{R}:\tilde{S}\times\tilde{S}\to{\rm I\!R}$ is the FSA augmented reward function, defined by $$\tilde{R}(\tilde{s},\tilde{s}^{\prime})=\mathbbm{1}\big{(}\rho(s^{\prime},D_{\phi}^{q})>0\big{)},$$ (7) where $\Omega_{q}$ is the set of automaton states that are connected with $q$ through outgoing edges. 
$D_{\phi}^{q}=\bigvee_{q^{\prime}\in\Omega_{q}}\psi_{q,q^{\prime}}$ represents the disjunction of all predicates guarding the transitions that originate from $q$. As a quick example of the notation $D^{q}_{\phi}$, consider the state $q_{0}$ in the FSA in Figure 1: $\Omega_{q_{0}}=\{trap,q_{f}\}$ and $D^{q_{0}}_{\phi}=\psi_{q_{0},trap}\vee\psi_{q_{0},q_{f}}=b\vee a$. The goal is then to find a policy $\pi:\tilde{S}\to A$ that maximizes the expected sum of $\tilde{R}$ over the horizon $T$. The FSA augmented MDP can be constructed from any standard MDP and a scTLTL formula, and it can be solved with any off-the-shelf RL algorithm. By directly learning the flat policy $\pi$ we bypass the need to learn multiple options policies separately. After obtaining the optimal policy $\pi^{\star}$, the optimal options policy for any option $o_{q}$ can be extracted by executing $\pi^{\star}(a|s,q)$ without transitioning the automaton state, i.e. keeping $q_{i}$ fixed (denoted $\pi^{\star}_{q}$). And $\pi^{\star}_{q}$ satisfies $$\pi_{q_{i}}^{\star}=\underset{\pi_{q_{i}}}{\arg\max}\mathbb{E}^{\pi_{q_{i}}}\left[\sum_{t=0}^{T-1}\mathbbm{1}\big{(}\rho(s_{t+1},D_{\phi}^{q_{i}})>0\big{)}\right].$$ (8) In other words, the purpose of $\pi_{q_{i}}$ is to activate one of the outgoing edges of $q_{i}$, and by doing so reach $q_{f}$ as soon as possible. The reward function in Equation (7) encourages the system to exit the current automaton state and move on to the next, and by doing so eventually reach the final state $q_{f}$. However, this reward does not distinguish between the $trap$ state and other states and therefore will also promote entering the $trap$ state. One way to address this issue is to impose a terminal reward on both $q_{f}$ and $trap$. Because the reward is an indicator function with maximum value of 1, we assign terminal rewards $R_{q_{f}}=2$ and $R_{trap}=-2$. Appendix D describes the typical learning routine using the FSA augmented MDP. 
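One step of the FSA augmented MDP can be sketched as follows. This is a minimal illustration of Definition 3 and Equations (6)–(7) with the terminal rewards $R_{q_f}=2$ and $R_{trap}=-2$ added on entry to the terminal states; all function names are hypothetical, and the environment, automaton, and robustness are passed in as callables.

```python
# Sketch (hypothetical names) of one transition of the FSA augmented MDP:
# the environment and the automaton transition jointly (Eq. (6)), and the
# reward is the indicator of Eq. (7) plus terminal rewards on q_f / trap.

def augmented_step(env_step, fsa_step, rho, D_phi, s, q, a):
    """env_step(s, a): samples s' from p(s'|s,a); fsa_step(q, s'): Eq. (4);
    rho(s, psi): robustness; D_phi[q]: disjunction of q's outgoing guards."""
    s_next = env_step(s, a)
    q_next = fsa_step(q, s_next)
    reward = 1.0 if rho(s_next, D_phi[q]) > 0 else 0.0  # Equation (7)
    done = q_next in ("qf", "trap")
    if q_next == "qf":
        reward += 2.0   # terminal reward R_{q_f} = 2
    elif q_next == "trap":
        reward -= 2.0   # terminal reward R_{trap} = -2
    return (s_next, q_next), reward, done
```

The augmented state $(s', q')$ returned here is what the flat policy $\pi(a|s,q)$ conditions on during learning.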
The algorithm utilizes memory replay, which is popular among off-policy RL methods (e.g. DQN), but this is not a requirement for learning with $\mathcal{M}_{\phi}$; on-policy methods can also be used. 5 Automata Guided Task Composition In this section, we provide a solution for Problem 2 by constructing the FSA of $\phi$ from those of $\phi_{1}$ and $\phi_{2}$ and using $\phi$ to synthesize the policy for the combined skill. We start with the following definition. Definition 4. Given $\mathcal{A}_{\phi_{1}}=\langle Q_{\phi_{1}},\Psi_{\phi_{1}},q^{0}_{1},p_{\phi_{1}},\mathcal{F}_{\phi_{1}}\rangle$ and $\mathcal{A}_{\phi_{2}}=\langle Q_{\phi_{2}},\Psi_{\phi_{2}},q^{0}_{2},p_{\phi_{2}},\mathcal{F}_{\phi_{2}}\rangle$, the FSA of $\phi$ is the product automaton of $\mathcal{A}_{\phi_{1}}$ and $\mathcal{A}_{\phi_{2}}$, i.e. $\mathcal{A}_{\phi=\phi_{1}\wedge\phi_{2}}=\mathcal{A}_{\phi_{1}}\times\mathcal{A}_{\phi_{2}}=\langle Q_{\phi},\Psi_{\phi},q^{0},p_{\phi},\mathcal{F}_{\phi}\rangle$ where $Q_{\phi}\subseteq Q_{\phi_{1}}\times Q_{\phi_{2}}$ is the set of product automaton states, $q^{0}=(q^{0}_{1},q^{0}_{2})$ is the product initial state, and $\mathcal{F}_{\phi}\subseteq\mathcal{F}_{\phi_{1}}\times\mathcal{F}_{\phi_{2}}$ is the set of final accepting states. Following Definition 2, for states $q=(q_{1},q_{2})\in Q_{\phi}$ and $q^{\prime}=(q_{1}^{\prime},q_{2}^{\prime})\in Q_{\phi}$, the transition probability $p_{\phi}$ is defined as $$p_{\phi}(q^{\prime}|q)=\begin{cases}1&p_{\phi_{1}}(q_{1}^{\prime}|q_{1})p_{\phi_{2}}(q_{2}^{\prime}|q_{2})=1\\ 0&\text{otherwise}.\end{cases}$$ (9) Example 2. Figure 1 (right) illustrates the FSAs $\mathcal{A}_{\phi_{1}}$ and $\mathcal{A}_{\phi_{2}}$ and their product automaton $\mathcal{A}_{\phi}$. Here $\phi_{1}=\diamondsuit a\wedge\diamondsuit b$, which entails that both $a$ and $b$ need to be true at least once (order does not matter), and $\phi_{2}=\neg b\,\,\mathcal{U}\,\,a$, which is the same as in Example 1. 
The resultant product corresponds to the formula $\phi=(\diamondsuit a\wedge\diamondsuit b)\wedge(\neg b\,\,\mathcal{U}\,\,a)$, which dictates that $a$ and $b$ need to be true at least once, and $a$ needs to be true before $b$ becomes true (an ordered visit). We can see that the $trap$ state occurs in $\mathcal{A}_{\phi_{2}}$ and $\mathcal{A}_{\phi}$; this is because if $b$ is ever true before $a$ is true, the specification is violated and $q_{f}$ can never be reached. In the product automaton, we aggregate all state pairs with a trap state component into one trap state. For $q=(q_{1},q_{2})\in Q_{\phi}$, let $\Psi_{q}$, $\Psi_{q_{1}}$ and $\Psi_{q_{2}}$ denote the sets of predicates guarding the outgoing edges of $q$, $q_{1}$ and $q_{2}$ respectively. Equation (9) entails that a transition at $q$ in the product automaton $\mathcal{A}_{\phi}$ exists only if the corresponding transitions at $q_{1}$ and $q_{2}$ exist in $\mathcal{A}_{\phi_{1}}$ and $\mathcal{A}_{\phi_{2}}$ respectively. Therefore, $\psi_{q,q^{\prime}}=\psi_{q_{1},q_{1}^{\prime}}\wedge\psi_{q_{2},q_{2}^{\prime}}$, for $\psi_{q,q^{\prime}}\in\Psi_{q},\psi_{q_{1},q_{1}^{\prime}}\in\Psi_{q_{1}},\psi_{q_{2},q_{2}^{\prime}}\in\Psi_{q_{2}}$ (here $q_{i}^{\prime}$ is a state such that $p_{\phi_{i}}(q_{i}^{\prime}|q_{i})=1$). 
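The product construction of Definition 4 can be sketched as a nested loop over component states and edges. This is an illustrative sketch, not the authors' implementation: automata are represented as nested dictionaries of robustness-style guards, and the conjunction $\psi_{q,q'}=\psi_{q_1,q_1'}\wedge\psi_{q_2,q_2'}$ is realized as a `min` of robustness values.

```python
# Sketch (hypothetical representation) of the product automaton in
# Definition 4: product states are pairs, an edge exists only when both
# component edges exist (Eq. (9)), and its guard is the conjunction of
# the component guards, taken as min of robustness values.

def product_fsa(edges1, edges2):
    """edges_i: dict mapping q -> {q': guard}, where each guard is a
    robustness function of the MDP state s."""
    product = {}
    for q1, out1 in edges1.items():
        for q2, out2 in edges2.items():
            out = {}
            for q1n, g1 in out1.items():
                for q2n, g2 in out2.items():
                    # psi_{q,q'} = psi_{q1,q1'} AND psi_{q2,q2'}
                    out[(q1n, q2n)] = lambda s, g1=g1, g2=g2: min(g1(s), g2(s))
            product[(q1, q2)] = out
    return product
```

Note the `g1=g1, g2=g2` default arguments, which freeze each guard pair inside the loop; a bare lambda would close over the loop variables and every edge would end up with the last guards.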
Following Equation (8), $$\begin{split}&\displaystyle\pi_{q}^{\star}=\underset{\pi_{q}}{\arg\max}\mathbb{E}^{\pi_{q}}\big{[}\sum^{T-1}_{t=0}\mathbbm{1}\big{(}\rho(s_{t+1},D^{q}_{\phi})>0\big{)}\big{]},\\ &\displaystyle\textrm{where }D^{q}_{\phi}=\bigvee_{q_{1}^{\prime},q_{2}^{\prime}}(\psi_{q_{1},q_{1}^{\prime}}\wedge\psi_{q_{2},q_{2}^{\prime}}).\end{split}$$ (10) Repeatedly applying the distributive law $(\Delta\wedge\Omega_{1})\vee(\Delta\wedge\Omega_{2})=\Delta\wedge(\Omega_{1}\vee\Omega_{2})$ to the logic formula $D^{q}_{\phi}$ transforms the formula to $$D^{q}_{\phi}=\big{(}\bigvee_{q_{1}^{\prime}}\psi_{q_{1},q_{1}^{\prime}}\big{)}\wedge\big{(}\bigvee_{q_{2}^{\prime}}\psi_{q_{2},q_{2}^{\prime}}\big{)}=D^{q_{1}}_{\phi_{1}}\wedge D^{q_{2}}_{\phi_{2}}.$$ (11) Therefore, $$\begin{split}\displaystyle\pi_{q}^{\star}&\displaystyle=\underset{\pi_{q}}{\arg\max}\mathbb{E}^{\pi_{q}}\big{[}\sum^{T-1}_{t=0}\mathbbm{1}\big{(}\rho(s_{t+1},D^{q_{1}}_{\phi_{1}}\wedge D^{q_{2}}_{\phi_{2}})>0\big{)}\big{]}\\ &\displaystyle=\underset{\pi_{q}}{\arg\max}\mathbb{E}^{\pi_{q}}\big{[}\sum^{T-1}_{t=0}\mathbbm{1}\big{(}\min(\rho(s_{t+1},D^{q_{1}}_{\phi_{1}}),\rho(s_{t+1},D^{q_{2}}_{\phi_{2}}))>0\big{)}\big{]}.\end{split}$$ (12) The second step in Equation (12) follows from the robustness definition. Recall that the optimal options policies for $q_{1}$ and $q_{2}$ satisfy $$\pi_{q_{i}}^{\star}=\underset{\pi_{q_{i}}}{\arg\max}\mathbb{E}^{\pi_{q_{i}}}\big{[}\sum^{T-1}_{t=0}\mathbbm{1}\big{(}\rho(s_{t+1},D^{q_{i}}_{\phi_{i}})>0\big{)}\big{]},\textrm{ }i=1,2.$$ (13) Equation (12) provides a relationship among $\pi_{q}^{\star}$, $\pi_{q_{1}}^{\star}$ and $\pi_{q_{2}}^{\star}$. 
Given this relationship, we propose a simple switching policy, based on a stepwise robustness comparison, that satisfies $\phi=\phi_{1}\wedge\phi_{2}$ as follows: $$\pi_{\phi}(s,q)=\begin{cases}\pi_{\phi_{1}}(s,q_{1})&\rho(s_{t},D^{q_{1}}_{\phi_{1}})<\rho(s_{t},D^{q_{2}}_{\phi_{2}})\\ \pi_{\phi_{2}}(s,q_{2})&otherwise.\end{cases}$$ (14) We show empirically the use of this switching policy for skill composition and discuss its limitations in the following sections. 6 Experiments And Discussion 6.1 Grid World Simulation In this section, we provide a simple grid world navigation example to illustrate the techniques presented in Sections 4 and 5. Here we have a robot navigating in a discrete 1-dimensional space. Its MDP state space is $S=\{s|s\in[-5,5),s\textrm{ is discrete}\}$ and its action space is $A=\{left,stay,right\}$. The robot navigates in the commanded direction with probability 0.8, and with probability 0.2 it randomly chooses to go in the opposite direction or stay in the same place. The robot stays in the same place if the action would lead it out of bounds. We define two regions $a:-3<s<-1$ and $b:2<s<4$. For the first task, the scTLTL specification $\phi_{1}=\diamondsuit\ a\wedge\diamondsuit b$ needs to be satisfied. In English, $\phi_{1}$ entails that the robot needs to visit regions $a$ and $b$ at least once. To learn a deterministic optimal policy $\pi^{\star}_{\phi_{1}}:S\times Q\rightarrow A$, we use standard Q-Learning Watkins (1989) on the FSA augmented MDP for this problem. We used a learning rate of $0.1$, a discount factor of 0.99, and an epsilon-greedy exploration strategy with $\epsilon$ decaying linearly from 1.0 to 0.01 in 1500 steps. The episode horizon is $T=50$ and training ran for 500 iterations. All Q-values are initialized to zero. The resultant optimal policy is illustrated in Figure 3. We can observe from the figure that the policy on each automaton state $q$ serves a specific purpose. 
$\pi^{\star}_{q_{0}}$ tries to reach region $a$ or $b$, depending on which is closer. $\pi^{\star}_{q_{1}}$ always proceeds to region $a$. $\pi^{\star}_{q_{2}}$ always proceeds to region $b$. This agrees with the definition in Equation (8). The robot can start anywhere on the $s$ axis but must always start at automaton state $q_{0}$. Following $\pi_{\phi_{1}}$, the robot will first reach region $a$ or $b$ (whichever is nearer), and then aim for the other region, which in turn satisfies $\phi_{1}$. The states that have $stay$ as their action are either goal regions (states $(-2,q_{0}),(3,q_{1})$, etc.) where a transition on $q$ happens, or states that are never reached (states $(-3,q_{1}),(-4,q_{2})$, etc.) because a transition on $q$ occurs before they can be reached. To illustrate the automata guided task composition described in Example 2, instead of learning the task described by $\phi$ from scratch, we can simply learn a policy $\pi_{\phi_{2}}$ for the added requirement $\phi_{2}=\neg b\,\,\mathcal{U}\,\,a$. We use the same learning setup, and the resultant optimal policy is depicted in Figure 4. It can be observed that $\pi_{\phi_{2}}$ tries to reach $a$ while avoiding $b$. This behavior agrees with the specification $\phi_{2}$ and its FSA provided in Figure 2. The action at $s=4$ is $stay$ because in order for the robot to reach $a$ it has to pass through $b$; therefore it prefers to obtain a low reward over violating the task. Having learned policies $\pi_{\phi_{1}}$ and $\pi_{\phi_{2}}$, we can now use Equation (14) to construct the policy $\pi_{\phi_{1}\wedge\phi_{2}}$. The resulting policy $\pi_{\phi_{1}\wedge\phi_{2}}$ is illustrated in Figure 2 (upper right). This policy guides the robot to first reach $a$ (except for state $s=4$) and then go to $b$, which agrees with the specification. 
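The switching rule of Equation (14) amounts to a one-branch comparison: at every step, follow the sub-policy whose current automaton state has the lower robustness toward its outgoing edges, i.e. the one lagging behind in progress. A minimal sketch, with all names hypothetical:

```python
# Sketch of the switching policy of Equation (14). pi_i(s, q_i) are the
# learned sub-policies; rho(s, psi) is the robustness; D_i[q_i] is the
# disjunction D^{q_i}_{phi_i} of guards on q_i's outgoing edges.

def switching_policy(pi1, pi2, rho, D1, D2, s, q1, q2):
    if rho(s, D1[q1]) < rho(s, D2[q2]):
        return pi1(s, q1)   # phi_1 is lagging: follow pi_{phi_1}
    return pi2(s, q2)       # otherwise follow pi_{phi_2}
```

As discussed in the conclusion, this rule is greedy: it commits to a sub-policy for a single time-step based only on the current robustness values.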
Looking at Figure 1, the FSA of $\phi=\phi_{1}\wedge\phi_{2}$ has two options policies, $\pi_{\phi}(\cdot,q_{0})$ and $\pi_{\phi}(\cdot,q_{1})$. (Here $\pi_{\phi}(\cdot,q_{i})$ is the options policy of $\pi_{\phi}$ at automaton state $q_{i}$, as defined in Equation (8); writing it in this form prevents cluttered subscripts. The $trap$ state and $q_{f}$ are terminal states, which do not have options.) State $q_{1}$ has only one outgoing edge, with the guarding predicate $\psi_{q_{1},q_{f}}:b$, which means $\pi_{\phi}(\cdot,q_{1})=\pi_{\phi_{1}}(\cdot,q_{2})$ (they have the same guarding predicate). Policy $\pi_{\phi}(\cdot,q_{0})$ is a switching policy between $\pi_{\phi_{1}}(\cdot,q_{0})$ and $\pi_{\phi_{2}}(\cdot,q_{0})$. Figure 2 (lower left) shows the robustness comparison at each state. The policy with lower robustness is chosen following Equation (14). We can see that the robustness of both policies is the same from $s=-5$ to $s=0$, and their policies agree in this range (Figures 3 and 4). As $s$ becomes larger, disagreements emerge because $\pi_{\phi_{1}}(\cdot,q_{0})$ wants to stay closer to $b$ but $\pi_{\phi_{2}}(\cdot,q_{0})$ wants otherwise. To maximize the robustness of their conjunction, the decisions of $\pi_{\phi_{2}}(\cdot,q_{0})$ are chosen for states $s>0$. 6.2 Simulated Baxter In this section, we construct a set of more complicated tasks that require temporal reasoning and evaluate the proposed techniques on a simulated Baxter robot. The environment is shown in Figure 3 (left). In front of the robot are three square regions and two circular regions. For an object with planar coordinates $p=(x,y)$, the predicates $\mathcal{S}_{red}(p),\mathcal{S}_{blue}(p),\mathcal{S}_{black}(p)$, $\mathcal{C}_{red}(p),\mathcal{C}_{blue}(p)$ evaluate whether or not it is within each region. The predicates are defined by $\mathcal{S}:(x_{min}<x<x_{max})\wedge(y_{min}<y<y_{max})$ and $\mathcal{C}:dist((x,y),(x,y)_{center})<r$. 
$(x_{min},y_{min})$ and $(x_{max},y_{max})$ are the boundary coordinates of the square region; $(x,y)_{center}$ and $r$ are the center and radius of the circular region. There are also two boxes, whose planar positions are denoted $p_{redbox}=(x,y)_{redbox}$ and $p_{bluebox}=(x,y)_{bluebox}$. Lastly, there is a sphere that can randomly move in space, whose 2D coordinate is denoted $p_{sphere}=(x,y)_{sphere}$ (all objects move in the table plane). We design seven tasks, each specified by a scTLTL formula. The task specifications and their English translations are provided in Appendix A. Throughout the experiments in this section, we use proximal policy optimization Schulman et al. (2017) as the policy optimization method. The hyperparameters are kept fixed across the experiments and are listed in Appendix B. The policy is a Gaussian distribution parameterized by a feed-forward neural network with 2 hidden layers, each of 64 relu units. The state and action spaces vary across tasks and comparison cases, and are described in Appendix C. We use the first task $\phi_{1}$ to evaluate the learning outcome using the FSA augmented MDP. As comparisons, we design two other reward structures. The first uses the robustness $\rho(s_{0:T},\phi)$ as the terminal reward for each episode and zero everywhere else; the second is a heuristic reward that aims to align with $\phi_{1}$. The heuristic reward consists of a state that keeps track of whether the sphere is in a region and a set of quadratic distance functions. For $\phi_{1}$, the heuristic reward is $$r_{\phi_{1}}=\begin{cases}-dist(p_{redbox},p_{\textrm{red square center}})&p_{sphere}\textrm{ is in red circle}\\ -dist(p_{redbox},p_{\textrm{black square center}})&\text{otherwise}.\end{cases}$$ (15) Heuristic rewards for other tasks are defined in a similar manner and are not presented explicitly. The results are illustrated in Figure 3 (right). The upper right plot shows the average robustness over training iterations. 
Robustness is chosen as the comparison metric for its semantic rigor (robustness greater than zero means the task specification is satisfied). The reported values are averaged over 60 episodes, and the plot shows the mean and 2 standard deviations over 5 random seeds. From the plot we can observe that the FSA augmented MDP and the terminal robustness reward performed comparably in terms of convergence rate, whereas the heuristic reward fails to learn the task. The FSA augmented MDP also learns a policy with lower variance in final performance. We deploy the learned policy on the robot in simulation and record the task success rate. For each of the three cases, we deploy the 5 policies learned from 5 random seeds on the robot and perform 10 sets of tests with randomly initialized states, resulting in 50 test trials for each case. The average success rate is presented in Figure 3 (lower right). From the results we can see that the FSA augmented MDP is able to achieve the highest rate of success, and this advantage over the robustness reward is due to the low variance of its final policy. To evaluate the policy switching technique for skill composition, we first learn four relatively simple policies $\pi_{\phi_{2}},\pi_{\phi_{3}},\pi_{\phi_{4}},\pi_{\phi_{5}}$ using the FSA augmented MDP. Then we construct $\pi_{\phi_{6}}=\pi_{\phi_{2}\wedge\phi_{3}}$ and $\pi_{\phi_{7}}=\pi_{\phi_{2}\wedge\phi_{3}\wedge\phi_{4}\wedge\phi_{5}}$ using Equation (14). (It is worth mentioning that the policies learned with the robustness and heuristic rewards do not have an automaton state in them; therefore the skill composition technique does not apply.) We deploy $\pi_{\phi_{6}}$ and $\pi_{\phi_{7}}$ on tasks 6 and 7 for 10 trials each and record the average robustness of the resulting trajectories. As comparisons, we also learn tasks 6 and 7 from scratch using terminal robustness rewards and heuristic rewards; the results are presented in Figure 4. 
We can observe from the plots that as the complexity of the tasks increases, the robustness and heuristic rewards fail to learn a policy that satisfies the specifications, while the constructed policy can reliably achieve a robustness greater than zero. We perform the same deployment test as previously described, and looking at Figure 4 (right) we can see that for both tasks 6 and 7, only the policies constructed by skill composition are able to consistently complete the tasks. 7 Conclusion In this paper, we proposed the FSA augmented MDP, a product MDP that enables effective learning of hierarchical policies using any RL algorithm for tasks specified by scTLTL. We also introduced automata guided skill composition, a technique that combines existing skills to create new skills without additional learning. We show in robotic simulations that using the proposed methods we enable simple policies to perform logically complex tasks. Limitations of the current framework include discontinuity at the point of switching (for Equation (14)), which makes this method suitable for high level decision tasks but not for low level control tasks. The technique only compares robustness at the current step and chooses to follow a sub-policy for one time-step; this makes the switching policy short-sighted and may cause it to miss long term opportunities. One way to address this is to impose a termination condition for following each sub-policy and terminate only when the condition is triggered (as in the original options framework). This termination condition can be hand designed or learned. References Aksaray et al. (2016) Derya Aksaray, Austin Jones, Zhaodan Kong, Mac Schwager, and Calin Belta. Q-learning for robust satisfaction of signal temporal logic specifications. In Decision and Control (CDC), 2016 IEEE 55th Conference on, pp. 6565–6570. IEEE, 2016. Baier & Katoen (2008) Christel Baier and Joost-Pieter Katoen. Principles of Model Checking, volume 950. 2008. Belta et al. 
(2017) Calin Belta, Boyan Yordanov, and Ebru Aydin Gol. Formal methods for discrete-time dynamical systems, 2017. Chen et al. (2012) Yushan Chen, Kun Deng, and Calin Belta. Multi-agent persistent monitoring in stochastic environments with temporal logic constraints. In Decision and Control (CDC), 2012 IEEE 51st Annual Conference on, pp.  2801–2806. IEEE, 2012. Da Silva et al. (2009) Marco Da Silva, Frédo Durand, and Jovan Popović. Linear bellman combination for control of character animation. Acm transactions on graphics (tog), 28(3):82, 2009. Gu et al. (2016) Shixiang Gu, Ethan Holly, Timothy Lillicrap, and Sergey Levine. Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates. arXiv preprint arXiv:1610.00633, 2016. Haarnoja et al. (2017) Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, and Sergey Levine. Reinforcement learning with deep energy-based policies. arXiv preprint arXiv:1702.08165, 2017. Heess et al. (2016) Nicolas Heess, Greg Wayne, Yuval Tassa, Timothy Lillicrap, Martin Riedmiller, and David Silver. Learning and transfer of modulated locomotor controllers. arXiv preprint arXiv:1610.05182, 2016. Isele et al. (2017) David Isele, Akansel Cosgun, Kaushik Subramanian, and Kikuo Fujimura. Navigating Intersections with Autonomous Vehicles using Deep Reinforcement Learning. may 2017. URL http://arxiv.org/abs/1705.01196. Jaderberg et al. (2016) Max Jaderberg, Volodymyr Mnih, Wojciech Marian Czarnecki, Tom Schaul, Joel Z Leibo, David Silver, and Koray Kavukcuoglu. Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397, 2016. (11) Eric Jang, Google Brain, Sudheendra Vijayanarasimhan, Peter Pastor, Julian Ibarz, and Sergey Levine. End-to-End Learning of Semantic Grasping. URL https://arxiv.org/pdf/1707.01932.pdf. Kulkarni et al. (2016) Tejas D. Kulkarni, Karthik R. Narasimhan, Ardavan Saeedi, and Joshua B. Tenenbaum. 
Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation. 2016. URL http://arxiv.org/abs/1604.06057. Kupferman & Vardi (2001) Orna Kupferman and Moshe Y Vardi. Model checking of safety properties. Formal Methods in System Design, 19(3):291–314, 2001. Leahy et al. (2015) Kevin Leahy, Austin Jones, Mac Schwager, and Calin Belta. Distributed information gathering policies under temporal logic constraints. In Decision and Control (CDC), 2015 IEEE 54th Annual Conference on, pp.  6803–6808. IEEE, 2015. Levine et al. (2016) Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen. Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection. arXiv, 2016. doi: 10.1145/2835776.2835844. URL http://arxiv.org/abs/1603.02199v1. Li et al. (2016) Xiao Li, Cristian-Ioan Vasile, and Calin Belta. Reinforcement learning with temporal logic rewards. arXiv preprint arXiv:1612.03471, 2016. Madrigal (2017) Alexis C Madrigal. Inside Waymo’s Secret World for Training Self-Driving Cars, 2017. URL https://www.theatlantic.com/technology/archive/2017/08/inside-waymos-secret-testing-and-simulation-facilities/537648/. Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei a Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529–533, 2015. doi: 10.1038/nature14236. Oh et al. (2017) Junhyuk Oh, Satinder Singh, Honglak Lee, and Pushmeet Kohli. Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning. 2017. Sadraddini & Belta (2015) Sadra Sadraddini and Calin Belta. Robust Temporal Logic Model Predictive Control. 53rd Annual Conference on Communication, Control, and Computing (Allerton), 2015. 
Schulman et al. (2017) John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347, 2017. Silver et al. (2016) David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, and Koray Kavukcuoglu. Mastering the game of Go with deep neural networks and tree search. Nature, 529(7585):484–489, 2016. ISSN 0028-0836. doi: 10.1038/nature16961. URL http://dx.doi.org/10.1038/nature16961. Singh et al. (2004) S. Singh, A.G. Barto, and N. Chentanez. Intrinsically motivated reinforcement learning. 18th Annual Conference on Neural Information Processing Systems (NIPS), 17(2):1281–1288, 2004. ISSN 1943-0604. doi: 10.1109/TAMD.2010.2051031. Sutton et al. (1998) Richard S Sutton, Doina Precup, and Satinder Singh. Between MDPs and Semi-MDPs: Learning, Planning, and Representing Knowledge at Multiple Temporal Scales. Artificial Intelligence, 1(98-74):1–39, 1998. (25) Haoran Tang and Tuomas Haarnoja. Learning Diverse Skills via Maximum Entropy Deep Reinforcement Learning. URL http://bair.berkeley.edu/blog/2017/10/06/soft-q-learning/. Todorov (2009) Emanuel Todorov. Compositionality of optimal control laws. In Advances in Neural Information Processing Systems, pp. 1856–1864, 2009. Ulusoy (2017) A Ulusoy. Ltl optimal multi-agent planner (lomap). Github repository, 2017. Watkins (1989) Christopher John Cornish Hellaby Watkins. Learning From Delayed Rewards. PhD thesis, King’s College, Cambridge, England, 1989. 
Appendix Appendix A Task Specifications Appendix B Hyperparameters For Proximal Policy Optimization Appendix C State And Action Spaces For experiments with the simulated Baxter robot, we delegate low-level control to motion-planning packages and only learn high-level decisions. Depending on the task, the states are the current planar positions of the objects (red box, blue box, sphere) and the automaton state. The actions are the target positions of the objects. We assume that the low-level controller can take objects to the desired target position with minor uncertainty, which is dealt with by the learning agent. The table below shows the state and action spaces used for each task. $s_{\mathcal{M}}$ and $a_{\mathcal{M}}$ denote the spaces for the regular MDP (used for terminal robustness rewards and heuristic rewards). $s_{\tilde{\mathcal{M}}}$ and $a_{\tilde{\mathcal{M}}}$ denote the spaces for the FSA-augmented MDP. $q$ denotes the automaton state. For tasks $\phi_{6}$ and $\phi_{7}$, the action space is three-dimensional: the first two dimensions $p=(x,y)$ specify a target position, and the third dimension $d$ selects which object is placed at $p$. If $d<0.5$, then $p=p_{redbox}$, and if $d>0.5$, then $p=p_{bluebox}$. Appendix D Learning With FSA Augmented MDP (off-policy version)
Diffraction manipulation by four-wave mixing I. Katzir    A. Ron Department of Physics, Technion - Israel Institute of Technology, Haifa 32000, Israel    O. Firstenberg Department of Physics, Harvard University, Cambridge, MA 02138, USA Abstract We suggest a scheme to manipulate paraxial diffraction by utilizing the dependence of a four-wave mixing process on the relative angle between the light fields. A microscopic model for four-wave mixing in a $\Lambda$-type level structure is introduced and compared to recent experimental data. We show that images with feature sizes as low as 10 $\mu$m can propagate with very little or even negative diffraction. The inherent gain prevents loss and allows for operation at high optical depths. Our scheme does not rely on atomic motion and is thus applicable to both gaseous and solid media. I Introduction The diffraction of light during propagation in free space is a fundamental and generally unavoidable physical phenomenon. Because of diffraction, light beams do not maintain their intensity distribution in the plane transverse to the propagation direction, unless they belong to the particular class of non-diffracting (Bessel) beams Durnin (1987). In nonuniform media, waveguiding is possible for specific spatial modes Kapoor and Agarwal (2000); Truscott et al. (1999), or equivalently, arbitrary images may revive after a certain self-imaging distance Cheng and Han (2007). However, in such waveguides, the suppression of diffraction for multimode profiles is not trivial, as each transverse mode propagates with a different propagation constant or group velocity, resulting in spatial dispersion of the profile. Recently, a technique was suggested Firstenberg et al. (2009a) and experimentally demonstrated Firstenberg et al. (2009b) to manipulate (eliminate, double, or reverse) the diffraction of arbitrary images imprinted on a light beam, for arbitrary propagation distances. 
The technique is based on electromagnetically induced transparency (EIT) Fleischhauer et al. (2005) in a thermal atomic gas. Unlike other methods utilizing EIT Moseley et al. (1995); Mitsunaga et al. (2000); Bortman-Arbiv et al. (1998); Kapoor and Agarwal (2000); Truscott et al. (1999); Andre et al. (2005); Cheng et al. (2005); Cheng and Han (2007); Vengalattore and Prentiss (2005); Tarhan et al. (2007); Hong (2003); Friedler et al. (2005); Gómez-Reino and Larrea (1982), which rely on spatial non-uniformity, this technique operates in $k_{\perp}$ space and prescribes the non-uniformity there. Here, $k_{\perp}$ denotes the transverse wave vectors, i.e., the Fourier components of the envelope of the field in the transverse plane, which is the natural basis for paraxial diffraction. The technique of Refs. Firstenberg et al. (2009a, b) relies on the diffusion of the atoms in the medium and on the resulting so-called Dicke narrowing Firstenberg et al. (2007, 2008). Due to Dicke narrowing, the linear susceptibility becomes quadratic in $k_{\perp}$ and results in motional-induced diffraction that can counteract the free-space diffraction. Unfortunately, under currently available experimental conditions, the resolution limit of motional-induced diffraction is on the order of 100 $\mu$m, preventing it from being of much practical use. Higher resolution requires a denser atomic gas, in which strong absorption is unavoidable due to imperfect EIT conditions. Very recently, Zhang and Evers proposed to circumvent the absorption by generalizing the model of motional-induced diffraction to a four-wave mixing (FWM) process in combination with EIT Zhang and Evers (2013). The FWM process further allows the frequency conversion of the image and increases the available resolution Zhang and Evers (2013). In this paper, we propose a scheme to manipulate diffraction using FWM Chiao et al. (1966); Carman et al. (1966); Boyd et al. (1981) without the need for motional-induced diffraction. 
The mechanism we study originates from phase matching in $k_{\perp}$ space and does not require a gaseous medium; it is therefore directly applicable to solid nonlinear media. For our model to be general and accommodate motional-broadening mechanisms (not important in solids), we nevertheless concentrate here on describing atomic gases and validate our model against relevant experiments. The inherent gain of the FWM process allows us to improve the spatial resolution by working with relatively high gas densities while avoiding loss due to absorption. In Sec. II, we introduce a microscopic model of FWM in a $\Lambda$ system, based on Liouville-Maxwell equations, similar to the one used in Ref. Harada et al. (2008). In Sec. III, we compare the model to recent experimental results of FWM in hot vapor Harada et al. (2008); McCormick et al. (2007). We use our model in Sec. IV to show that, with a specific choice of frequencies, the $k_{\perp}$ dependence of the FWM process can be used to eliminate the diffraction of a propagating light beam. We also present a demonstration of negative diffraction, implementing a paraxial version of a negative-index lens Veselago (1968), similar to the one in Ref. Firstenberg et al. (2009b) but with positive gain and higher resolution. Finally, we analyze the resolution limitation of our scheme and propose ways to enhance it. We show that, for cold atoms at high densities ($\sim$10${}^{12}$ cm${}^{-3}$), diffraction-less propagation of an image with a resolution of $\sim$10 $\mu$m can be achieved. II Theory II.1 Model We consider an ensemble of three-level atoms in a $\Lambda$ configuration depicted in Fig. 1a. 
The atomic states are denoted as $\left|u\right\rangle$, $\left|l\right\rangle$, and $\left|r\right\rangle$ for the up, left, and right states, and the corresponding energies are $\epsilon_{u},\epsilon_{l},$ and $\epsilon_{r}$. We introduce the optical transition frequencies $\omega_{ul}=\left(\epsilon_{u}-\epsilon_{l}\right)/\hslash$ and $\omega_{ur}=\left(\epsilon_{u}-\epsilon_{r}\right)/\hslash$, taken to be much larger than the ground-state splitting $\omega_{lr}=\omega_{ul}-\omega_{ur}=\left(\epsilon_{r}-\epsilon_{l}\right)/\hslash$. To simplify the formalism, we assume the same dipole moment for the two optical transitions, $\mu=\mu_{ul}=\mu_{ur}$, where $\mu_{\alpha\alpha^{\prime}}=\left\langle\alpha\right|\mathbf{\mu}\cdot\widehat{\mathbf{x}}\left|\alpha^{\prime}\right\rangle$, $\mathbf{\mu}$ being the dipole-moment operator. The atom interacts with three external, classical electromagnetic fields, propagating in time $t$ and space $\mathbf{r}$, $$\begin{array}[c]{c}\mathbf{E}_{cl}\left(\mathbf{r},t\right)=(\hslash/\mu)\boldsymbol{\varepsilon}_{cl}\Omega_{c}\left(\mathbf{r},t\right)e^{-i\omega_{c}t}e^{ik_{c}^{0}z},\\ \mathbf{E}_{cr}\left(\mathbf{r},t\right)=(\hslash/\mu)\boldsymbol{\varepsilon}_{cr}\Omega_{c}\left(\mathbf{r},t\right)e^{-i\omega_{c}t}e^{ik_{c}^{0}z},\\ \mathbf{E}_{p}\left(\mathbf{r},t\right)=(\hslash/\mu)\boldsymbol{\varepsilon}_{p}\Omega_{p}\left(\mathbf{r},t\right)e^{-i\omega_{p}t}e^{ik_{p}^{0}z}.\end{array}$$ (1) Here $\omega_{i}$ are the frequencies of a weak ’probe’ ($i=p$) and two ’control’ fields ($i=c$); $\boldsymbol{\varepsilon}_{j}$ are the polarization vectors (with $j=p,cl,cr$); $k_{i}^{0}\equiv\omega_{i}/c$ are the wave vectors in the case of plane waves, and otherwise they are carrier wave vectors; and $\Omega_{i}\left(\mathbf{r},t\right)$ are the slowly varying envelopes of the Rabi frequencies, satisfying $\left|\partial_{t}^{2}\Omega_{i}\left(\mathbf{r},t\right)\right|\ll\left|\omega_{i}\partial_{t}\Omega_{i}\left(\mathbf{r},t\right)\right|$ and $\left|\partial_{z}^{2}\Omega_{i}\left(\mathbf{r},t\right)\right|\ll\left|k_{i}^{0}\partial_{z}\Omega_{i}\left(\mathbf{r},t\right)\right|$. We shall analyze the case of two identical control fields with the same Rabi frequency $\Omega_{c}$, wave vector $k_{c}^{0}$, and frequency $\omega_{c}$. The strong control and weak probe fields stimulate a weak classical ’Stokes’ field (or ’conjugate’) at a frequency $\omega_{s}=2\omega_{c}-\omega_{p}$, $$\mathbf{E}_{s}\left(\mathbf{r},t\right)=(\hslash/\mu)\boldsymbol{\varepsilon}_{s}\Omega_{s}\left(\mathbf{r},t\right)e^{-i\omega_{s}t}e^{ik_{s}^{0}z}.$$ (2) The resonances are characterized by the one-photon detuning $\Delta_{1p}=\omega_{c}-\omega_{ur}$ and the two-photon detuning $\Delta_{2p}=\omega_{p}-\omega_{c}-\omega_{lr}$ (see Fig. 1a). The population of the excited level $\left|u\right\rangle$ decays to the ground levels $\left|l\right\rangle$ and $\left|r\right\rangle$ with rates $\Gamma_{l}$ and $\Gamma_{r}$. The atomic coherence between the excited level $\left|u\right\rangle$ and each of the ground levels $\left|l\right\rangle$ and $\left|r\right\rangle$ decays with rates $\Gamma_{d,l}$ and $\Gamma_{d,r}$. For simplicity, we assume $\Gamma_{l}=\Gamma_{r}=\Gamma_{d,l}=\Gamma_{d,r}\equiv\Gamma$. Within the ground state, we consider population relaxation with a symmetric rate $\Gamma_{l\leftrightarrow r}$ and decoherence with rate $\Gamma_{lr}$. 
In a frame rotating with the control frequency $\omega_{c}$, the equations of motion for the local density matrix $\rho\left(\mathbf{r},t\right)$ are better written in terms of the slowly varying density matrix $R\left(\mathbf{r},t\right)$, where $R_{u,j}\left(\mathbf{r},t\right)=\rho_{u,j}\left(\mathbf{r},t\right)e^{i\omega_{c}t-ik_{c}^{0}z}$ for $j=l,r$ and $R_{\alpha,\alpha^{\prime}}\left(\mathbf{r},t\right)=\rho_{\alpha,\alpha^{\prime}}\left(\mathbf{r},t\right)$ for all other matrix elements, $$\displaystyle\frac{\partial}{\partial t}R_{l,l}$$ $$\displaystyle=-2\operatorname{Im}(\hat{P}^{\ast}R_{u,l})+\Gamma_{l\leftrightarrow r}(R_{r,r}-R_{l,l})+\Gamma R_{u,u}$$ $$\displaystyle\frac{\partial}{\partial t}R_{r,r}$$ $$\displaystyle=-2\operatorname{Im}(\hat{S}^{\ast}R_{u,r})-\Gamma_{l\leftrightarrow r}(R_{r,r}-R_{l,l})+\Gamma R_{u,u}$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,u}$$ $$\displaystyle=2\operatorname{Im}(\hat{P}^{\ast}R_{u,l})+2\operatorname{Im}(\hat{S}^{\ast}R_{u,r})-2\Gamma R_{u,u}$$ $$\displaystyle\frac{\partial}{\partial t}R_{r,l}$$ $$\displaystyle=i\hat{S}^{\ast}R_{u,l}-i\hat{P}R_{u,r}^{\ast}-(\Gamma_{lr}+i\omega_{lr})R_{r,l}$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,l}$$ $$\displaystyle=-i\hat{P}\left(R_{u,u}-R_{l,l}\right)+i\hat{S}R_{r,l}+\gamma_{cl}^{\ast}R_{u,l}$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,r}$$ $$\displaystyle=-i\hat{S}\left(R_{u,u}-R_{r,r}\right)+i\hat{P}R_{r,l}^{\ast}+\gamma_{cr}^{\ast}R_{u,r}.$$ (3) Here $\gamma_{cj}=\Gamma-i\left(\omega_{c}-\omega_{uj}\right)$ are complex one-photon detunings of the control fields ($j=l,r$), $\delta_{\omega}=\omega_{p}-\omega_{c}=\omega_{c}-\omega_{s}$ and $\delta_{k}=k_{p}^{0}-k_{c}^{0}=k_{c}^{0}-k_{s}^{0}$ are detuning parameters, and $$\displaystyle\hat{P}$$ $$\displaystyle\equiv\Omega_{p}\left(\mathbf{r},t\right)e^{-i\left(\delta_{\omega}t-\delta_{k}z\right)}+\Omega_{c},$$ (4a) $$\displaystyle\hat{S}$$ $$\displaystyle\equiv\Omega_{s}\left(\mathbf{r},t\right)e^{i\left(\delta_{\omega}t-\delta_{k}z\right)}+\Omega_{c}$$ (4b) are interference fields. Assuming non-depleted control fields, constant in time and space, $\Omega_{c}\left(\mathbf{r},t\right)=\Omega_{c}$, we complete the description of the atom-field interaction with the propagation equations under the envelope approximation for the probe field $$\left(\frac{\partial}{\partial z}+\frac{1}{c}\frac{\partial}{\partial t}-\frac{i\nabla_{\perp}^{2}}{2q}\right)\Omega_{p}\left(\mathbf{r},t\right)=igR_{u,l}\left(\mathbf{r},t\right)e^{i(\delta_{\omega}t-\delta_{k}z)}$$ (5a) and the Stokes field $$\left(\frac{\partial}{\partial z}+\frac{1}{c}\frac{\partial}{\partial t}-\frac{i\nabla_{\perp}^{2}}{2q}\right)\Omega_{s}\left(\mathbf{r},t\right)=igR_{u,r}\left(\mathbf{r},t\right)e^{-i(\delta_{\omega}t-\delta_{k}z)},$$ (5b) where $\nabla_{\perp}^{2}\equiv\partial^{2}/\partial x^{2}+\partial^{2}/\partial y^{2}$ is the transverse Laplacian, $g=2\pi N\left|\mu\right|^{2}q/\hslash$ is the coupling strength proportional to the atomic density $N$, and $q\equiv\left|k_{c}^{0}\right|\approx\left|k_{p}^{0}\right|\approx\left|k_{s}^{0}\right|$. To obtain Eqs. (5), we neglected the second-order $t$ and $z$ derivatives of the envelopes. II.2 Steady-state solution The evolution of the fields is described by a set of non-linear, coupled differential equations for the density-matrix elements $R_{\alpha,\alpha^{\prime}}$ and the weak fields $\Omega_{p}$ and $\Omega_{s}$ [Eqs. (3)-(5)], which require further approximations to be solved analytically. The solutions and assumptions are detailed in the Appendix, where we find the steady state of the system to first order in the weak fields, $$R_{\alpha,\alpha^{\prime}}\simeq R_{\alpha,\alpha^{\prime}}^{(0)}+R_{\alpha,\alpha^{\prime}}^{(1)},$$ (6) $R_{\alpha,\alpha^{\prime}}^{(0)}$ and $R_{\alpha,\alpha^{\prime}}^{(1)}$ being the zero- and first-order steady-state solutions. 
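As a quick numerical sanity check on the paraxial diffraction term appearing in Eqs. (5), a transverse plane wave $e^{ik_{\perp}x}$ is an eigenfunction of $\nabla_{\perp}^{2}$ with eigenvalue $-k_{\perp}^{2}$. The sketch below (illustrative, not from the paper; the grid spacing and wavenumber are arbitrary choices) verifies this with a central finite difference:

```python
import numpy as np

# Transverse grid and a plane-wave envelope exp(i k_perp x).
# k_perp and dx are arbitrary illustrative values.
k_perp = 2.0
dx = 1e-3
x = np.arange(0.0, 1.0, dx)
psi = np.exp(1j * k_perp * x)

# Central-difference transverse Laplacian (1D cut), interior points only.
lap = (psi[2:] - 2.0 * psi[1:-1] + psi[:-2]) / dx**2

# For a plane wave, grad_perp^2 psi = -k_perp^2 psi.
rel_err = np.max(np.abs(lap + k_perp**2 * psi[1:-1])) / k_perp**2
print(rel_err)  # ~3e-7: the O((k_perp*dx)^2/12) finite-difference error
```

This is why the transverse Fourier basis, used below in Sec. IV, diagonalizes the paraxial evolution.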
The most important assumption is the proximity to two-photon resonance, such that $\delta_{\omega}$ is on the order of the ground-state frequency splitting $\omega_{lr}$ and much larger than any detuning, Rabi frequency, or pumping rate in the system. Plugging Eqs. (24)-(27) and (6) for $R_{u,r}$ and $R_{u,l}$ into the propagation equations (5) and discarding terms rotating at $\delta_{\omega}$ and $2\delta_{\omega}$, we obtain the well-known FWM form including paraxial diffraction, $$\displaystyle\left(\frac{\partial}{\partial z}-i\frac{1}{2q}\nabla_{\perp}^{2}\right)\Omega_{p}\left(\mathbf{r}\right)$$ $$\displaystyle=A\Omega_{p}\left(\mathbf{r}\right)+B\Omega_{s}^{\ast}\left(\mathbf{r}\right)$$ (7a) $$\displaystyle\left(\frac{\partial}{\partial z}+i\frac{1}{2q}\nabla_{\perp}^{2}\right)\Omega_{s}^{\ast}\left(\mathbf{r}\right)$$ $$\displaystyle=C\Omega_{p}\left(\mathbf{r}\right)+D\Omega_{s}^{\ast}\left(\mathbf{r}\right),$$ (7b) where $$\displaystyle A$$ $$\displaystyle\equiv-\alpha_{p}+\frac{\beta_{p}}{\gamma_{pl}\gamma_{0}}\left|\Omega_{c}\right|^{2},\ \ \ B\equiv\frac{\beta_{s}}{\gamma_{pl}\gamma_{0}}\left|\Omega_{c}\right|^{2},$$ (8) $$\displaystyle C$$ $$\displaystyle\equiv\frac{\beta_{p}}{\gamma_{sr}^{\ast}\gamma_{0}}\left|\Omega_{c}\right|^{2},\ \ \ \ \ \ D\equiv-\alpha_{s}^{\ast}+\frac{\beta_{s}}{\gamma_{sr}^{\ast}\gamma_{0}}\left|\Omega_{c}\right|^{2}.$$ Here $\alpha_{j}=g\left(n_{l}/\gamma_{jl}+n_{r}/\gamma_{jr}\right)$ are the linear absorption coefficients of the probe ($j=p$) or Stokes ($j=s$) fields, with $n_{i}\equiv R_{i,i}^{(0)}$ the populations of the $i=r,l$ levels. 
$\beta_{p}=g\left(n_{l}/\gamma_{pl}+n_{r}/\gamma_{cr}^{\ast}\right)$ and $\beta_{s}=g\left(n_{r}/\gamma_{sr}^{\ast}+n_{l}/\gamma_{cl}\right)$ are two-photon interaction coefficients, $\gamma_{jk}=\Gamma-i\left(\omega_{j}-\omega_{uk}\right)$ $[j=p,c,s;$ $k=l,r]$ are complex one-photon detunings, and $\gamma_{0}=\Gamma_{lr}+\left|\Omega_{c}\right|^{2}/\gamma_{sr}^{\ast}+\left|\Omega_{c}\right|^{2}/\gamma_{pl}-i(\delta_{\omega}-\omega_{lr})$ is the complex two-photon detuning. Eqs. (7) are similar to those obtained by Harada et al. (2008), but here include the diffraction term $\pm i\nabla_{\perp}^{2}/(2q)$, which we require in order to explore the spatial evolution of the FWM process. We start with the simple case of a weak plane-wave probe $\Omega_{p}\left(\mathbf{r}\right)=f\left(z\right)e^{i\boldsymbol{k}_{\perp}^{p}\cdot\boldsymbol{r}_{\perp}}e^{i(k_{z}^{p}-k_{0}^{p})z}$ directed at some small angle $\theta\approx k_{\perp}^{p}/k_{0}^{p}\ll 1$ relative to the $z$ axis (Fig. 1). We assume that the generated Stokes field is also a plane wave, $\Omega_{s}\left(\mathbf{r}\right)=g\left(z\right)e^{i\boldsymbol{k}_{\perp}^{s}\cdot\boldsymbol{r}_{\perp}}e^{i\left(k_{z}^{s}-k_{0}^{s}\right)z}$. Substituting into Eqs. (7), the phase-matching condition $\boldsymbol{k}_{\perp}^{s}=-\boldsymbol{k}_{\perp}^{p}$ is easily obtained, and the resulting equations for $f$ and $g$ are Boyd et al. (1981) $$\displaystyle f^{\prime}(z)$$ $$\displaystyle=Af(z)+Bg^{\ast}(z)e^{i2k_{\Delta}z},$$ (9a) $$\displaystyle g^{\prime\ast}(z)$$ $$\displaystyle=Cf(z)+Dg^{\ast}(z)e^{-i2k_{\Delta}z},$$ (9b) where $2k_{\Delta}=\left(2\boldsymbol{k}_{c}-\boldsymbol{k}_{p}-\boldsymbol{k}_{s}\right)\cdot\hat{z}\thickapprox k_{\perp}^{2}/q$ is the phase-mismatch scalar. Assuming $f\left(0\right)=1$ and $g\left(0\right)=0$, we follow Ref. Boyd et al. (1981) and find along the medium $$\displaystyle f\left(z\right)e^{-ik_{\Delta}z}$$ $$\displaystyle=\frac{A-\lambda_{2}}{\lambda_{1}-\lambda_{2}}e^{\lambda_{1}z}-\frac{A-\lambda_{1}}{\lambda_{1}-\lambda_{2}}e^{\lambda_{2}z},$$ (10a) $$\displaystyle g^{\ast}\left(z\right)e^{ik_{\Delta}z}$$ $$\displaystyle=-\frac{C}{\lambda_{1}-\lambda_{2}}\left(e^{\lambda_{1}z}-e^{\lambda_{2}z}\right),$$ (10b) with the eigenvalues $$\lambda_{1,2}=\frac{A+D}{2}\pm\frac{1}{2}\sqrt{\left(A-D-i2k_{\Delta}\right)^{2}+4BC}.$$ (11) In the limit where $\left|B\right|$ and $\left|C\right|$ are much smaller than $\left|A\right|$ and $\left|D\right|$, the solution is governed by independent EIT for the probe and Stokes fields, with little coupling between them. In the opposite limit, the fields experience strong coupling, and the real part of the eigenvalues $\lambda_{1,2}$ can be made positive and result in gain. III Comparison with experiments To verify our model, we have calculated the probe transmission as a function of the two-photon detuning and compared it to the data published in Refs. Harada et al. (2008); McCormick et al. (2007). The Doppler effect due to the motion of the thermal atoms is taken into account by averaging the FWM coefficients, $Q=A,B,C,D$ in Eq. (8), over the Doppler profile Happer and Mathur (1967). Assuming nearly collinear beams, the mean coefficients are $$\overline{Q}=\frac{1}{\sqrt{2\pi}v_{th}}\int duQ\left(\omega_{p}+qu,\omega_{c}+qu\right)\exp\left(\frac{-u^{2}}{2v_{th}^{2}}\right),$$ (12) where $v_{th}=\sqrt{k_{B}T/m}$ is the thermal velocity, $T$ the cell temperature, and $m$ the atomic mass. Fig. 2 shows the transmission spectrum in (a) rubidium vapor and (b) sodium vapor (cell length $L\simeq 5$ cm). Our model reproduces the experimental spectra, including the Doppler-broadened absorption lines and the gain peaks, for both the rubidium and sodium experiments. The missing peak in Fig. 2b is due to anti-Stokes generation, which is not included in the model. 
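The Doppler average of Eq. (12) is a one-dimensional Gaussian integral and is straightforward to evaluate numerically. The sketch below (illustrative numbers, not the experimental parameters; the Lorentzian stands in for a generic coefficient $Q$) weights a detuning-shifted coefficient by the normalized distribution of the line-of-sight velocity $u$ and checks two generic features: the weight is normalized, and averaging lowers the on-resonance peak:

```python
import numpy as np

# Illustrative parameters: natural width Gamma, thermal velocity v_th,
# and carrier wavenumber q_wn (arbitrary, consistent units).
Gamma = 1.0
v_th = 5.0
q_wn = 1.0

def Q(delta):
    """Toy coefficient: a Lorentzian in the one-photon detuning."""
    return Gamma**2 / (Gamma**2 + delta**2)

# Doppler average per Eq. (12): shift the detuning by q*u and weight by
# the Gaussian distribution of the line-of-sight velocity u.
u = np.linspace(-8.0 * v_th, 8.0 * v_th, 4001)
du = u[1] - u[0]
weight = np.exp(-u**2 / (2.0 * v_th**2)) / (np.sqrt(2.0 * np.pi) * v_th)

norm = np.sum(weight) * du                       # normalization of the weight
Q_bar0 = np.sum(weight * Q(q_wn * u)) * du       # averaged coefficient on resonance

print(abs(norm - 1.0) < 1e-4, Q_bar0 < Q(0.0))  # prints: True True
```

Because the Doppler width $qv_{th}$ here exceeds $\Gamma$, the averaged line is broadened and its peak suppressed, which is the qualitative effect captured in Fig. 2.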
IV Diffraction manipulation by FWM We now concentrate on a specific choice of frequency detunings, for which the phase dependency of the FWM process can be used to manipulate the diffraction of the propagating probe and Stokes. To this end, we study the evolution of an arbitrary image $F\left(\boldsymbol{r}_{\perp}\right)$ imprinted on the probe beam, with the boundary conditions $\Omega_{p}\left(\boldsymbol{r}_{\perp},0\right)=F\left(\boldsymbol{r}_{\perp}\right)$ and $\Omega_{s}\left(\boldsymbol{r}_{\perp},0\right)=0$. Our prime examples shall be the propagation of the image without diffraction or with reverse diffraction, both while experiencing gain. IV.1 Image propagation We start by solving Eqs. (7) in the transverse Fourier basis, $$\displaystyle\left(\frac{\partial}{\partial z}+i\frac{k_{\perp}^{2}}{2q}-\bar{A}\right)\Omega_{p}\left(\boldsymbol{k}_{\perp},z\right)$$ $$\displaystyle=\bar{B}\Omega_{s}^{\ast}\left(\boldsymbol{k}_{\perp},z\right),$$ (13a) $$\displaystyle\left(\frac{\partial}{\partial z}-i\frac{k_{\perp}^{2}}{2q}-\bar{D}\right)\Omega_{s}^{\ast}\left(\boldsymbol{k}_{\perp},z\right)$$ $$\displaystyle=\bar{C}\Omega_{p}\left(\boldsymbol{k}_{\perp},z\right),$$ (13b) where $\Omega_{p/s}\left(\boldsymbol{k}_{\perp},z\right)=\int dr_{\perp}^{2}e^{-i\boldsymbol{k}_{\perp}\cdot\boldsymbol{r}_{\perp}}\Omega_{p/s}\left(\boldsymbol{r}_{\perp},z\right)$, and the coefficients $\bar{A}$, $\bar{B}$, $\bar{C}$, and $\bar{D}$ are Doppler averaged according to Eq. (12). We notice that Eqs. (13) are identical to Eqs. (9) with $k_{\Delta}=0$ and with the substitutions $\bar{A}\rightarrow\bar{A}-ik_{\perp}^{2}/(2q)$ and $\bar{D}\rightarrow\bar{D}+ik_{\perp}^{2}/(2q)$. The evolution of the probe and Stokes fields then follows from Eq. (10), $$\displaystyle\frac{\Omega_{p}\left(\boldsymbol{k}_{\perp},z\right)}{\Omega_{p}\left(\boldsymbol{k}_{\perp},0\right)}$$ $$\displaystyle=\frac{\bar{A}-ik_{\perp}^{2}/q-\lambda_{2}}{\lambda_{1}-\lambda_{2}}e^{\lambda_{1}z}-\frac{\bar{A}-ik_{\perp}^{2}/q-\lambda_{1}}{\lambda_{1}-\lambda_{2}}e^{\lambda_{2}z},$$ (14a) $$\displaystyle\frac{\Omega_{s}\left(\boldsymbol{k}_{\perp},z\right)}{\Omega_{p}\left(\boldsymbol{k}_{\perp},0\right)}$$ $$\displaystyle=\frac{-\bar{C}}{\lambda_{1}-\lambda_{2}}(e^{\lambda_{1}z}-e^{\lambda_{2}z}),$$ (14b) where $$\lambda_{1,2}=\frac{\bar{A}+\bar{D}}{2}\pm\frac{1}{2}\sqrt{\left(\bar{A}-\bar{D}-i\frac{k_{\perp}^{2}}{q}\right)^{2}+4\bar{B}\bar{C}}.$$ (15) We choose $|e^{\lambda_{2}z}|\gg|e^{\lambda_{1}z}|$ and obtain $\Omega_{p}\left(\boldsymbol{k}_{\perp},z\right)=\Omega_{p}\left(\boldsymbol{k}_{\perp},0\right)e^{Z}$, where $$Z\equiv\lambda_{2}z+\log\left(\frac{\bar{A}-ik_{\perp}^{2}/q-\lambda_{1}}{\lambda_{2}-\lambda_{1}}\right)$$ (16) determines the changes in the spatial shape of the probe along its propagation. $\operatorname{Re}Z$ is responsible for the $k_{\perp}$-dependency of the gain/absorption, and $\operatorname{Im}Z$ is responsible for the $k_{\perp}$-dependency of the phase accumulation, that is, the diffraction-like evolution. IV.2 Suppression of paraxial diffraction In general, in order to minimize the distortion of the probe beam, one is required to minimize both the real and imaginary $k_{\perp}$-dependencies of $Z$. To better understand the behavior of $Z$, we expand it in orders of $k_{\perp}^{2}$. 
Taking the limit $$k_{\perp}^{2}\ll k_{0}^{2}=\min\left(\frac{2qE^{2}}{\bar{A}-\bar{D}},2qE\right),$$ (17) where $2E=[\left(\bar{A}-\bar{D}\right)^{2}+4\bar{B}\bar{C}]^{1/2}$, we write $$Z\thickapprox Z^{(0)}+\frac{k_{\perp}^{2}}{2q}Z^{(2)}+O\left(k_{\perp}^{4}\right)$$ (18) and find $$\displaystyle Z^{(0)}$$ $$\displaystyle\thickapprox\left(\frac{\bar{A}+\bar{D}}{2}-E\right)z$$ $$\displaystyle Z^{(2)}$$ $$\displaystyle=i\left(\frac{\bar{A}-\bar{D}}{2E}z+\frac{\bar{A}-\bar{D}}{2E^{2}}-\frac{1}{E}\right).$$ (19) The $k_{\perp}$-dependency, governed by $Z^{(2)}$, can be controlled through the FWM coefficients $\bar{A}$, $\bar{B}$, $\bar{C}$, and $\bar{D}$ given in Eq. (8), by manipulating the frequencies of the probe and control fields ($\omega_{p},\omega_{c}$), the control amplitude $\Omega_{c}$, and the density $N$. We demonstrate this procedure in Fig. 3, using as an example the experimental conditions of the sodium experiment detailed in Fig. 2. First, we observe the gain of the probe and the Stokes fields in Figs. 3a and 3b, as a function of the one-photon ($\Delta_{1p}$) and two-photon ($\Delta_{2p}$) detunings. The gain is achieved around the two-photon resonance ($\Delta_{2p}\thickapprox 0$), either when the probe is at the one-photon resonance ($\Delta_{1p}\thickapprox 0$) or the Stokes is ($\Delta_{1p}\thickapprox\omega_{lr}$, here $\thickapprox 2$ GHz); the latter exhibits higher gain, since the probe sits outside its own absorption line. The real and imaginary parts of $Z^{(2)}$ are plotted in Figs. 3c and 3d. When $\operatorname{Re}Z^{(2)}=0$ (dashed line), the gain/absorption is not $k_{\perp}$-dependent, whereas when $\operatorname{Im}Z^{(2)}=0$ (solid line), the phase accumulation along the cell is not $k_{\perp}$-dependent. When both happen, $Z^{(2)}=0$, and a probe with a spectrum confined within the resolution limit $k_{\perp}\ll k_{0}$ propagates without distortion. 
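The statement that the $k_{\perp}$-dependence of $Z$ is quadratic at leading order can be probed directly. The sketch below (with arbitrary illustrative coefficients, not the experimental values) evaluates the exact exponent of Eqs. (15)-(16) and checks that $Z(k_{\perp})-Z(0)$ scales as $k_{\perp}^{2}$ for $k_{\perp}\ll k_{0}$, i.e., that two small-$k_{\perp}$ estimates of the quadratic coefficient agree:

```python
import numpy as np

# Illustrative Doppler-averaged coefficients and geometry (arbitrary values).
A = -0.10 + 0.20j
B = 0.20j
C = 0.15
D = -0.30 - 0.10j
q = 1.0e4   # carrier wavenumber
z = 10.0    # propagation distance

def Z(k_perp):
    """Exact propagation exponent, Eqs. (15)-(16)."""
    E = 0.5 * np.sqrt((A - D - 1j * k_perp**2 / q) ** 2 + 4.0 * B * C)
    lam1 = (A + D) / 2.0 + E
    lam2 = (A + D) / 2.0 - E
    return lam2 * z + np.log((A - 1j * k_perp**2 / q - lam1) / (lam2 - lam1))

# (Z(k) - Z(0)) / k^2 should approach a constant as k -> 0,
# i.e., the leading k-dependence of Z is quadratic, as in Eq. (18).
c1 = (Z(0.2) - Z(0.0)) / 0.2**2
c2 = (Z(0.1) - Z(0.0)) / 0.1**2
print(abs(c1 - c2) / abs(c1))  # small: the two estimates agree
```

In the same spirit, scanning the physical parameters until this quadratic coefficient vanishes is the numerical counterpart of locating the $Z^{(2)}=0$ point in Figs. 3c and 3d.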
The exact propagation exponent $Z$ as a function of $k_{\perp}$ for the point $Z^{(2)}=0$ ($\Delta_{2p}\thickapprox 0.4$ MHz, $\Delta_{1p}\thickapprox 0$) is plotted in Fig. 3e. As expected, both the real (blue solid line) and imaginary (red dashed line) parts of $Z$ are constant for $k_{\perp}\ll k_{0}$ (deviation of 1% within $k_{\perp}<k_{0}/2$ and 0.1% within $k_{\perp}<k_{0}/4$). In the specific example of Fig. 3, the probe’s gain is $\sim$1.4, the Stokes’ gain is $\sim$4, and $k_{0}\thickapprox 40\ $mm${}^{-1}$. To illustrate the achievable resolution, we shall employ a conservative definition of a characteristic feature size of the image, in area units, $a=(2\pi/k_{\perp})^{2}$ [for example, for a Gaussian beam, $a^{1/2}$ shall be twice the waist radius, and, for the field pattern $E=1+\cos(k_{\perp}x)\cos(k_{\perp}y)$, the pixel area is $a$; the Rayleigh length is $qa/8$]. Fig. 4 presents numerical calculations of Eqs. (14), in the conditions found above, for a probe beam in the shape of the symbol (R) with features of $a\approx 0.025$ mm${}^{2}$ (corresponding to $k_{\perp}=k_{0}=40$ mm${}^{-1}$). The propagation distance is $L=45$ mm, equivalent to $\lesssim 2$ Rayleigh distances, as is evident from the substantial free-space diffraction. Indeed, when $Z^{(2)}=0$, the FWM medium dramatically reduces the distortion of the image due to diffraction. Note that the image spectrum (black dashed-dotted line) lies barely within the resolution limit and that the Stokes distortion due to diffraction is also reduced. Direct numerical solutions of Eqs. (7) give exactly the same results. For the hot sodium system, the required control-field power is on the order of 100 mW for beams with a waist radius of a few mm, which is practically a plane wave on the length scale of the image. IV.3 Negative paraxial diffraction Another interesting application of diffraction manipulation is imaging by negative diffraction, similar to the one proposed in Ref. Firstenberg et al. (2009b). Using the same tools as above, one can find the conditions for the reversal of paraxial diffraction, namely when $\operatorname{Re}Z^{(2)}$ vanishes and $\operatorname{Im}Z^{(2)}=1$ (free-space diffraction is equivalent to $Z^{(2)}=-i$). At these conditions, as demonstrated in Fig. 5, the FWM medium of length $L$ focuses the radiation from a point source at a distance $u<L$ to a distance $v$ behind the cell, where $u+v=L$. The mechanism is simple: each $k_{\perp}$ component of the probe accumulates outside the cell the phase $-ik_{\perp}^{2}\left(u+v\right)/(2q)=-ik_{\perp}^{2}L/(2q)$ and inside the cell the phase $ik_{\perp}^{2}L/(2q)$, summing up to zero phase accumulation. The probe image thus ’revives’, with some additional gain, at the exit face of the cell. V Conclusions and discussion We have suggested a scheme utilizing the $k_{\perp}$-dependency of the four-wave mixing process for manipulating diffraction during propagation. The inherent gain of the FWM process allows one to take advantage of high optical depths while avoiding absorption and, by that, achieving higher resolution than with previous EIT-based schemes Firstenberg et al. (2009a, b). As opposed to a recent proposal incorporating FWM Zhang and Evers (2013), our scheme does not require atomic motion and is expected to work even more efficiently in its absence. We have introduced a microscopic model for the FWM process, based on Liouville-Maxwell equations and incorporating Doppler broadening, and verified it against recent experimental results. We have delineated the conditions under which, according to the model, the FWM process suppresses the paraxial diffraction. We have also demonstrated the flexibility of the scheme: beyond suppressing the regular diffraction, it can reverse it, yielding an imaging effect while introducing gain. Our proposal was designed with the experimental limitations in mind, and its demonstration should be feasible in many existing setups. 
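The imaging condition of Sec. IV.3 can be illustrated with a short Fourier-optics sketch (hypothetical beam parameters; 1D transverse cut): free-space propagation over $u+v$ multiplies each $k_{\perp}$ component by $e^{-ik_{\perp}^{2}(u+v)/(2q)}$, the cell contributes $e^{+ik_{\perp}^{2}L/(2q)}$, and for $u+v=L$ the two cancel, so the input field revives (up to the uniform gain, omitted here):

```python
import numpy as np

q = 1.0e4          # carrier wavenumber (illustrative)
L = 45.0           # cell length; imaging condition u + v = L
u, v = 20.0, 25.0

# A 1D test image (sum of Gaussians) and its transverse spectrum.
x = np.linspace(-5.0, 5.0, 512)
field0 = np.exp(-(x - 1.0) ** 2) + 0.5 * np.exp(-(x + 2.0) ** 2 / 0.5)
spec = np.fft.fft(field0)
k = 2.0 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])

# Free-space paraxial phase over u + v, negative-diffraction phase over L.
free_space = np.exp(-1j * k**2 * (u + v) / (2.0 * q))
cell = np.exp(+1j * k**2 * L / (2.0 * q))
field_out = np.fft.ifft(spec * free_space * cell)

# With u + v = L the phases cancel and the image revives.
print(np.max(np.abs(field_out - field0)))  # essentially zero
```

Moving the source (changing $u$ at fixed $u+v$) leaves the revival plane at $v=L-u$ behind the cell, which is the lens-like behavior sketched in Fig. 5.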
The resolution limit $a^{-1}\propto k_{0}^{2}$ of our scheme (and thus the number of ’pixels’ $S/a$ for a given beam area $S$) is proportional to the resonant optical depth. In practice, the latter can be increased either with a higher atomic density $N$ or with narrower optical transitions. For example, using a density of $N=5\cdot 10^{12}$ cm${}^{-3}$, 10 times higher than in the sodium setup of Ref. Harada et al. (2008), the limiting feature size would be $250$ $\mu$m${}^{2}$ ($k_{0}\thickapprox 125$ mm${}^{-1}$). As long as $NL=\mathrm{const}$, the other parameters required for the suppression of diffraction remain the same. At the same time, avoiding Doppler broadening by utilizing cold atoms or perhaps solids would substantially increase the resolution limit. Assuming cold atoms with practically no Doppler broadening (and a ground-state relaxation rate $\Gamma_{lr}=100$ Hz), the same limiting feature of $250$ $\mu$m${}^{2}$ can be obtained at a reasonable density of $10^{12}$ cm${}^{-3}$. Finally, we note that the best conditions for the suppression of diffraction are not always achieved by optimizing $Z^{(2)}$ alone (first order in $k_{\perp}^{2}/k_{0}^{2}$); in some cases, one can improve significantly by working with higher orders. As demonstrated in Fig. 6, combining the aforementioned methods for resolution enhancement with $N=10^{12}$ cm${}^{-3}$ cold atoms, a resolution-limited feature size down to about $100$ $\mu$m${}^{2}$ with unity gain can be achieved. Going beyond this resolution, towards the $1{-}10$ $\mu$m${}^{2}$ scale for applications in microscopy or lithography, would require further work aimed at lifting the paraxial assumption. The FWM process conserves quantum coherence at the level of single photons, as was previously shown theoretically Glorieux et al. (2010) and experimentally Boyer et al. (2008) by measuring the spatial coherence (correlation) between the outgoing probe and Stokes beams. 
An intriguing extension of our work would thus be the generalization of the scheme to the single-photon regime. Specifically, the main limitation in the experiment of Ref. Boyer et al. (2008) was the trade-off between focusing the beams to the smallest spot possible and keeping the ’image’ from diffracting throughout the medium. Our scheme could circumvent this trade-off by maintaining the fine features of the image over much larger distances. In addition, our scheme can be utilized in optical trapping experiments for the production of traps that are long along the axial direction and tight in the transverse direction. An intricate transverse pattern can be engineered, for example, a thin cylindrical shell or a 2D array of narrow wires, that will extend over a large axial distance to allow for high optical depths. This can be further extended by modulating the control fields along the axial direction, such that the probe (and Stokes) diffracts in the absence of the controls and ’anti-diffracts’ in their presence. In this arrangement, traps that are non-uniform in the axial direction can be designed. Acknowledgements. We thank O. Peleg and J. Evers for helpful discussions. Appendix A Steady-state solution Assuming control fields constant in time and space and much stronger than the probe and Stokes fields, the steady-state solution of Eqs. (3)-(5) can be approximated to lowest orders in the weak fields as $R_{\alpha,\alpha^{\prime}}\simeq R_{\alpha,\alpha^{\prime}}^{(0)}+R_{\alpha,\alpha^{\prime}}^{(1)}$, where $R_{\alpha,\alpha^{\prime}}^{(0)}$ is the zero-order and $R_{\alpha,\alpha^{\prime}}^{(1)}$ the first-order steady-state solution. 
We find $R_{\alpha,\alpha^{\prime}}^{(0)}$ from the zero-order equations of motion $$\displaystyle\frac{\partial}{\partial t}R_{l,l}^{(0)}$$ $$\displaystyle=-2\operatorname{Im}(\Omega_{c}^{\ast}R_{u,l}^{(0)})+\Gamma R_{u,u}^{(0)}+\Gamma_{l\leftrightarrow r}(R_{r,r}^{(0)}-R_{l,l}^{(0)})$$ $$\displaystyle\frac{\partial}{\partial t}R_{r,r}^{(0)}$$ $$\displaystyle=-2\operatorname{Im}(\Omega_{c}^{\ast}R_{u,r}^{(0)})+\Gamma R_{u,u}^{(0)}-\Gamma_{l\leftrightarrow r}(R_{r,r}^{(0)}-R_{l,l}^{(0)})$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,u}^{(0)}$$ $$\displaystyle=2\operatorname{Im}(\Omega_{c}^{\ast}R_{u,l}^{(0)})+2\operatorname{Im}(\Omega_{c}^{\ast}R_{u,r}^{(0)})-2\Gamma R_{u,u}^{(0)}$$ $$\displaystyle\frac{\partial}{\partial t}R_{r,l}^{(0)}$$ $$\displaystyle=i\Omega_{c}^{\ast}R_{u,l}^{(0)}-i\Omega_{c}R_{u,r}^{\ast(0)}-i\omega_{lr}R_{r,l}^{(0)}-\Gamma_{lr}R_{r,l}^{(0)}$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,l}^{(0)}$$ $$\displaystyle=-i\Omega_{c}(R_{u,u}^{(0)}-R_{l,l}^{(0)})+i\Omega_{c}R_{r,l}^{(0)}+\gamma_{cl}R_{u,l}^{(0)}$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,r}^{(0)}$$ $$\displaystyle=-i\Omega_{c}(R_{u,u}^{(0)}-R_{r,r}^{(0)})+i\Omega_{c}R_{r,l}^{\ast(0)}+\gamma_{cr}R_{u,r}^{(0)}$$ (20) by solving $(\partial/\partial t)R_{\alpha,\alpha^{\prime}}^{(0)}=0$. 
Under the assumption $|\Omega_{c}/\omega_{lr}|\ll 1,$ thus $R_{r,l}^{(0)}=0$, we obtain for the other elements $$\displaystyle R_{l,l}^{(0)}$$ $$\displaystyle=\frac{2A_{r}A_{l}+\Gamma A_{r}+\Gamma_{l\leftrightarrow r}(A_{l}+A_{r}+\Gamma)}{X}$$ $$\displaystyle R_{r,r}^{(0)}$$ $$\displaystyle=\frac{2A_{r}A_{l}+\Gamma A_{l}+\Gamma_{l\leftrightarrow r}(A_{l}+A_{r}+\Gamma)}{X}$$ $$\displaystyle R_{u,u}^{(0)}$$ $$\displaystyle=\frac{2A_{r}A_{l}+\Gamma_{l\leftrightarrow r}A_{r}+\Gamma_{l\leftrightarrow r}A_{l}}{X}$$ $$\displaystyle R_{u,l}^{(0)}$$ $$\displaystyle=i\frac{\Omega_{c}}{\gamma_{cl}}\frac{\Gamma\left(\Gamma_{l\leftrightarrow r}+A_{r}\right)}{X}$$ $$\displaystyle R_{u,r}^{(0)}$$ $$\displaystyle=i\frac{\Omega_{c}}{\gamma_{cr}}\frac{\Gamma\left(\Gamma_{l\leftrightarrow r}+A_{l}\right)}{X},$$ (21) with the denominator $$\displaystyle X$$ $$\displaystyle=6A_{r}A_{l}+2\Gamma_{l\leftrightarrow r}\Gamma+$$ $$\displaystyle A_{r}\left(3\Gamma_{l\leftrightarrow r}+\Gamma\right)+A_{l}\left(3\Gamma_{l\leftrightarrow r}+\Gamma\right),$$ (22) where $A_{l/r}=\left|\Omega_{c}\right|^{2}\operatorname{Im}[(\omega_{ul/ur}-\omega_{c}-i\Gamma_{d})^{-1}]$ are the optical pumping rates. 
To find $R_{\alpha,\alpha^{\prime}}^{(1)}$, we start from the first-order equations of motion, $$\displaystyle\frac{\partial}{\partial t}R_{l,l}^{(1)}$$ $$\displaystyle=-2\operatorname{Im}\left[\Omega_{p}^{\ast}e^{-i\delta_{k}z+i\delta_{\omega}t}R_{u,l}^{(0)}+\Omega_{c}^{\ast}R_{u,l}^{(1)}\right]$$ $$\displaystyle-\left(\Gamma-\Gamma_{l\leftrightarrow r}\right)R_{r,r}^{(1)}-\left(\Gamma+\Gamma_{l\leftrightarrow r}\right)R_{l,l}^{(1)}$$ $$\displaystyle\frac{\partial}{\partial t}R_{r,r}^{(1)}$$ $$\displaystyle=-2\operatorname{Im}\left[\Omega_{s}^{\ast}e^{i\delta_{k}z-i\delta_{\omega}t}R_{u,r}^{(0)}+\Omega_{c}^{\ast}R_{u,r}^{(1)}\right]$$ $$\displaystyle-\left(\Gamma+\Gamma_{l\leftrightarrow r}\right)R_{r,r}^{(1)}-\left(\Gamma-\Gamma_{l\leftrightarrow r}\right)R_{l,l}^{(1)}$$ $$\displaystyle\frac{\partial}{\partial t}R_{r,l}^{(1)}$$ $$\displaystyle=i(\Omega_{s}^{\ast}R_{u,l}^{(0)}-\Omega_{p}R_{u,r}^{\ast(0)})e^{i\delta_{k}z-i\delta_{\omega}t}$$ $$\displaystyle+i\Omega_{c}^{\ast}R_{u,l}^{(1)}-i\Omega_{c}R_{u,r}^{\ast(1)}-i\left(\omega_{lr}-i\Gamma_{lr}\right)R_{r,l}^{(1)}$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,l}^{(1)}$$ $$\displaystyle=-i\Omega_{p}e^{i\delta_{k}z-i\delta_{\omega}t}(R_{u,u}^{(0)}-R_{l,l}^{(0)})$$ $$\displaystyle+i\Omega_{c}(R_{r,r}^{(1)}+2R_{l,l}^{(1)}+R_{r,l}^{(1)})-\gamma_{cl}R_{u,l}^{(1)}$$ $$\displaystyle\frac{\partial}{\partial t}R_{u,r}^{(1)}$$ $$\displaystyle=-i\Omega_{s}e^{-i\delta_{k}z+i\delta_{\omega}t}(R_{u,u}^{(0)}-R_{r,r}^{(0)})$$ $$\displaystyle+i\Omega_{c}(R_{l,l}^{(1)}+2R_{r,r}^{(1)}+R_{r,l}^{\ast(1)})-\gamma_{cr}R_{u,r}^{(1)}.$$ (23) Eqs. (23) are explicitly time-dependent, and we cannot directly solve for $(\partial/\partial t)R_{\alpha,\alpha^{\prime}}^{(1)}=0$. Instead, we introduce the new variables $P_{\alpha,\alpha^{\prime}}^{(1)}$ and $N_{\alpha,\alpha^{\prime}}^{(1)}$ and rewrite Eqs. 
(23) using $$R_{\alpha,\alpha^{\prime}}^{(1)}=P_{\alpha,\alpha^{\prime}}^{(1)}e^{i\left[\delta_{\omega}t-\delta_{k}z\right]}+N_{\alpha,\alpha^{\prime}}^{(1)}e^{-i\left[\delta_{\omega}t-\delta_{k}z\right]},$$ (24) eliminating the explicit dependence on time. The steady-state solution is obtained from the complete set of linear algebraic equations for the variables $P_{u,l}^{(1)}$, $P_{u,r}^{(1)}$, $P_{r,l}^{(1)}$, $N_{u,l}^{\ast(1)}$, $N_{u,r}^{\ast(1)}$, $N_{r,l}^{\ast(1)}$, $P_{l,l}^{(1)}$, and $P_{r,r}^{(1)}$, $$\displaystyle 0$$ $$\displaystyle=\Omega_{c}^{\ast}P_{u,l}^{(1)}-\Omega_{c}N_{u,l}^{\ast(1)}+\Omega_{p}^{\ast}R_{u,l}^{(0)}$$ $$\displaystyle+i\left(\Gamma-\Gamma_{l\leftrightarrow r}\right)P_{r,r}^{(1)}+\left(i\Gamma+i\Gamma_{l\leftrightarrow r}-\delta_{\omega}\right)P_{l,l}^{(1)}$$ $$\displaystyle 0$$ $$\displaystyle=\Omega_{c}^{\ast}P_{u,r}^{(1)}-\Omega_{c}N_{u,r}^{\ast(1)}-\Omega_{s}R_{u,r}^{\ast(0)}$$ $$\displaystyle+\left(i\Gamma+i\Gamma_{l\leftrightarrow r}-\delta_{\omega}\right)P_{r,r}^{(1)}+i\left(\Gamma-\Gamma_{l\leftrightarrow r}\right)P_{l,l}^{(1)}$$ $$\displaystyle 0$$ $$\displaystyle=\Omega_{c}^{\ast}P_{u,l}^{(1)}-\Omega_{c}N_{u,r}^{\ast(1)}-\left(\omega_{lr}-i\Gamma_{lr}+\delta_{\omega}\right)P_{r,l}^{(1)}$$ $$\displaystyle 0$$ $$\displaystyle=\Omega_{c}N_{u,l}^{\ast(1)}-\Omega_{c}^{\ast}P_{u,r}^{(1)}-\left(\omega_{lr}+i\Gamma_{lr}-\delta_{\omega}\right)N_{r,l}^{\ast(1)}$$ $$\displaystyle-\Omega_{p}^{\ast}R_{u,r}^{(0)}+\Omega_{s}R_{u,l}^{\ast(0)}$$ $$\displaystyle 0$$ $$\displaystyle=\Omega_{c}(P_{r,r}^{(1)}+2P_{l,l}^{(1)}+P_{r,l}^{(1)})+\left(i\gamma_{cl}+\delta_{\omega}\right)P_{u,l}^{(1)}$$ $$\displaystyle 0$$ $$\displaystyle=\Omega_{c}^{\ast}(P_{r,r}^{(1)}+2P_{l,l}^{(1)}+N_{r,l}^{\ast(1)})+\left(i\gamma_{cl}^{\ast}+\delta_{\omega}\right)N_{u,l}^{\ast(1)}$$ $$\displaystyle-\Omega_{p,l}^{\ast}(R_{u,u}^{(0)}-R_{l,l}^{(0)})$$ $$\displaystyle 0$$ 
$$\displaystyle=\Omega_{c}^{\ast}(P_{l,l}^{(1)}+2P_{r,r}^{(1)}+P_{r,l}^{(1)})-(i\gamma_{cr}^{\ast}-\delta_{\omega})N_{u,r}^{\ast(1)}$$ $$\displaystyle 0$$ $$\displaystyle=\Omega_{c}(P_{l,l}^{(1)}+2P_{r,r}^{(1)}+N_{r,l}^{\ast(1)})-(i\gamma_{cr}-\delta_{\omega})P_{u,r}^{(1)}$$ $$\displaystyle+\Omega_{s,r}(R_{u,u}^{(0)}-R_{r,r}^{(0)}).$$ (25) The exact solution of Eqs. (25) is easily obtained but is unmanageable and bears no physical intuition. Rather, we derive an approximate solution under the following assumptions: 1. The control and probe frequencies are near two-photon resonance, $|\Delta_{2p}|=|\delta_{\omega}-\omega_{lr}|\ll\omega_{lr}$. 2. The ground-state population relaxation is much slower than the excited-to-ground relaxation, $\Gamma_{r\leftrightarrow l}\ll\Gamma$. 3. The optical pumping is much slower than the ground-state frequency difference, $\Omega_{c}^{2}/\Gamma\ll\omega_{lr}$. Under these assumptions, and taking the control Rabi frequency to be real, $\Omega_{c}=\Omega_{c}^{\ast}$, we solve Eqs. (25) and obtain the coherences relevant for the evolution of the probe and the Stokes [Eqs. (5)], $$\displaystyle iN_{u,l}^{(1)}$$ $$\displaystyle=\left(\frac{n_{l}}{\gamma_{pl}}+\frac{n_{r}}{\gamma_{cr}}-\frac{n_{l}/\gamma_{pl}+n_{r}/\gamma_{cr}^{\ast}}{\gamma_{pl}\gamma_{0}}\Omega_{c}^{2}\right)\Omega_{p}$$ $$\displaystyle+\frac{n_{r}/\gamma_{sr}^{\ast}+n_{l}/\gamma_{cl}}{\gamma_{pl}\gamma_{0}}\Omega_{c}^{2}\Omega_{s}^{\ast},$$ (26) $$\displaystyle iP_{u,r}^{(1)}$$ $$\displaystyle=\left(\frac{n_{r}}{\gamma_{sr}}+\frac{n_{l}}{\gamma_{cl}}-\frac{n_{r}/\gamma_{sr}+n_{l}/\gamma_{cl}^{\ast}}{\gamma_{sr}\gamma_{0}^{\ast}}\Omega_{c}^{2}\right)\Omega_{s}$$ $$\displaystyle+\frac{n_{l}/\gamma_{pl}^{\ast}+n_{r}/\gamma_{cr}}{\gamma_{sr}\gamma_{0}^{\ast}}\Omega_{c}^{2}\Omega_{p}^{\ast}.$$ (27) References Durnin (1987) J. Durnin, J. Opt. Soc. Am. A 4, 651 (1987). Kapoor and Agarwal (2000) R. Kapoor and G. S. Agarwal, Phys. Rev. A 61, 053818 (2000). 
Truscott et al. (1999) A. G. Truscott, M. E. J. Friese, N. R. Heckenberg, and H. Rubinsztein-Dunlop, Phys. Rev. Lett. 82, 1438 (1999). Cheng and Han (2007) J. Cheng and S. Han, Opt. Lett. 32, 1162 (2007). Firstenberg et al. (2009a) O. Firstenberg, M. Shuker, N. Davidson, and A. Ron, Phys. Rev. Lett. 102, 043601 (2009a). Firstenberg et al. (2009b) O. Firstenberg, P. London, M. Shuker, A. Ron, and N. Davidson, Nat. Phys. 5, 665 (2009b). Fleischhauer et al. (2005) M. Fleischhauer, A. Imamoglu, and J. P. Marangos, Rev. Mod. Phys. 77, 633 (2005). Moseley et al. (1995) R. R. Moseley, S. Shepherd, D. J. Fulton, B. D. Sinclair, and M. H. Dunn, Phys. Rev. Lett. 74, 670 (1995). Mitsunaga et al. (2000) M. Mitsunaga, M. Yamashita, and H. Inoue, Phys. Rev. A 62, 013817 (2000). Bortman-Arbiv et al. (1998) D. Bortman-Arbiv, A. D. Wilson-Gordon, and H. Friedmann, Phys. Rev. A 58, R3403 (1998). Andre et al. (2005) A. Andre, M. Bajcsy, A. S. Zibrov, and M. D. Lukin, Phys. Rev. Lett. 94, 063902 (2005). Cheng et al. (2005) J. Cheng, S. Han, and Y. Yan, Phys. Rev. A 72, 021801(R) (2005). Vengalattore and Prentiss (2005) M. Vengalattore and M. Prentiss, Phys. Rev. Lett. 95, 243601 (2005). Tarhan et al. (2007) D. Tarhan, N. Postacioglu, and Özgür E. Müstecaplioglu, Opt. Lett. 32, 1038 (2007). Hong (2003) T. Hong, Phys. Rev. Lett. 90, 183901 (2003). Friedler et al. (2005) I. Friedler, G. Kurizki, O. Cohen, and M. Segev, Opt. Lett. 30, 3374 (2005). Gómez-Reino and Larrea (1982) C. Gómez-Reino and E. Larrea, Appl. Opt. 21, 4271 (1982). Firstenberg et al. (2007) O. Firstenberg, M. Shuker, A. Ben-Kish, D. R. Fredkin, N. Davidson, and A. Ron, Phys. Rev. A 76, 013818 (2007). Firstenberg et al. (2008) O. Firstenberg, M. Shuker, R. Pugatch, D. R. Fredkin, N. Davidson, and A. Ron, Phys. Rev. A 77, 043830 (2008). Zhang and Evers (2013) L. Zhang and J. Evers, arXiv:1309.0615 (2013). Chiao et al. (1966) R. Y. Chiao, P. L. Kelley, and E. Garmire, Phys. Rev. Lett. 17, 1158 (1966). Carman et al. 
(1966) R. L. Carman, R. Y. Chiao, and P. L. Kelley, Phys. Rev. Lett. 17, 1281 (1966). Boyd et al. (1981) R. W. Boyd, M. G. Raymer, P. Narum, and D. J. Harter, Phys. Rev. A 24, 411 (1981). Harada et al. (2008) K.-i. Harada, K. Mori, J. Okuma, N. Hayashi, and M. Mitsunaga, Phys. Rev. A 78, 013809 (2008). McCormick et al. (2007) C. F. McCormick, A. M. Marino, V. Boyer, and P. D. Lett, arXiv:quant-ph/0703111v1 (2007). Vaselago (1968) V. G. Vaselago, Sov. Phys. Usp. 10, 509 (1968). Happer and Mathur (1967) W. Happer and B. S. Mathur, Phys. Rev. 163, 12 (1967). Glorieux et al. (2010) Q. Glorieux, R. Dubessy, S. Guibal, L. Guidoni, J.-P. Likforman, T. Coudreau, and E. Arimondo, Phys. Rev. A 82, 033819 (2010). Boyer et al. (2008) V. Boyer, A. M. Marino, R. C. Pooser, and P. D. Lett, Science 321, 544 (2008).
Abstract We present a set of log-price integrated variance estimators, equal to the sum of open-high-low-close bridge estimators of spot variances within $n$ subsequent time-step intervals. The main characteristic of some of the introduced estimators is that they take into account the information on the occurrence times of the high and low values. The use of the high's and low's of the bridge associated with the original process makes the estimators significantly more efficient than the standard realized variance estimator and its generalizations. Adding the information on the occurrence times of the high and low values further improves the efficiency of the estimators, well above that of the well-known realized variance estimator and of those derived from the sum of Garman and Klass spot variance estimators. Exact analytical results are derived for the case where the underlying log-price process is an Itô stochastic process. Our results suggest more efficient ways to record financial prices at intermediate frequencies. Time-Bridge Estimators of Integrated Variance A. Saichev${}^{1,3}$, D. Sornette${}^{1,2}$ ${}^{1}$ETH Zurich – Department of Management, Technology and Economics, Switzerland ${}^{2}$Swiss Finance Institute, 40, Boulevard du Pont-d'Arve, Case Postale 3, 1211 Geneva 4, Switzerland ${}^{3}$Nizhni Novgorod State University – Department of Mathematics, Russia. E-mail addresses: [email protected] & [email protected] Time-Bridge Variance Estimators Didier Sornette Department of Management, Technology and Economics (D-MTEC, KPL F38.2) ETH Zurich Kreuzplatz 5 CH-8032 Zurich Switzerland 1 Introduction The integrated variance is a crucial risk indicator of the stochastic log-price process within specific time intervals. Most of the existing high-frequency integrated variance estimators are modifications of the well-known realized volatility (see, for instance, Andersen et al. (2003), Aït-Sahalia (2005), Zhang et al. 
(2005)), and are based on the knowledge of the open and close prices of $n$ time-step intervals dividing the whole time interval of interest. Another common practice to estimate the variance of a log-price process is to use not two (open-close) log-prices within a given time-step, but four values, the so-called open-high-low-close (OHLC) log-prices. Well-known examples are the Garman and Klass (G&K) (1980) and Parkinson (Park) (1980) spot variance estimators. The main goal of this paper is to demonstrate the efficiency of bridge OHLC integrated variance estimators, which use the knowledge of the high and low values of the bridge process derived from the original log-price process, as well as possibly the random occurrence times of these extrema within each time-step interval. We compare the efficiencies of these time-OHLC bridge estimators with the efficiency of the standard realized variance and with the efficiency of the integrated variance estimators based on the G&K estimators of the variance within each elementary time-step interval. We show that some time-OHLC integrated variance estimators achieve a very significant improvement in efficiency compared with the realized variance and the G&K integrated variance estimators. Another remarkable property of the proposed time-OHLC bridge estimators is that they depend much less on the drift of the log-price process than the realized variance and G&K integrated variance estimators. This has the great advantage of essentially removing the biases that affect the standard estimators, given that the drift (expected return) is in general the most poorly constrained statistical variable. We compare the efficiencies of the introduced integrated variance estimators using the Itô process as our workhorse to model the stochastic behavior of log-prices. Present databases record either all prices associated with transactions or prune the data to keep the OHLC at given time steps, for instance, seconds, minutes or days. 
The latter records, giving the OHLC of the realized log-prices, do not allow the reconstruction of the OHLC (and even less the occurrence times of the high's and low's) for the associated bridge process in each elementary interval. Of course, one could construct the OHLC and any other useful information from the full time series of all transaction prices. But then, one could question the value of deriving new estimators based on a reduced information set. Therefore, the present paper can be considered as a normative exercise to learn about the fundamental limits of integrated variance estimators. Our results are also useful in suggesting more efficient ways to record financial prices at intermediate frequencies: instead of recording the OHLC at the daily scale, for instance, we propose that data centers and vendors should store the open and close of the real log-price and the high and low of the corresponding bridge in each day (or in any other chosen frequency). Our calculations below show that this information, which has the same cost and is as easy to obtain at the end of the day from the high frequency data, provides much more efficient estimators of the variance that can be stored for future use. The same conclusion holds true for other risk measures beyond variance, such as higher order moments, but this is not explored in the present paper. The paper is organized as follows. Section 2 describes the properties of the well-known realized variance estimator, which we need in order to compare its efficiency with the efficiencies of the suggested time-OHLC bridge integrated variance estimators. Section 3 is devoted to the discussion of the efficiencies of the simple bridge integrated variance estimators, illustrating the comparative efficiency and unbiasedness of the bridge integrated variance estimators. This section, written in a pedagogical style, gradually introduces the reader to the area of homogeneous most-efficient variance estimators. 
Section 4 provides a detailed analysis of the efficiency of the OHL and time-OHLC bridge integrated variance estimators, which turn out to be significantly more efficient than the realized variance and the G&K integrated variance estimators. Section 5 describes the results of numerical simulations demonstrating the comparative efficiency of the proposed estimators. Section 6 concludes. The paper is completed by three appendices. Appendix A presents the essential properties of the canonical bridge. Appendix B derives the joint probability density function (pdf) of the high value and of its occurrence time. Appendix C derives the joint distribution of the high and low values and of the occurrence time of the last extremum for the canonical bridge, and gives its statistical properties. 2 Realized variance and beyond Henceforth, we assume that the log-price $X(t)$ of a given security follows an Itô process $$dX(t)=\mu(t)dt+\sigma(t)dW(t),\qquad X(0)=X_{0},$$ (1) where $W(t)$ is a realization of the standard Wiener process, while $\mu(t)$ is the drift process, and $\sigma^{2}(t)$ is the instantaneous variance of the log-price process $X(t)$. 2.1 Definitions and basic properties of realized variance Let us provide first some basic definitions and properties. 
Definition 1 The integrated variance of the process $X(t)$ within the time interval $t\in(0,T)$ is $$D(T):=\int_{0}^{T}\sigma^{2}(t)dt~{}.$$ (2) Definition 2 The spot variance is defined within the time-step interval $$\mathbb{S}_{i}:~{}(t_{i-1},t_{i}]~{},$$ (3) by $$\hat{D}_{\text{real}}\{X(t):t\in\mathbb{S}_{i}\}:=(X_{i}-X_{i-1})^{2},~{}~{}X_{i}:=X(t_{i}),~{}~{}t_{i}:=i\Delta~{},~{}~{}\Delta={T\over n}~{}.$$ (4) Definition 3 The well-known statistical estimator of the integrated variance is the so-called realized variance defined as $$[X,X]_{T}:=\sum_{i=1}^{n}\hat{D}_{\text{real}}\{X(t):t\in\mathbb{S}_{i}\}~{}.$$ (5) Remark 1 For Itô processes (1) and for $n\to\infty$, it is well-known that the realized variance converges in probability to the integrated one. However, for real data, the number $n$ of available data points is always limited, ultimately by the discreteness of the transaction flow and the associated microstructure noise. Such structures, which are not taken into account in the Itô log-price model, can be neglected in the use of the realized variance estimator if the discrete time step $\Delta$ is much larger than the inverse of the mean frequency $\nu$ of the tick-by-tick transactions, so that $n\ll\nu T$. 
Assumption 1 While $\Delta\gg 1/\nu$, we assume that $\Delta$ is sufficiently small in comparison with the time scales over which the drift process $\mu(t)$ and the instantaneous variance $\sigma^{2}(t)$ vary, so that one may replace the original Itô process (1) by Wiener processes with drift $$\begin{array}[]{c}dX^{i}(t)\simeq\mu_{i}dt+\sigma_{i}dW(t),\qquad X^{i}(t_{i-1})=X_{i-1},\qquad t\in\mathbb{S}_{i},\\ \mu_{i}=\text{const},\qquad\sigma_{i}=\text{const}~{}.\end{array}$$ (6) Consider the special case of the Wiener process with drift $$X(t,\mu,\sigma)=\mu t+\sigma W(t)~{}.$$ (7) Using the scale-invariance property of the Wiener process, the following identity holds in law (represented by the symbol $\sim$) $$\hat{D}_{\text{real}}\{X(t,\mu,\sigma):t\in\mathbb{S}_{i}\}\sim\sigma^{2}\Delta[\gamma+W(1)]^{2}=\sigma^{2}\Delta\cdot X^{2}(1;\gamma)~{},$$ (8) where $$X(t;\gamma)=\gamma t+W(t),\qquad\gamma={\mu\over\sigma}\sqrt{\Delta},\qquad t\in(0,1)~{},$$ (9) is the canonical Wiener process with drift. Applying the identity in law (8) to the realized variance expressions (5), (4), we obtain $$[X,X]_{T}\sim\Delta\sum_{i=1}^{n}\sigma_{i}^{2}(\gamma_{i}+W_{i})^{2}~{},$$ (10) where $\{W_{i}\}$ are iid Gaussian variables $\mathcal{N}(0,1)$. Accordingly, the expected value of the realized variance is $$\text{E}\left[[X,X]_{T}\right]=\Delta\sum_{i=1}^{n}\sigma_{i}^{2}(1+\gamma_{i}^{2}),\qquad\gamma_{i}={\mu_{i}\over\sigma_{i}}\sqrt{\Delta}~{}.$$ (11) This recovers the well-known fact that the realized variance is in general biased for non-zero drift, and is non-biased only for zero-drift ($\mu(t)\equiv 0$). 
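The bias factor $(1+\gamma_i^{2})$ in Eq. (11) is easy to check by simulation. The following Python sketch (our own illustration; the values of $\mu$, $\sigma$ and $n$ are arbitrary choices, not taken from the paper) simulates Wiener increments with constant drift and compares the sample mean of the realized variance with Eq. (11):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo sketch of the bias formula (11) for the realized variance;
# mu, sigma, n are arbitrary illustrative choices.
T, n = 1.0, 50
dt = T / n
mu, sigma = 2.0, 0.5                      # constant drift and volatility
gamma = mu / sigma * np.sqrt(dt)          # canonical drift, Eq. (9)

n_paths = 200_000
# per-step increments of X: mu*dt + sigma*sqrt(dt)*N(0,1)
dX = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n))
realized_var = (dX ** 2).sum(axis=1)      # realized variance, Eq. (5)

expected = dt * n * sigma ** 2 * (1.0 + gamma ** 2)   # Eq. (11), equal sigma_i
print(realized_var.mean(), expected)      # both exceed D(T) = sigma^2 * T
```

For these values the true integrated variance is $\sigma^{2}T=0.25$, while both the simulated and the predicted expectations are inflated by the factor $1+\gamma^{2}\approx 1.32$.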
2.2 Beyond realized variance with new estimated variance estimators $\hat{D}_{\text{est}}(T)$ The essential idea of the present work is that it is possible to improve on the realized variance estimator of the integrated variance, for a fixed number $n\ll\nu T$ of time-steps with durations $\Delta$, by replacing it by $$\hat{D}_{\text{est}}(T)=\sum_{i=1}^{n}\hat{D}_{\text{est}}\{X(t):t\in\mathbb{S}_{i}\}~{},$$ (12) where the functional $\hat{D}_{\text{est}}\{X(t):t\in\mathbb{S}_{i}\}$ is an improved estimator of the spot variance given by definition 2. The subscript est is used to refer to some particular estimator, and the subscript real means that this estimator reduces to the realized variance estimator. Definition 4 The estimator $\hat{D}_{\text{est}}(T)$ defined by (12) is said to be unbiased if, for all intervals $i=1,...,n$, $$\text{E}\left[\hat{D}_{\text{est}}\{X(t):t\in\mathbb{S}_{i}\}\right]=\Delta\cdot\sigma_{i}^{2}~{},$$ (13) which implies $$\text{E}\left[\hat{D}_{\text{est}}(T)\right]=\Delta\sum_{i=1}^{n}\sigma^{2}_{i}~{}.$$ (14) When there exists at least one interval $j$ such that condition (13) does not hold, the estimator is considered biased. 2.3 Estimator efficiency Let $\hat{D}_{\text{est}}(T)$ be some unbiased variance estimator. 
We propose to quantify its efficiency in terms of the coefficient of variation $$\rho[\hat{D}_{\text{est}}(T)]={\sqrt{\text{Var}[\hat{D}_{\text{est}}(T)]}\over\text{E}[\hat{D}_{\text{est}}(T)]}.$$ (15) As an illustration, the coefficient of variation of the realized variance for a Wiener process with zero drift ($\mu(t)\equiv 0$) is equal to $$\rho\left[[X,X]_{T}\right]=\displaystyle\sqrt{2\sum_{i=1}^{n}\sigma_{i}^{4}}\Bigg{/}\displaystyle\sum_{i=1}^{n}\sigma_{i}^{2}.$$ (16) We will need the following theorem: Theorem 2.1 The lower bound of the function $$f(\boldsymbol{s}):=\sqrt{\sum_{i=1}^{n}s_{i}^{2}}\Big{/}\displaystyle\sum_{i=1}^{n}s_{i},\qquad\boldsymbol{s}=\{s_{1},s_{2},\dots,s_{n}\},\qquad\forall s_{i}>0$$ (17) is equal to $$\rho(n):=\inf_{\forall s_{i}>0}f(\boldsymbol{s})={1\over\sqrt{n}}~{}.$$ (18) This lower bound is attained iff all $s_{i}$ are identical: $s_{i}\equiv s>0$. Proof. Let $\{s_{i}\}$ be a realization of some random variable $S$ with probabilities $\Pr\{S=s_{i}\}={1\over n},~{}i=1,\dots,n$. The expected and mean square values of the random variable $S$ are equal to $$\text{E}\left[S\right]={1\over n}\sum_{i=1}^{n}s_{i},\qquad\text{E}\left[S^{2}\right]={1\over n}\sum_{i=1}^{n}s^{2}_{i}~{}.$$ (19) Since, for any random variable $S$, the inequality $\sqrt{\text{E}\left[S^{2}\right]}\geqslant\text{E}\left[S\right]$ holds, this implies $f(\boldsymbol{s})\geqslant{1\over\sqrt{n}}$. The inequality becomes an equality iff all $s_{i}\equiv s$ for $\forall s>0$. $\blacksquare$ Applying this theorem to the right-hand-side of expression (16) shows that $\rho\left[[X,X]_{T}\right]$ satisfies the inequality $$\rho\left[[X,X]_{T}\right]\geqslant\rho_{\text{real}}(n),\qquad\rho_{\text{real}}(n)=\sqrt{{2\over n}}~{},$$ (20) where the lower bound $\rho_{\text{real}}(n)$ of the efficiency is attained only if all $\{\sigma_{i}\}$ are identical. 
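Theorem 2.1 can also be illustrated numerically. The sketch below (our own; the vector length and sampling ranges are arbitrary) draws random positive vectors and checks both the bound $f(\boldsymbol{s})\geqslant 1/\sqrt{n}$ and the equality case of identical components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Numerical illustration of Theorem 2.1:
# f(s) = sqrt(sum s_i^2) / sum s_i  >=  1/sqrt(n), with equality iff all s_i coincide.
def f(s):
    s = np.asarray(s, dtype=float)
    return np.sqrt((s ** 2).sum()) / s.sum()

n = 20
bound = 1.0 / np.sqrt(n)

# random positive vectors never beat the bound ...
vals = [f(rng.uniform(0.1, 5.0, size=n)) for _ in range(10_000)]
print(min(vals) >= bound)                       # True

# ... and identical components attain it exactly
print(np.isclose(f(np.full(n, 3.7)), bound))    # True
```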
Below, we will compare the efficiencies of different estimators via the comparison of their lower bounds $$\rho_{\text{est}}(n)=\inf_{\forall\sigma_{i}}\rho_{\text{est}}[\hat{D}(T)]~{}.$$ (21) 3 Realized bridge variance estimators 3.1 Basic definitions An important motivation for the introduction of a new class of so-called “realized bridge variance estimators” is to obtain much reduced biases compared with those of the realized variance (5) observed for nonzero drifts $\mu(t)\not\equiv 0$. Definition 5 The bridge $Y(t,\mathbb{S}_{i})$ in discrete time steps of the original process $X(t)$ is defined by $$Y(t,\mathbb{S}_{i}):=X(t)-X_{i-1}-{t-t_{i-1}\over\Delta}\left(X_{i}-X_{i-1}\right),\qquad t\in\mathbb{S}_{i},$$ (22) where $X_{i}:=X(t_{i})$, $t_{i}:=i\Delta$ and $\Delta={T\over n}$. As an example, let $X(t)$ be the Wiener process with drift $X(t,\mu,\sigma)$ defined by (7). Using the translation and scale invariance properties of the Wiener process leads to $$Y(t,\mathbb{S}_{i})\sim\sigma\sqrt{\Delta}\left(W(\zeta)-\zeta W(1)\right),\quad\zeta={t-t_{i-1}\over\Delta}\in(0,1]~{}.$$ (23) This means that the bridge $Y(t,\mathbb{S}_{i})$ (22) is identical in law to $$Y(t,\mathbb{S}_{i})\sim\sigma\sqrt{\Delta}~{}Y(\zeta),$$ (24) where $$Y(t):=W(t)-t\cdot W(1),\qquad t\in(0,1],$$ (25) is the canonical bridge whose basic properties are given in Appendix A. Remark 2 The canonical bridge $Y(t)$ is completely independent of the drift $\mu$. This property is the fundamental reason for the better performance of the variance bridge estimators compared with the realized variance: the biases and efficiencies of bridge variance estimators do not depend on the drift $\mu$. 
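Remark 2 is a pathwise statement, not merely one about averages: the linear interpolation in (22) removes the $\mu t$ term identically. A small Python sketch (our construction; the grid size, $\mu$ and $\sigma$ are arbitrary) makes this explicit by building the bridge from the same Wiener path with and without drift:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sketch of Remark 2: the bridge (22) built from X(t) = mu*t + sigma*W(t)
# is exactly the same path for every drift mu.
m = 1000                                  # intra-step grid points
t = np.linspace(0.0, 1.0, m + 1)          # one time-step with Delta = 1
W = np.concatenate([[0.0],
                    np.cumsum(rng.standard_normal(m) * np.sqrt(1.0 / m))])

def bridge(X):
    # Eq. (22) on the single interval (0, 1]
    return X - X[0] - t * (X[-1] - X[0])

sigma = 0.8
Y_nodrift = bridge(sigma * W)             # mu = 0
Y_drift = bridge(5.0 * t + sigma * W)     # mu = 5, same Wiener path
print(np.max(np.abs(Y_drift - Y_nodrift)))   # zero up to rounding
```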
In the following, we explore the statistical properties of the bridge variance estimators $$\hat{D}_{\text{est}}(T)=\sum_{i=1}^{n}\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}~{},$$ (26) obtained from the general expression (12) by replacing the initial process $\{X(t_{i})\}$ by its corresponding bridge $\{Y(t_{i},\mathbb{S}_{i})\}$. Definition 6 The estimator (26) is called homogeneous if, when applied to the Wiener processes with drift (6), the following identity in law holds $$\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}\sim\sigma^{2}_{i}\Delta\cdot\hat{d}_{\text{est}}~{},$$ (27) where $$\hat{d}_{\text{est}}:=\hat{D}_{\text{est}}\{Y(t):t\in(0,1]\}$$ (28) is the canonical estimator of the spot variance depending on the canonical bridge $Y(t)$ (25). Obviously, the estimator (26) is unbiased if and only if $\text{E}\left[\hat{d}_{\text{est}}\right]=1$. Theorem 3.2 Under Assumption 1, the lower bound of the efficiency of the unbiased homogeneous integrated bridge variance estimator (27) is $$\rho_{\text{est}}(n)=\sqrt{\text{Var}\left[\hat{d}_{\text{est}}\right]\big{/}n}~{},$$ (29) where $\text{Var}\left[\hat{d}_{\text{est}}\right]$ is the variance of the canonical spot variance estimator $\hat{d}_{\text{est}}$ (28). Proof. Under Assumption 1, the unbiased homogeneous bridge variance estimator (26) is identical in law to $$\hat{D}_{\text{est}}(T)\sim\Delta\sum_{i=1}^{n}\sigma_{i}^{2}\cdot\hat{d}_{\text{est}}^{i}~{},$$ (30) where $\{\hat{d}^{i}_{\text{est}}\}$ are iid random variables with mean value $\text{E}\left[\hat{d}_{\text{est}}\right]=1$ and variance $\text{Var}\left[\hat{d}_{\text{est}}\right]$. 
Accordingly, the expected value and variance of the unbiased bridge variance estimator (26) are equal to $$\text{E}\left[\hat{D}_{\text{est}}(T)\right]=\Delta\sum_{i=1}^{n}\sigma_{i}^{2},\qquad\text{Var}\left[\hat{D}_{\text{est}}(T)\right]=\Delta^{2}~{}\text{Var}\left[\hat{d}_{\text{est}}\right]\sum_{i=1}^{n}\sigma_{i}^{4}.$$ (31) Substituting these relations into (15), we obtain $$\rho\left[\hat{D}_{\text{est}}(T)\right]=\displaystyle\sqrt{\text{Var}\left[\hat{d}_{\text{est}}\right]\sum_{i=1}^{n}\sigma_{i}^{4}}\Bigg{/}\displaystyle\sum_{i=1}^{n}\sigma_{i}^{2}.$$ (32) Using theorem 2.1, this yields the result (29). $\blacksquare$ 3.2 Simplest bridge variance estimator Our first example of a homogeneous bridge variance estimator is $$\hat{D}_{\text{simple}}(T)=\sum_{i=1}^{n}\hat{D}_{\text{simple}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\},$$ (33) where the estimator of the spot variance is given by $$\hat{D}_{\text{simple}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}=AY^{2}(t_{i}(\eta),\mathbb{S}_{i}),~{}~{}t_{i}(\eta)=t_{i-1}+\eta\cdot\Delta,~{}~{}\eta\in(0,1)~{},$$ (34) and $A$ is a normalizing factor. The estimator $\hat{D}_{\text{simple}}(T)$ is homogeneous and, if relations (6) are valid, then $$Y^{2}(t_{i}(\eta),\mathbb{S}_{i})\sim\sigma^{2}_{i}\Delta\cdot Y^{2}_{i}(\eta)~{},$$ (35) where $\{Y_{i}(\eta)\}$ are iid random variables that are identical in law to the canonical bridge (25). Substituting relation (35) into (33) leads to the identity in law $$\hat{D}_{\text{simple}}(T)\sim A\Delta\sum_{i=1}^{n}\sigma^{2}_{i}Y_{i}^{2}(\eta)~{}.$$ (36) The fact that the canonical bridge $Y(\eta)$ is Gaussian with mean square value $\text{E}[Y^{2}(\eta)]=\eta(1-\eta)$ implies that the estimator (33) is unbiased in the sense of definition 4 if $A=1\big{/}\eta(1-\eta)$. 
Accordingly, the variance of the estimator (33) is equal to the variance of the realized variance obtained for zero drift ($\mu(t)\equiv 0$): $$\text{Var}[\hat{D}_{\text{simple}}(T)]=2\Delta^{2}\sum_{i=1}^{n}\sigma_{i}^{4}~{}.$$ (37) This result means that the lower bound of the efficiency of the simplest bridge estimator (33) is equal to the lower bound of the efficiency of the realized variance estimator at zero drift: $$\rho_{\text{simple}}(n)=\rho_{\text{real}}(n)=\sqrt{{2\over n}}~{}.$$ (38) The shortcoming of the estimator (33) is that it is actually less efficient than the realized variance at zero drift, in a sense discussed below. 3.3 Comparative efficiencies of realized variance estimators Definition 7 Let the estimator of the spot variance $$\hat{D}_{\text{est}}\{X(t):t\in\mathbb{S}_{i}\}\qquad\text{or}\qquad\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}$$ depend on $\kappa_{\text{est}}$ values of the process $X(t)$ or $Y(t,\mathbb{S}_{i})$ at $\kappa_{\text{est}}$ time-steps within the time interval $t\in\mathbb{S}_{i}$. The corresponding estimators of the integrated variance $\hat{D}_{\text{est}}(T)$ (12) or (33) then use a total number $n_{\text{eff}}=\kappa_{\text{est}}\cdot n$ of time-steps. Example 1 The realized variance corresponds to $\kappa_{\text{real}}=1$. Indeed, the two values $\{X_{i-1},X_{i}\}$ are used to estimate the spot realized variance (4), and the first value is excluded from the semi-closed interval $\mathbb{S}_{i}$ (3). Example 2 For the simplest bridge estimator (33) with (34), $\kappa_{\text{simple}}=2$. Indeed, the estimator (34) depends on the bridge $Y(t_{i}(\eta),\mathbb{S}_{i})$ for $t_{i}(\eta)\in\mathbb{S}_{i}$ and $Y(t_{i}(\eta),\mathbb{S}_{i})$ (22) is defined by the open and close values $\{X_{i-1},X_{i}\}$ of the original stochastic process $X(t)$. Excluding the open value, this yields $\kappa_{\text{simple}}=2$. 
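Since $Y(\eta)$ is Gaussian with variance $\eta(1-\eta)$, the canonical estimator $\hat{d}_{\text{simple}}=Y^{2}(\eta)/\eta(1-\eta)$ is a chi-squared variable with one degree of freedom, so $\text{E}[\hat{d}_{\text{simple}}]=1$ and $\text{Var}[\hat{d}_{\text{simple}}]=2$ follow immediately. A Monte Carlo sketch (our own; grid resolution, sample size and $\eta$ are arbitrary choices) confirms both moments:

```python
import numpy as np

rng = np.random.default_rng(3)

# With A = 1/(eta*(1-eta)), the canonical spot estimator d_simple = A*Y(eta)^2
# has E[d] = 1 and Var[d] = 2 (chi-squared with one degree of freedom).
eta = 0.3
m, n_paths = 400, 100_000
dt = 1.0 / m
k = int(eta * m)                          # grid index of t = eta

dW = rng.standard_normal((n_paths, m)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)                 # W[:, j] = W((j+1)*dt)
Y_eta = W[:, k - 1] - eta * W[:, -1]      # canonical bridge (25) at t = eta
d_simple = Y_eta ** 2 / (eta * (1.0 - eta))
print(d_simple.mean(), d_simple.var())    # approximately 1 and 2
```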
Example 3 Consider the Garman & Klass (G&K) variance estimator based on open, high, low and close prices, used as the spot variance estimator in expression (12): $$\begin{array}[]{c}\displaystyle\hat{D}_{\text{GK}}\{X(t):t\in\mathbb{S}_{i}\}=k_{1}(H_{i}-L_{i})^{2}-k_{2}(C_{i}(H_{i}-L_{i})-2H_{i}L_{i})-k_{3}C_{i}^{2},\\ \displaystyle k_{1}=0.511,\qquad k_{2}=0.019,\qquad k_{3}=0.383,\end{array}$$ (39) where $O_{i}$ is the open value and $\{C_{i},H_{i},L_{i}\}$ are the close, high and low values measured relative to the open, $$O_{i}=X_{i-1},\quad C_{i}=X_{i}-O_{i},\quad H_{i}=\sup_{t\in\mathbb{S}_{i}}[X(t)-O_{i}],\quad L_{i}=\inf_{t\in\mathbb{S}_{i}}[X(t)-O_{i}].$$ Excluding the open value leads to $\kappa_{\text{GK}}=3$. Definition 8 We characterize the efficiencies of the novel variance estimators by comparing them with that of the standard realized variance estimator. The corresponding comparative efficiency $\mathcal{R}_{\text{est}}$ is constructed as the ratio of the lower bounds of the efficiencies of the realized variance and novel variance estimators: $$\mathcal{R}_{\text{est}}={\rho_{\text{real}}(\kappa_{\text{est}}\cdot n)\over\rho_{\text{est}}(n)}~{}.$$ (40) Putting in this expression $\rho_{\text{real}}(n)$ given by equation (20) and $\rho_{\text{est}}(n)$ given by expression (29) yields $$\mathcal{R}_{\text{est}}=\sqrt{{2\over\kappa_{\text{est}}\cdot\text{Var}[\hat{d}_{\text{est}}]}}~{}.$$ (41) Remark 3 For a given duration $T$ used to define the integrated variance (2), relation (41) takes into account that the typical waiting time between successive data samples is given by $\Delta_{\text{eff}}\simeq T\big{/}n_{\text{eff}}$. Such a waiting time should be approximately the same for the different generalized variance estimators proposed below, leading to similar distortions to the adequacy of the Itô process (1) in its ability to describe the real price process in the presence of discrete tick-by-tick and other microstructure noise. 
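The G&K spot estimator (39) is straightforward to exercise numerically. The sketch below (our own; the grid and sample sizes are arbitrary) applies it to simulated driftless Wiener steps with $\sigma^{2}\Delta=1$; note that a discrete grid slightly under-samples the true high and low, so the sample mean sits a little below the target:

```python
import numpy as np

rng = np.random.default_rng(4)

# Monte Carlo sketch of the G&K spot estimator (39) on one time-step of a
# driftless Wiener process with sigma = 1, Delta = 1 (target value 1).
k1, k2, k3 = 0.511, 0.019, 0.383
m = 3000                                   # intra-step grid points
ests = []
for _ in range(5):                         # 5 batches of 1000 simulated steps
    dW = np.sqrt(1.0 / m) * rng.standard_normal((1000, m))
    X = np.concatenate([np.zeros((1000, 1)), np.cumsum(dW, axis=1)], axis=1)
    c = X[:, -1] - X[:, 0]                 # close relative to open
    h = X.max(axis=1) - X[:, 0]            # high relative to open
    l = X.min(axis=1) - X[:, 0]            # low relative to open
    ests.append(k1 * (h - l) ** 2 - k2 * (c * (h - l) - 2 * h * l) - k3 * c ** 2)
est = np.concatenate(ests)
print(est.mean())        # close to 1 (a few percent low from discretization)
print(est.var())         # in the vicinity of 0.2693, cf. Eq. (43)
```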
Example 4 Let us come back to the simple variance estimator based on expression (34) for $\hat{D}_{\text{simple}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}$. The result (38) is equivalent to $\text{Var}[\hat{d}_{\text{simple}}]=2$. Substituting this value in (41) yields $$\mathcal{R}_{\text{simple}}={1\over\sqrt{\mathstrut\kappa_{\text{simple}}}}={1\over\sqrt{2}}\simeq 0.707~{}.$$ (42) The efficiency of the simplest bridge estimator is thus smaller than that of the realized variance. Example 5 Let us evaluate the comparative efficiency of the generalized realized variance estimator based on the spot G&K variance estimator in the case of zero drift $\mu(t)\equiv 0$. It is known that the variance of the spot G&K variance estimator given by (39) is equal to $$\text{Var}\left[\hat{D}_{\text{GK}}\{X(t):t\in\mathbb{S}_{i}\}\right]=\sigma_{i}^{4}\Delta^{2}\cdot 0.2693\quad\Rightarrow\quad\text{Var}\left[\hat{d}_{\text{GK}}\right]=0.2693~{}.$$ (43) This gives $$\mathcal{R}_{\text{GK}}=\sqrt{{2\over\mathstrut\kappa_{\text{GK}}\cdot 0.2693}}=\sqrt{{2\over\mathstrut 3\cdot 0.2693}}\simeq 1.573~{}.$$ (44) Therefore, for zero drift, the G&K realized variance estimator is approximately 1.6 times more efficient than the realized variance estimator. 3.4 High bridge variance estimator The fact that the G&K realized variance estimator based on open-high-low-close prices is significantly more efficient than the standard realized variance, at least for an Itô process $X(t)$ (1) with zero drift $\mu(t)\equiv 0$, suggests studying other estimators using different combinations of the open-high-low-close prices.
Let us start by analyzing the simplest case of what we will refer to as the “high bridge variance estimator”, defined through its spot variance given by $$\hat{D}_{\text{high}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}=A\cdot H_{i}^{2}~{},$$ (45) where $A$ is a normalizing factor and $$H_{i}=\sup_{t\in\mathbb{S}_{i}}Y(t,\mathbb{S}_{i})$$ (46) is the high value of the bridge $Y(t,\mathbb{S}_{i})$. Note that we use here the same notation for the high value of the bridge $Y(t,\mathbb{S}_{i})$ as for that of the original process $X(t)$, hoping that this will not give rise to any confusion. It follows from (24) that $$\hat{D}_{\text{high}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}\sim\sigma^{2}_{i}\Delta\cdot\hat{d}_{\text{high}},\qquad\hat{d}_{\text{high}}=AH^{2}~{},$$ (47) where the high value $H$ of the canonical bridge $Y(t)$ (25) has the following probability density function (pdf) $$\varphi_{\text{high}}(h)=4he^{-2h^{2}},\qquad h>0~{}.$$ (48) The derivation of the pdf (48) is given in Jeanblanc et al. (2009) (see also the derivations presented in Appendix B). Accordingly, the expected value and the variance of the square of $H$ are given by $$\text{E}\left[H^{2}\right]={1\over 2},\qquad\text{Var}\left[H^{2}\right]={1\over 4}.$$ (49) In order for the high spot bridge variance estimator to be unbiased, we have to choose in (45) the value $A=2$ for the normalizing factor. This gives $\text{Var}\left[\hat{d}_{\text{high}}\right]=1$. With $\kappa_{\text{high}}=2$, we find that the comparative efficiency (41) of the high bridge realized variance estimator is $\mathcal{R}_{\text{high}}=1$. Thus, the high bridge realized variance estimator has the same efficiency as the standard realized variance. But the advantage of the former is that, under Assumption 1, it is unbiased for any drift $\mu(t)\neq 0$. Remark 4 Let us give the intuition for the above result, obtained despite the larger value of $\kappa_{\text{high}}=2$ compared to $\kappa_{\text{real}}=1$.
The reason is that the pdf of the random variable $2H^{2}$ is narrower than that of the random variable $W^{2}$ defining the spot realized variance at zero drift. The same reason underlies the comparative efficiency of the G&K as well as the other high and low bridge realized variance estimators discussed below. The narrowness of the pdf’s of the high’s and low’s compared with the pdf’s of the increments of the original stochastic process $X(t)$ results from a weak version of the Law of Large Numbers, in the sense that the high’s and low’s incorporate significant additional information about the underlying process within a given time-step, thus leading to narrower pdf’s. 3.5 Time-high bridge variance estimator We now introduce a novel ingredient to improve further the estimation of the variance. In addition to using only the high $H_{i}$ of the bridge $Y(t,\mathbb{S}_{i})$, we also assume that the time $t_{\text{high}}^{i}$ of the occurrence of this high is recorded: $$t_{\text{high}}^{i}:~{}H_{i}=Y(t_{\text{high}}^{i},\mathbb{S}_{i})~{}.$$ (50) The corresponding time-high bridge spot variance estimator is given by $$\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}=A\cdot s\left({t_{\text{high}}^{i}-t_{i-1}\over\Delta}\right)\cdot H_{i}^{2}~{},$$ (51) where $A$ is a normalizing factor, while $s(t),t\in(0,1)$ is some function that remains to be determined so as to make the above spot variance estimator as efficient as possible. Before providing the solution of this problem, let us note that the following identity in law follows from (24): $$\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}\sim\sigma^{2}_{i}\Delta\cdot\hat{d}_{\text{est}}~{},$$ (52) where $$\hat{d}_{\text{est}}=A\cdot s(t_{\text{high}})\cdot H^{2}$$ (53) is the canonical time-high bridge estimator of the spot variance, $H$ is the high value of the canonical bridge $Y(t)$ (25), and $t_{\text{high}}$ is the corresponding time-point (50).
The expected value of the canonical estimator (53) is equal to $$\text{E}\left[\hat{d}_{\text{est}}\right]=A\int_{0}^{1}s(t)\alpha(t;2)dt,\quad% \alpha(t;\lambda):=\int_{0}^{\infty}h^{\lambda}\varphi_{\text{high}}(h,t)dh$$ (54) where $\varphi_{\text{high}}(h,t)$ is the joint pdf of $H$ and $t_{\text{high}}$. Taking $$A=1\Big{/}\int_{0}^{1}s(t)\alpha(t;2)dt~{},$$ (55) we obtain an unbiased time-high canonical bridge estimator: $$\hat{d}_{\text{est}}={s(t_{\text{high}})H^{2}\over\int_{0}^{1}s(t)\alpha(t;2)% dt}~{}.$$ (56) Its variance is $$\text{Var}\left[\hat{d}_{\text{est}}\right]={\int_{0}^{1}s^{2}(t)\alpha(t;4)dt% \over\left(\int_{0}^{1}s(t)\alpha(t;2)dt\right)^{2}}-1.$$ (57) Theorem 3.3 The function $s(t)$ that minimizes the variance (57) of the unbiased time-high canonical bridge estimator (56) is $$s_{\textnormal{t-high}}(t)={\alpha(t;2)\over\alpha(t;4)}.$$ (58) The corresponding minimal variance is equal to $$\begin{array}[]{c}\displaystyle\text{Var}\left[{s_{\text{t-high}}(t_{\text{% high}})H^{2}\over\int_{0}^{1}s(t)\alpha(t;2)dt}\right]=\inf_{\forall\,s(t)}% \text{Var}\left[\hat{d}_{\text{est}}\right]={1\over\mathcal{E}_{\text{t-high}}% }-1,\\ \displaystyle\mathcal{E}_{\text{t-high}}=\int_{0}^{1}{\alpha^{2}(t;2)\over% \alpha(t;4)}dt.\end{array}$$ (59) Proof. 
We use the Schwarz inequality $$\left(\int_{0}^{1}A(t)B(t)dt\right)^{2}\leqslant\int_{0}^{1}A^{2}(t)dt\int_{0}% ^{1}B^{2}(t)dt$$ (60) with $$A(t)=s(t)\sqrt{\alpha(t;4)},\qquad B(t)={\alpha(t;2)\over\sqrt{\alpha(t;4)}}~{},$$ (61) to obtain $$\left(\int_{0}^{1}s(t)\alpha(t;2)dt\right)^{2}\leqslant\int_{0}^{1}s^{2}(t)% \alpha(t;4)dt\int_{0}^{1}{\alpha^{2}(t;2)\over\alpha(t;4)}dt.$$ (62) After simple transformations, we rewrite the last inequality in the form $$\text{Var}\left[\hat{d}_{\text{t-high}}\right]={\int_{0}^{1}s^{2}(t)\alpha(t;4% )dt\over\left(\int_{0}^{1}s(t)\alpha(t;2)dt\right)^{2}}-1\geqslant{1\over\int_% {0}^{1}{\alpha^{2}(t;2)\over\alpha(t;4)}dt}-1~{}.$$ (63) The equality in (63) is reached by substituting in it $s(t)=s_{\text{t-high}}(t)$ given by expression (58). $\blacksquare$ The joint pdf of $H$ and $t_{\text{high}}$ is derived in Appendix B and reads $$\varphi_{\text{high}}(h,t)=\sqrt{2\over\pi}{h^{2}\over\sqrt{t^{3}(1-t)^{3}}}% \exp\left(-{h^{2}\over 2t(1-t)}\right),\qquad h>0,\qquad t\in(0,1)~{}.$$ (64) Substituting this expression for $\varphi_{\text{high}}(h,t)$ into (54) yields $$\alpha(t;\lambda)={2\over\sqrt{\pi}}\left[2t(1-t)\right]^{\lambda\over 2}% \Gamma\left({3+\lambda\over 2}\right)~{}.$$ (65) Therefore, $$s_{\text{t-high}}(t)={1\over 5t(1-t)},\qquad\mathcal{E}_{\text{t-high}}={3% \over 5}\quad\Rightarrow\quad\text{Var}\left[\hat{d}_{\text{t-high}}\right]={2% \over 3}~{},$$ (66) and $$\mathcal{R}_{\text{t-high}}=\sqrt{3\over 2}\simeq 1.225~{}.$$ (67) Thus, the time-high bridge realized variance estimator is less efficient than the corresponding G&K estimator at zero drift, but is more efficient than the realized variance. Remark 5 The numerical result (67) takes into account that the use of $t^{i}_{\text{high}}$ does not increase the number of sample values used in the spot estimator (51). Thus, $\kappa_{\text{t-high}}=\kappa_{\text{high}}=2$. 
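Note that substituting (66) into (56) gives the fully explicit unbiased form $\hat{d}_{\text{t-high}}=H^{2}/\left(3\,t_{\text{high}}(1-t_{\text{high}})\right)$. Both this estimator and the high estimator $2H^{2}$ of section 3.4 admit a quick Monte Carlo check. The sketch below is ours (helper name, grid size and seed are arbitrary choices, not from the paper); since the maximum is taken over a discrete grid, the sampled high is slightly underestimated, so the means come out a few percent below 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def bridge_high(n_paths, n_steps):
    """High value H and its occurrence time t_high for canonical Brownian bridges
    Y(t) = W(t) - t W(1), sampled on a regular grid (hypothetical helper)."""
    t = np.arange(1, n_steps + 1) / n_steps
    W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n_steps), (n_paths, n_steps)), axis=1)
    Y = (W - t * W[:, -1:])[:, :-1]        # interior grid points; Y(1) = 0 is dropped
    return Y.max(axis=1), t[:-1][Y.argmax(axis=1)]

H, t_high = bridge_high(10_000, 500)

d_high = 2.0 * H ** 2                                 # eqs. (45)-(49): E = 1, Var = 1
d_thigh = H ** 2 / (3.0 * t_high * (1.0 - t_high))    # (56) with s(t) of (66): E = 1, Var = 2/3

print(d_high.mean(), d_high.var())
print(d_thigh.mean(), d_thigh.var())
```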
4 Bridge time-high-low estimators 4.1 Bridge Parkinson estimator Definition 9 The bridge realized variance estimator (26) that uses as spot variance estimator $$\hat{D}_{\text{bPark}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}=A\cdot(H_{i}-L_{i})^{2}$$ (68) is called the bridge Parkinson estimator. In expression (68), $H_{i}$ and $L_{i}$ are the high and low values of the bridges $Y(t,\mathbb{S}_{i})$ (22). The bridge Parkinson estimator is identical in law to $$\hat{D}_{\text{bPark}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}\sim\sigma_{i}^{2}\Delta\cdot\hat{d}_{\text{bPark}},\qquad\hat{d}_{\text{bPark}}=A\cdot(H-L)^{2},$$ (69) where $H$, $L$ are the high and low values of the canonical bridge $Y(t)$ (25). The joint pdf of $H$ and $L$ has been derived by Saichev et al. (2009) and reads $$\begin{array}[]{c}\displaystyle\varphi(h,\ell)=\sum_{m=-\infty}^{\infty}m\left[m\mathcal{I}(m(h-\ell))+(1-m)\mathcal{I}(m(h-\ell)+\ell)\right],\\ \displaystyle\mathcal{I}(h)=4(4h^{2}-1)~{}e^{-2h^{2}}.\end{array}$$ (70) It will be clear below that it is convenient to describe the joint statistical properties of the high $H$ and low $L$ by using polar coordinates $$H=R\cos\Theta,\qquad L=R\sin\Theta,\qquad R\in(0,\infty),\qquad\Theta\in\left(-{\pi\over 2},0\right)~{}.$$ (71) Accordingly, we rewrite the canonical estimator (69) in the form $$\hat{d}_{\text{bPark}}=AR^{2}\left(1-\sin 2\Theta\right)~{}.$$ (72) Choosing the constant $A$ that makes the estimator (69) unbiased, we obtain $$\begin{array}[]{c}\displaystyle\hat{d}_{\text{bPark}}={R^{2}(1-\sin 2\Theta)\over\int_{-\pi/2}^{0}(1-\sin 2\theta)\alpha(\theta;2)d\theta},\\ \displaystyle\alpha(\theta;\lambda)=\int_{0}^{\infty}r^{\lambda+1}\varphi(r\cos\theta,r\sin\theta)dr.\end{array}$$ (73) Substituting expression (70) yields $$\begin{array}[]{c}\displaystyle\alpha(\theta;\lambda)=\\ \displaystyle\sum_{m=-\infty}^{\infty}m\left[m\beta(m(\cos\theta-\sin\theta);
\lambda)+(1-m)\beta(m(\cos\theta-\sin\theta)+\sin\theta;\lambda)\right],\\ \displaystyle\beta(y;\lambda)={C(\lambda)\over|y|^{2+\lambda}},\qquad C(% \lambda)={1+\lambda\over\sqrt{2^{\lambda}}}\Gamma\left({2+\lambda\over 2}% \right).\end{array}$$ (74) The variance of the canonical bridge Parkinson estimator is equal to $$\text{Var}\left[\hat{d}_{\text{bPark}}\right]={\int_{-\pi/2}^{0}(1-\sin 2% \theta)^{2}\alpha(\theta;4)d\theta\over\left(\int_{-\pi/2}^{0}(1-\sin 2\theta)% \alpha(\theta;2)d\theta\right)^{2}}-1\simeq 0.2000~{}.$$ (75) Substituting this value into (41) and taking into account that $\kappa_{\text{bPark}}=3$ for the bridge Parkinson estimator, we obtain the comparative efficiency $$\text{Var}\left[\hat{d}_{\text{bPark}}\right]=0.2000,\quad\kappa_{\text{bPark}% }=3,\qquad\Rightarrow\qquad\mathcal{R}_{\text{bPark}}\simeq 1.823~{},$$ (76) which means that the bridge Parkinson estimator is significantly more efficient than the G&K estimator at zero drift. Remark 6 We stress that the canonical estimator $\hat{d}_{\text{bPark}}$ is significantly different from the well-known canonical Parkinson estimator (see Parkinson (1980)) $$\hat{d}_{\text{Park}}={(H-L)^{2}\over 4\ln 2}~{},$$ (77) where $H$ and $L$ are the high and low values of the canonical Wiener process with drift $X(t,\gamma)$ (9). In contrast with the bridge Parkinson estimator (73) which is unbiased for any $\gamma$, the standard Parkinson estimator is biased at nonzero drift. Moreover, the variance of the standard Parkinson estimator at zero drift is $$\text{Var}\left[\hat{d}_{\text{Park}}\right]\simeq 0.4073~{},$$ (78) which is approximately twice the variance of the bridge Parkinson estimator (76). 4.2 Non-quadratic homogeneous estimators Until now, we have considered homogeneous (in the sense of definition 6) high-low estimators that are quadratic functions of the high and low values. 
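Before moving to these generalizations, the drift-robustness claimed in Remark 6 is easy to check numerically: the bridge subtracts the drift exactly, so the bridge Parkinson estimator keeps its unit expectation for any $\gamma$, while the standard Parkinson estimator (77) is biased upward. In the sketch below (our own; for simplicity the constant $A$ of (68) is calibrated by Monte Carlo on an independent zero-drift batch, rather than from the integral in (73)):

```python
import numpy as np

rng = np.random.default_rng(1)

def highs_lows(n_paths, n_steps, gamma=0.0, bridge=True):
    """High and low of the bridge Y(t) or of the drifted canonical process
    X(t, gamma) = gamma*t + W(t), on a regular grid (hypothetical helper)."""
    t = np.arange(1, n_steps + 1) / n_steps
    W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n_steps), (n_paths, n_steps)), axis=1)
    X = gamma * t + W
    P = X - t * X[:, -1:] if bridge else X   # the bridge cancels the drift exactly
    return P.max(axis=1), P.min(axis=1)

# Calibrate the normalizing constant A of (68) at gamma = 0:
H0, L0 = highs_lows(20_000, 500)
A = 1.0 / np.mean((H0 - L0) ** 2)

gamma = 1.5
Hb, Lb = highs_lows(20_000, 500, gamma, bridge=True)
Hx, Lx = highs_lows(20_000, 500, gamma, bridge=False)
bpark_mean = np.mean(A * (Hb - Lb) ** 2)               # stays close to 1 for any drift
park_mean = np.mean((Hx - Lx) ** 2) / (4 * np.log(2))  # standard Parkinson, eq. (77)
print(bpark_mean, park_mean)
```

With $\gamma=1.5$, the standard Parkinson mean exceeds 1 by tens of percent, while the bridge version stays at 1 within statistical error.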
We now consider the more general class of homogeneous estimators, whose spot variance estimators have the form $$\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i}\}=\mathcal{D}_{% \text{est}}(H_{i},L_{i})~{},$$ (79) where $\mathcal{D}_{\text{est}}(h,\ell)$ is an arbitrary homogeneous function of second order. Example 6 To illustrate the notion of non-quadratic homogeneous functions of second order, consider the typical example $$\mathcal{D}_{\text{est}}(H_{i},L_{i})={(H_{i}-L_{i})^{3}\over\sqrt{H_{i}^{2}+L% _{i}^{2}}}~{},$$ (80) which satisfies the scaling property $$\mathcal{D}_{\text{est}}(\delta\cdot H_{i},\delta\cdot L_{i})\equiv\delta^{2}% \cdot\mathcal{D}_{\text{est}}(H_{i},L_{i})\qquad\forall\delta>0~{}.$$ (81) The following theorem states that the spot variance estimator (79) satisfies the relations (27), (28) of definition 6 for homogeneous estimators. Theorem 4.4 The spot variance estimator (79) is homogeneous in the sense of definition 6. Proof. Let $H_{i}$ and $L_{i}$ be the high and low values of the bridge $Y(t,\mathbb{S}_{i})$. Due to relation (24) and Assumption 1, the following identity in law holds $$\{H_{i},L_{i}\}\sim\sigma_{i}\sqrt{\Delta}\cdot\{H,L\},$$ (82) where $\{H,L\}$ are the high and low values of the canonical bridge $Y(t)$ (25). 
Substituting this last relation into (79) yields $$\mathcal{D}_{\text{est}}(H_{i},L_{i})\sim\mathcal{D}_{\text{est}}(\sigma_{i}\sqrt{\Delta}H,\sigma_{i}\sqrt{\Delta}L)~{}.$$ (83) Using the homogeneity of the function $\mathcal{D}_{\text{est}}(h,\ell)$, we rewrite the previous relation in the form $$\mathcal{D}_{\text{est}}(H_{i},L_{i})\sim\sigma_{i}^{2}\Delta\cdot\mathcal{D}_{\text{est}}(H,L)~{},$$ (84) which is analogous to expression (27), where the canonical estimator of the spot variance is equal to $$\hat{d}_{\text{est}}=\mathcal{D}_{\text{est}}(H,L)~{}.$$ (85) $\blacksquare$ Using the polar coordinates (71), the canonical estimator $\hat{d}_{\text{est}}$ reads $$\hat{d}_{\text{est}}=\mathcal{D}_{\text{est}}(R\cos\Theta,R\sin\Theta)~{}.$$ (86) Using the homogeneity of the function $\mathcal{D}_{\text{est}}$, we obtain $$\hat{d}_{\text{est}}=R^{2}\cdot s(\Theta),\qquad s(\theta)=\mathcal{D}_{\text{est}}(\cos\theta,\sin\theta)~{}.$$ (87) Its expected value is equal to $$\text{E}\left[\hat{d}_{\text{est}}\right]=\int_{-\pi/2}^{0}s(\theta)\alpha(\theta;2)d\theta~{},$$ (88) where the function $\alpha(\theta;\lambda)$ is given by the equality (73).
Thus, the homogeneous non-quadratic canonical estimator reads $$\hat{d}_{\text{est}}={R^{2}s(\Theta)\over\int_{-\pi/2}^{0}s(\theta)\alpha(\theta;2)d\theta}\qquad\Rightarrow\qquad\text{E}[\hat{d}_{\text{est}}]=1.$$ (89) Accordingly, the variance of the unbiased estimator is equal to $$\text{Var}\left[\hat{d}_{\text{est}}\right]={\int_{-\pi/2}^{0}s^{2}(\theta)\alpha(\theta;4)d\theta\over\left(\int_{-\pi/2}^{0}s(\theta)\alpha(\theta;2)d\theta\right)^{2}}-1.$$ (90) One can easily prove the result analogous to theorem 3.3 that the minimum value of the variance (90) of the canonical estimator (89) with respect to all possible functions $s(\theta)$ is given by $$\text{Var}\left[\hat{d}_{\text{me}}\right]=\inf_{\forall s(\theta)}\text{Var}\left[\hat{d}_{\text{est}}\right]={1\over\mathcal{E}_{\text{me}}}-1,\qquad\mathcal{E}_{\text{me}}=\int_{-\pi/2}^{0}{\alpha^{2}(\theta;2)\over\alpha(\theta;4)}d\theta,$$ (91) where $\hat{d}_{\text{est}}$ is an arbitrary homogeneous canonical estimator of the form (89), while $\hat{d}_{\text{me}}$ is the corresponding most efficient estimator given by $$\hat{d}_{\text{me}}={1\over\mathcal{E}_{\text{me}}}~{}R^{2}s_{\text{me}}(\Theta),\qquad s_{\text{me}}(\theta)={\alpha(\theta;2)\over\alpha(\theta;4)}~{}.$$ (92) Calculating the numerical value of the integral in expression (91) yields $$\text{Var}\left[\hat{d}_{\text{me}}\right]\simeq 0.1974,\qquad\kappa_{\text{me}}=3,\qquad\Rightarrow\qquad\mathcal{R}_{\text{me}}\simeq 1.838~{},$$ (93) which shows a high efficiency compared with the standard realized variance.
4.3 Time-high-low homogeneous estimator Let us consider the unbiased homogeneous time-high-low canonical estimator $$\hat{d}_{\text{est}}={R^{2}s(\Theta,t_{\text{last}})\over\int_{0}^{1}dt\int_{-% \pi/2}^{0}d\theta~{}s(\theta,t)\alpha_{\text{last}}(\theta,t;2)},$$ (94) where $s(\theta,t)$ is an arbitrary function, $t_{\text{last}}=\sup\{t_{L},t_{H}\}$ is the larger of the two times at which occur the high and low values of the canonical bridge and $\alpha_{\text{last}}(\theta,t;\lambda)$ is given by (C.17) in Appendix C.3. It is easy to prove the result analogous to theorem 3.3 that the most efficient estimator of the form (94) is $$\hat{d}_{\text{t-me}}={R^{2}\over\mathcal{E}_{\text{t-me}}}~{}{\alpha_{\text{% last}}(\Theta,t_{\text{last}};2)\over\alpha_{\text{last}}(\Theta,t_{\text{last% }};4)},\quad\mathcal{E}_{\text{t-me}}=\int_{0}^{1}dt\int_{-\pi/2}^{0}d\theta~{% }{\alpha^{2}_{\text{last}}(\theta,t;2)\over\alpha_{\text{last}}(\theta,t;4)},$$ (95) and the variance of this estimator is equal to $$\text{Var}\left[\hat{d}_{\text{t-me}}\right]={1\over\mathcal{E}_{\text{t-me}}}% -1~{}.$$ (96) The numerical calculation of $\mathcal{E}_{\text{t-me}}$ gives $$\text{Var}\left[\hat{d}_{\text{t-me}}\right]\simeq 0.1873,\quad\kappa_{\text{t% -me}}=3,\qquad\Rightarrow\qquad\mathcal{R}_{\text{t-me}}\simeq 1.887.$$ (97) The estimator of the realized variance based on the canonical estimator (95) is significantly more efficient than that based on the G&K estimator at zero drift. 4.4 High-low-close bridge estimator Until now, we have not used explicitly the information contained in the close values $X_{i}$ (4) of the time-step intervals $\mathbb{S}_{i}$ (3). The close values $X_{i}$ have been used only for the construction of the bridge $Y(t,\mathbb{S}_{i})$ (22). 
It seems plausible that taking into account explicitly the close values $X_{i}$ in the construction of spot variance estimators may produce bridge realized variance estimators $\hat{D}_{\text{est}}(T)=\sum_{i=1}^{n}\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}% ):t\in\mathbb{S}_{i};X_{i}\}$ that are even more efficient than those considered until now. We show that this is indeed the case by studying the example associated with the spot variance estimator given by $$\hat{D}_{\text{est}}\{Y(t,\mathbb{S}_{i}):t\in\mathbb{S}_{i};X_{i}\}=\mathcal{% D}_{\text{est}}(H_{i},L_{i},X_{i})~{},$$ (98) where $\mathcal{D}_{\text{est}}(h,\ell,x)$ is an arbitrary homogeneous function satisfying relation (81). Due to its homogeneity, the following identity in law holds true $$\mathcal{D}_{\text{est}}(H_{i},L_{i},X_{i})\sim\sigma_{i}^{2}\Delta\cdot\hat{d% }_{\text{est}},\qquad\hat{d}_{\text{est}}=\mathcal{D}_{\text{est}}(H,L,X),$$ (99) where $H$ and $L$ are the high and low values of the canonical bridge (25), while $X=\gamma+W$ is the close value of the underlying canonical Wiener process with drift (9). It is known (see, for instance, Jeanblanc et al. (2009)) that the canonical bridge $Y(t)$ and $W$ are statistically independent. Thus, the joint pdf $\varphi(h,\ell,x)$ of the three random variables $\{H,L,X\}$ is equal to $$\varphi(h,\ell,x;\gamma)={1\over\sqrt{2\pi}}\exp\left(-{(x-\gamma)^{2}\over 2}% \right)\varphi(h,\ell)~{},$$ (100) where the joint pdf $\varphi(h,\ell)$ of high and low values is given by expression (70). 
Analogously to (86), it is convenient to represent the canonical estimator $\hat{d}_{\text{est}}$ (99) in the spherical coordinate system $$\begin{array}[]{c}H=R\cos\Upsilon\cos\Theta,\qquad L=R\cos\Upsilon\sin\Theta,\qquad X=R\sin\Upsilon,\\ \displaystyle\Upsilon\in(-\pi/2,\pi/2),\qquad\Theta\in(-\pi/2,0).\end{array}$$ (101) The canonical estimator $\hat{d}_{\text{est}}$ (99) then takes the form $$\hat{d}_{\text{est}}=R^{2}s(\Theta,\Upsilon),$$ (102) where $$s(\Theta,\Upsilon)=\mathcal{D}_{\text{est}}(\cos\Upsilon\cos\Theta,\cos\Upsilon\sin\Theta,\sin\Upsilon)~{}.$$ (103) Analogously to (94) and (95), the unbiased most efficient high-low-close canonical estimator is given by $$\hat{d}_{\text{me-x}}={1\over\mathcal{E}_{\text{me-x}}}~{}R^{2}s_{\text{me-x}}(\Theta,\Upsilon;\gamma),\qquad s_{\text{me-x}}(\theta,\upsilon;\gamma)={\alpha(\theta,\upsilon;2;\gamma)\over\alpha(\theta,\upsilon;4;\gamma)}~{}.$$ (104) The function $\alpha(\theta,\upsilon;\lambda;\gamma)$ is defined by the equality $$\alpha(\theta,\upsilon;\lambda;\gamma)=\int_{0}^{\infty}r^{\lambda+2}\varphi(r\cos\upsilon\cos\theta,r\cos\upsilon\sin\theta,r\sin\upsilon;\gamma)dr~{}.$$ (105) The variance of the most efficient canonical estimator $\hat{d}_{\text{me-x}}$ is equal to $$\text{Var}\left[\hat{d}_{\text{me-x}}\right]={1\over\mathcal{E}_{\text{me-x}}}-1,$$ (106) with $$\mathcal{E}_{\text{me-x}}=\int_{-\pi/2}^{0}d\theta\int_{-\pi/2}^{\pi/2}d\upsilon\cos\upsilon{\alpha^{2}(\theta,\upsilon;2;\gamma)\over\alpha(\theta,\upsilon;4;\gamma)}~{}.$$ (107) The calculation of the integral (107) for $\gamma=0$ gives $$\text{Var}\left[\hat{d}_{\text{me-x}}\right]\simeq 0.1794,\quad\kappa_{\text{me-x}}=3,\qquad\Rightarrow\qquad\mathcal{R}_{\text{me-x}}\simeq 1.928~{}.$$ (108) This estimator is definitely better than the most efficient time-high-low canonical estimator, as can be seen by comparing (108) with (97).
4.5 Time-high-low-close bridge estimator The last example we present here is the realized variance estimator that uses in each interval $\mathbb{S}_{i}$ the high and low values $H_{i}$, $L_{i}$ of the bridge $Y(t,\mathbb{S}_{i})$ (22), the close value $X_{i}$ of the original stochastic process $X(t)$ and the time instant $t_{\text{last}}^{i}=\sup\{t_{L}^{i},t_{H}^{i}\}$ defined as the larger of the two times at which occur the high and low values of the canonical bridge. One can rigorously prove that, analogously to (104), the homogeneous time-OHLC bridge canonical estimator that is most efficient for a given value of $\gamma$ is equal to $$\begin{array}[]{c}\displaystyle\hat{d}_{\text{t-me-x}}(\Theta,\Upsilon,t_{\text{last}};\gamma)=R^{2}s_{\text{t-me-x}}(\Theta,\Upsilon,t_{\text{last}};\gamma),\\ \displaystyle s_{\text{t-me-x}}(\theta,\upsilon,t;\gamma)={1\over\mathcal{E}_{\text{t-me-x}}(\gamma)}~{}{\alpha(\theta,\upsilon,t;2;\gamma)\over\alpha(\theta,\upsilon,t;4;\gamma)},\end{array}$$ (109) where $$\mathcal{E}_{\text{t-me-x}}(\gamma)=\int_{0}^{1}dt\int_{-\pi/2}^{0}d\theta\int_{-\pi/2}^{\pi/2}d\upsilon~{}\cos\upsilon{\alpha^{2}(\theta,\upsilon,t;2;\gamma)\over\alpha(\theta,\upsilon,t;4;\gamma)}$$ (110) and $$\alpha(\theta,\upsilon,t;\lambda;\gamma)=\int_{0}^{\infty}r^{\lambda+2}\varphi_{\text{last}}(r\cos\upsilon\cos\theta,r\cos\upsilon\sin\theta,r\sin\upsilon,t;\gamma)dr.$$ (111) The joint pdf $\varphi_{\text{last}}(h,\ell,x,t;\gamma)$ is $$\varphi_{\text{last}}(h,\ell,x,t;\gamma)={1\over\sqrt{2\pi}}\exp\left(-{(x-\gamma)^{2}\over 2}\right)\varphi_{\text{last}}(h,\ell,t),$$ (112) where $\varphi_{\text{last}}(h,\ell,t)$ is given by expression (C.15) in Appendix C.2. Remark 7 Recall that the parameter $\gamma$ (9) is unknown, because both the drifts $\mu_{i}$ and the instantaneous variances $\sigma^{2}_{i}$ in equations (6) are generally unknown.
Therefore, our strategy below is to choose, for definiteness, $\gamma=0$ and then explore the dependence on $\gamma$ of the bias and efficiency of the different “zero drift” estimators. Accordingly, we will use below the following shorthand notations, omitting the argument $\gamma$, such as $$\hat{d}_{\text{t-me-x}}(\Theta,\Upsilon,t):=\hat{d}_{\text{t-me-x}}(\Theta,\Upsilon,t;\gamma=0).$$ The calculation of the integral (110), where $\alpha(\theta,\upsilon,t;\lambda)$ is given by expression (C.18) in Appendix C.4, yields for $\gamma=0$ $$\text{Var}\left[\hat{d}_{\text{t-me-x}}\right]={1\over\mathcal{E}_{\text{t-me-x}}}-1\simeq 0.1710,\quad\kappa_{\text{t-me-x}}=3,\quad\Rightarrow\quad\mathcal{R}_{\text{t-me-x}}\simeq 1.975~{}.$$ (113) This estimator is more efficient than all the previous ones discussed until now. 5 Numerical simulations and comments 5.1 Description of numerical simulations The goal of this section is to check by numerical simulations some of the analytical results obtained above. Realizations of the canonical Wiener process $X(t;\gamma)$ (9) with drift for time $t\in[0,1]$ are obtained numerically as cumulative sums of a number $I(t)=10^{5}$ of Gaussian summands, corresponding to a discrete time step $\Delta=10^{-5}$. For each numerical realization, we calculate the values of the open-close spot variance canonical estimator, equal in this case to $$\hat{d}_{\text{real}}=(\gamma+W)^{2}~{},$$ (114) and the values of the G&K canonical estimator $$\hat{d}_{\text{GK}}=k_{1}(H-L)^{2}-k_{2}(W(H-L)-2HL)-k_{3}(\gamma+W)^{2}~{},$$ (115) where $H$ and $L$ are the high and low values of the simulated process $X(t;\gamma)$. We also constructed numerical realizations of the bridge process $Y(t)$ (25) and calculated the corresponding values of the canonical estimator $\hat{d}_{\text{t-me-x}}$ (109).
This estimator depends on the function $\alpha(\theta,\upsilon,t;\lambda)$ defined by expression (C.18) in Appendix C.4, which is explicitly obtained by summing a doubly-infinite series (C.19). In practice, we estimate this double sum by keeping only the first $101$ terms in each dimension, i.e., by summing $101\times 101\simeq 10^{4}$ terms in (C.18). Remark 8 At first glance, it would seem that the calculation of the G&K estimator (115), which needs only a few simple arithmetic operations, is much easier than the evaluation of the large number of summands in the series (C.18) that defines the estimator $\hat{d}_{\text{t-me-x}}$ (109). In our computerized world, it turns out that there is actually no significant difference from the computational point of view. 5.2 Statistics of the estimators in the case of zero drift ($\gamma=0$) Figure 1 shows 5000 realizations of the open-close estimator $\hat{d}_{\text{real}}$ (114), of the G&K estimator (115) and of the estimator $\hat{d}_{\text{t-me-x}}$ in the case of the Wiener process with zero drift ($\gamma=0$). The last estimator is clearly the most efficient, in comparison with the open-close and the G&K estimators.
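The zero-drift statistics of the open-close and G&K canonical estimators can be reproduced in a few lines. The sketch below is our own re-implementation, with a much coarser grid than the $\Delta=10^{-5}$ used in the text, so the high/low-based values carry a small downward discretization bias.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_steps = 20_000, 500
k1, k2, k3 = 0.511, 0.019, 0.383               # G&K coefficients of eq. (39)

W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n_steps), (n_paths, n_steps)), axis=1)
H, L, C = W.max(axis=1), W.min(axis=1), W[:, -1]   # values relative to the open O = 0

d_real = C ** 2                                                           # eq. (114), gamma = 0
d_gk = k1 * (H - L) ** 2 - k2 * (C * (H - L) - 2 * H * L) - k3 * C ** 2   # eq. (115), gamma = 0

print(d_real.mean(), d_real.var())   # theory: 1 and 2
print(d_gk.mean(), d_gk.var())       # theory: 1 and 0.2693, minus discretization bias
```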
The expected values and variances of these three estimators obtained by statistical averaging over $10^{4}$ samples are $$\begin{array}[]{c}\text{E}[\hat{d}_{\text{real}}]\simeq 1.0110,\qquad\text{E}[\hat{d}_{\text{GK}}]\simeq 1.0058,\qquad\text{E}[\hat{d}_{\text{t-me-x}}]\simeq 1.0001,\\ \displaystyle\text{Var}[\hat{d}_{\text{real}}]\simeq 1.9947,\qquad\text{Var}[\hat{d}_{\text{GK}}]\simeq 0.2669,\qquad\text{Var}[\hat{d}_{\text{t-me-x}}]\simeq 0.1696.\end{array}$$ These values are consistent with the theoretical analytical predictions obtained in previous sections: $$\begin{array}[]{c}\text{E}[\hat{d}_{\text{real}}]=\text{E}[\hat{d}_{\text{GK}}]=\text{E}[\hat{d}_{\text{t-me-x}}]=1,\\ \displaystyle\text{Var}[\hat{d}_{\text{real}}]=2,\qquad\text{Var}[\hat{d}_{\text{GK}}]\simeq 0.2693,\qquad\text{Var}[\hat{d}_{\text{t-me-x}}]\simeq 0.1710.\end{array}$$ In order to have truly comparable efficiencies of these realized variance estimators, bearing in mind that their effective sample sizes are different ($\kappa_{\text{real}}=1$, $\kappa_{\text{GK}}=\kappa_{\text{t-me-x}}=3$), we performed moving averages with $r=30$ subsequent samples for the open-close estimator (114) and with $r=10$ subsequent samples for the G&K estimator (115) and estimator $\hat{d}_{\text{t-me-x}}$ (109). Figure 2 presents these moving averages, which mimic the normalized estimators of the integrated variance in the case where all instantaneous variances are the same ($\sigma_{i}^{2}=\sigma^{2}=\text{const}$). It is clear that the open-close estimator of the realized variance remains significantly less efficient than the G&K estimator, and much less efficient than the most efficient estimator $\hat{d}_{\text{t-me-x}}$. 5.3 $\gamma$-dependence of biases and efficiencies of canonical estimators In the previous subsection, we presented detailed calculations of the comparative efficiency of unbiased variance estimators for the particular case of Wiener processes with zero drift.
In real financial markets, the drift process $\mu(t)$ is unknown and there is no reason for it to vanish. Thus, it is important to explore quantitatively the dependence on the parameter $\gamma$ (9) of the biases and efficiencies of the spot variance canonical estimators described above. We begin with the open-close spot variance canonical estimator $\hat{d}_{\text{real}}$ (114). It is easy to show that its expected value and variance are quadratic functions of $\gamma$: $$\text{E}\left[\hat{d}_{\text{real}}\right]=1+\gamma^{2},\qquad\text{Var}\left[\hat{d}_{\text{real}}\right]=2+4\gamma^{2}.$$ (116) The spot variance homogeneous high-low and time-high-low canonical bridge estimators, such as the bridge Parkinson estimator $\hat{d}_{\text{bPark}}$ (73) and the time-high-low estimator $\hat{d}_{\text{t-me}}$ (95), are unbiased for all $\gamma$: $$\text{E}\left[\hat{d}_{\text{bPark}}\right]=\text{E}\left[\hat{d}_{\text{t-me}}\right]\equiv 1~{}.$$ Their variances do not depend on $\gamma$ at all: $$\text{Var}\left[\hat{d}_{\text{bPark}}\right]\simeq 0.2000,\qquad\text{Var}\left[\hat{d}_{\text{t-me}}\right]\simeq 0.1873\qquad\forall~{}\gamma.$$ (117) To obtain the $\gamma$-dependence of the biases and variances of the G&K canonical estimator $\hat{d}_{\text{GK}}$ (115) and of the canonical estimator $\hat{d}_{\text{t-me-x}}$ (109), we generate $10^{4}$ numerical realizations of the canonical Wiener process $X(t,\gamma)$ (9) with drift, for $\gamma=0,0.1,\ldots,1.5,1.6$. Then, we calculate the statistical averages and variances of the corresponding $10^{4}$ realizations of the canonical estimators $\hat{d}_{\text{GK}}$ and $\hat{d}_{\text{t-me-x}}$, which are shown in figure 3.
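The quadratic laws (116) follow from the moments of $W\sim\mathcal{N}(0,1)$ and can be verified directly; the short Monte Carlo sketch below is our own (sample size and seed are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(3)
W = rng.normal(size=1_000_000)     # standard Gaussian close values of the canonical process

for gamma in (0.0, 0.5, 1.0, 1.5):
    d_real = (gamma + W) ** 2
    # eq. (116): E[d_real] = 1 + gamma^2,  Var[d_real] = 2 + 4 gamma^2
    print(gamma, d_real.mean(), d_real.var())
```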
The continuous lines are respectively the expected value (116) of the open-close estimator $\hat{d}_{\text{real}}$ given by expression (114) and the fitted curves $$\text{E}[\hat{d}_{\text{est}}]=a_{\text{est}}\gamma^{2}+b_{\text{est}}$$ for the averaged values of the canonical estimators $\hat{d}_{\text{GK}}$ and $\hat{d}_{\text{t-me-x}}$. Their fitted parameters are $$a_{\text{GK}}\simeq 0.126,\qquad a_{\text{t-me-x}}\simeq 0.082,\qquad b_{\text{GK}}\simeq b_{\text{t-me-x}}\simeq 1.$$ Figure 4 shows the statistical average of the variances of the canonical estimators $\hat{d}_{\text{GK}}$ and $\hat{d}_{\text{t-me-x}}$. The two horizontal lines indicate the variance values (117). The continuous lines show the fitted curves $$\text{Var}[\hat{d}_{\text{est}}]=c_{\text{est}}\gamma^{2}+d_{\text{est}}$$ of the variances of the canonical estimators $\hat{d}_{\text{GK}}$ and $\hat{d}_{\text{t-me-x}}$. Their parameters are $$c_{\text{GK}}\simeq 0.089,\qquad c_{\text{t-me-x}}\simeq 0.0272,\qquad d_{\text{GK}}\simeq 0.271,\qquad d_{\text{t-me-x}}\simeq 0.170.$$ 5.4 Construction of general variance estimators We have introduced the canonical estimator $\hat{d}_{\text{t-me-x}}$ given by expression (109) that includes the information on the value of the time $t_{\text{last}}=\sup\{t_{L},t_{H}\}$ defined as the larger of the two times at which occur the high and low values of the canonical bridge.
It seems plausible that the canonical estimator $$\begin{array}[]{c}\displaystyle\hat{d}_{\text{tt-me-x}}(\Theta,\Upsilon,t_{\text{high}},t_{\text{low}};\gamma)=R^{2}s_{\text{tt-me-x}}(\Theta,\Upsilon,t_{\text{high}},t_{\text{low}};\gamma),\\ \displaystyle s_{\text{tt-me-x}}(\theta,\upsilon,t_{1},t_{2};\gamma)={1\over\mathcal{E}_{\text{tt-me-x}}(\gamma)}~{}{\alpha(\theta,\upsilon,t_{1},t_{2};2;\gamma)\over\alpha(\theta,\upsilon,t_{1},t_{2};4;\gamma)},\end{array}$$ (118) taking into account both high’s and low’s and their corresponding occurrence times ($t_{\text{high}}:~{}H=Y(t_{\text{high}}),t_{\text{low}}:~{}L=Y(t_{\text{low}})$), is even more efficient than the estimator (109). In expression (118), we have used the notation $$\alpha(\theta,\upsilon,t_{1},t_{2};\lambda,\gamma)=\int_{0}^{\infty}r^{\lambda+2}\varphi(r\cos\upsilon\cos\theta,r\cos\upsilon\sin\theta,r\sin\upsilon,t_{1},t_{2};\gamma)dr~{},$$ (119) where $\varphi(h,\ell,x,t_{1},t_{2};\gamma)$ is the joint pdf of the high-low-close-$t_{\text{high}}$-$t_{\text{low}}$ random variables. We have not explored the statistical properties of the estimator (118), because we have not yet derived the exact analytical expression of $\varphi(h,\ell,x,t_{1},t_{2};\gamma)$.
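One can nevertheless gauge the accuracy of statistically-constructed $\alpha$ functions on a case where the closed form is known: binning Monte Carlo samples of $(H,t_{\text{high}})$ of the canonical bridge reconstructs $\alpha(t;2)$ of (54), which by (65) equals $3t(1-t)$. The sketch below is our own one-dimensional analogue (helper, batch sizes and seed are arbitrary); the discrete grid biases the reconstruction a few percent downward.

```python
import numpy as np

rng = np.random.default_rng(4)

def bridge_high(n_paths, n_steps):
    """High H and its time t_high for canonical bridges on a grid (hypothetical helper)."""
    t = np.arange(1, n_steps + 1) / n_steps
    W = np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n_steps), (n_paths, n_steps)), axis=1)
    Y = (W - t * W[:, -1:])[:, :-1]        # interior grid points; Y(1) = 0 is dropped
    return Y.max(axis=1), t[:-1][Y.argmax(axis=1)]

# Accumulate samples in batches to keep memory modest:
H, t_high = zip(*(bridge_high(5_000, 500) for _ in range(8)))
H, t_high = np.concatenate(H), np.concatenate(t_high)

# alpha(t; 2) dt ~ (1/K) sum_k H_k^2 I(t_high,k in bin): statistical averaging over bins
bins = np.linspace(0.0, 1.0, 11)
idx = np.digitize(t_high, bins) - 1
alpha2 = np.array([np.sum(H[idx == j] ** 2) for j in range(10)]) / len(H) / 0.1

centers = 0.5 * (bins[:-1] + bins[1:])
print(np.c_[centers, alpha2, 3 * centers * (1 - centers)])  # MC vs closed form from (65)
```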
We can however construct the function $\alpha$ (119) using statistical averaging: $$\alpha(\theta,\upsilon,t_{1},t_{2};\lambda,\gamma)\cos\upsilon d\upsilon d\theta dt_{1}dt_{2}\simeq{1\over K}\sum_{k=1}^{K}R^{\lambda}_{k}\boldsymbol{I}\left(\Upsilon_{k},\Theta_{k},t_{\text{high},k},t_{\text{low},k}\right)~{}.$$ (120) In this expression, the values $\{\Upsilon_{k},\Theta_{k},t_{\text{high},k},t_{\text{low},k}\}$ are parameters of the numerically simulated $k$-th sample of the canonical Wiener process with drift $X(t,\gamma)$ (9), and $\boldsymbol{I}$ is the indicator of the set $$(\upsilon,\upsilon+d\upsilon)\times(\theta,\theta+d\theta)\times(t_{1},t_{1}+dt_{1})\times(t_{2},t_{2}+dt_{2})~{}.$$ We would like to point out that it is possible to construct the function $\alpha$ by an analogous statistical treatment for more general log-price processes that extend the Wiener process with drift to include more adequately the microstructure noise, the presence of heavy tails of returns and other stylized facts that can be found for various financial assets. In other words, relations such as (120) offer the possibility of constructing novel most efficient variance estimators of the form (118), extending the standard approach of econometricians looking for new constructions of efficient volatility estimators. The prerequisite is to be able to simulate numerically the underlying stochastic process representing a given financial asset dynamics. Then, the use of statistical averaging, similar to (120), will enable the construction of high-frequency realized estimators that use the most efficient estimators described above as elementary “bricks”. 6 Conclusion We have introduced a variety of integrated variance estimators, based on the open-high-low values of the bridges $Y(t,\mathbb{S}_{i})$ (22), and close values $X_{i}$ (4) of the underlying log-price process $X(t)$.
The main peculiarity of some of the introduced estimators is that they take into account not only the high and low values but additionally their occurrence times. This last piece of information leads to estimators that are even more efficient. We discussed quantitatively the statistical properties of the estimators for the class of Itô models for the log-price stochastic process. Our work opens the road to the construction of novel types of integrated variance estimators of log-price processes of real financial markets that take into account the microstructure noise, heavy power tails of returns, and chaotic jumps. Acknowledgements: We are grateful to Fulvio Corsi for valuable discussions of some aspects of this paper. Appendix A Basic properties of the canonical bridge A.1 Symmetry properties The canonical bridge $Y(t)$ (25) exhibits the following time reversibility and reflection properties $$Y(t)\sim Y(1-t),\qquad Y(t)\sim-Y(t).$$ (A.1) Some statistical consequences of these symmetry properties are as follows. Let $$H=\sup_{t\in(0,1)}Y(t),\qquad L=\inf_{t\in(0,1)}Y(t),$$ (A.2) be the high and low values of the canonical bridge, while $t_{\text{high}}$ and $t_{\text{low}}$ are their corresponding occurrence times: $$t_{\text{high}}:~{}H=Y(t_{\text{high}}),\qquad t_{\text{low}}:~{}L=Y(t_{\text{low}}).$$ (A.3) Consider the cumulative distribution function (cdf) $$\Phi_{\text{high}}(t)=\Pr\{t_{\text{high}}<t\}$$ of the occurrence time $t_{\text{high}}$ of the high value of the canonical bridge.
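A quick numerical check of the reflection property $Y(t)\sim-Y(t)$: the high $H$ and the negated low $-L$ must then have the same distribution. The sketch below simulates the canonical bridge as $Y(t)=W(t)-tW(1)$ on a grid (grid and sample sizes are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def bridge_high_low(n_steps=500, n_paths=10000):
    """Simulate canonical bridges Y(t) = W(t) - t W(1) on a uniform grid
    and return their high and low values."""
    dt = 1.0 / n_steps
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    W = np.concatenate([np.zeros((n_paths, 1)), W], axis=1)
    t = np.linspace(0.0, 1.0, n_steps + 1)
    Y = W - np.outer(W[:, -1], t)  # subtract t * W(1) path by path
    return Y.max(axis=1), Y.min(axis=1)

H, L = bridge_high_low()
# by the reflection symmetry, H and -L share all moments
```

Comparing the empirical moments of $H$ and $-L$ gives a cheap sanity check of the symmetry before using it in the derivations below.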
Due to the reversibility property (A.1), one has $$\Pr\{t_{\text{high}}<t\}=\Pr\{t_{\text{high}}>1-t\}\quad\Rightarrow\quad\Phi_{\text{high}}(t)+\Phi_{\text{high}}(1-t)=1.$$ (A.4) Accordingly, the pdf of $t_{\text{high}}$ $$\varphi_{\text{high}}(t):={d\Phi_{\text{high}}(t)\over dt}$$ exhibits the symmetry $$\varphi_{\text{high}}(t)=\varphi_{\text{high}}(1-t).$$ (A.5) Due to the reversibility property of the canonical bridge, the cdf $\Phi_{\text{low}}(t)$ of $t_{\text{low}}$ (A.3) coincides with the cdf of $t_{\text{high}}$: $$\Phi_{\text{low}}(t)=\Phi_{\text{high}}(t)\quad\Rightarrow\quad\varphi_{\text{low}}(t)=\varphi_{\text{high}}(t)=\varphi_{\text{high}}(1-t).$$ A.2 Interplay between bridge and Wiener processes We will need below the well-known identity in law for the canonical bridge $$Y(t)\sim\mathcal{Y}(t):=(1-t)W\left({t\over 1-t}\right).$$ Using the change of time variable $$\tau={t\over 1-t}\qquad\iff\qquad t={\tau\over 1+\tau}$$ and the scaling properties of the Wiener process, we can replace the compounded process $$\mathcal{Y}(t(\tau))=\mathcal{Y}\left({\tau\over 1+\tau}\right)$$ by the more convenient process, which is identical in law and reads $$\mathcal{Y}(t(\tau))\sim\mathcal{Z}(\tau)={1\over 1+\tau}W(\tau).$$ (A.6) In turn, the following identity in law holds $$Y(t)\sim\mathcal{Z}(\tau(t))=\mathcal{Z}\left({t\over 1-t}\right).$$ (A.7) Appendix B Joint pdf of the high value and its occurrence time B.1 Reflection method Let us consider the function $f(\omega;\tau,h)$ such that $$\Pr\{W(\tau)\in(\omega,\omega+d\omega)\cap W(\tau^{\prime})<h(1+\tau^{\prime}):\tau^{\prime}\in(0,\tau)\}=f(\omega;\tau,h)d\omega.$$ (B.1) This function $f(\omega;\tau,h)$ satisfies the following diffusion equation $${\partial f\over\partial\tau}={1\over 2}{\partial^{2}f\over\partial\omega^{2}}$$ (B.2) with initial and absorbing boundary conditions $$f(\omega;\tau=0,h)=\delta(\omega),\qquad f(\omega=h+h\tau;\tau,h)=0.$$ (B.3) We solve the initial-boundary problem
(B.2) with (B.3) using the reflection method, which amounts to searching for a solution of the form $$f(\omega;\tau,h)={1\over\sqrt{2\pi\tau}}\left[\exp\left(-{\omega^{2}\over 2% \tau}\right)-A\exp\left(-{(\omega-2h)^{2}\over 2\tau}\right)\right],$$ where the factor $A$ is defined from the absorbing boundary condition (B.3), i.e. $$\exp\left(-{(h+h\tau)^{2}\over 2\tau}\right)=A\exp\left(-{(h-h\tau)^{2}\over 2% \tau}\right)\quad\Rightarrow\quad A=e^{-2h^{2}}.$$ We thus obtain $$f(\omega;\tau,h)={1\over\sqrt{2\pi\tau}}\left[\exp\left(-{\omega^{2}\over 2% \tau}\right)-\exp\left(-2h^{2}-{(\omega-2h)^{2}\over 2\tau}\right)\right].$$ (B.4) B.2 Pdf of the maximal value of the canonical bridge In view of (B.1) and (A.6), the joint pdf of $W(\tau)$ and of the high value $$\mathcal{H}(\tau)=\sup_{\tau^{\prime}\in(0,\tau)}\mathcal{Z}(\tau^{\prime})$$ (B.5) of the stochastic process $\mathcal{Z}(\tau^{\prime})$ within the interval $\tau^{\prime}\in(0,\tau)$ is equal to $$\mathcal{Q}(\omega,h;\tau)={\partial f(\omega;\tau,h)\over\partial h}.$$ Substituting in the above equation the expression (B.4) yields $$\begin{array}[]{c}\displaystyle\mathcal{Q}(\omega,h;\tau)={1\over\tau}\sqrt{{2% \over\pi\tau}}(2h(1+\tau)-\omega)\exp\left[-2h^{2}-{(\omega-2h)^{2}\over 2\tau% }\right],\\ \displaystyle\omega<h(1+\tau),\qquad h>0.\end{array}$$ (B.6) In particular, the pdf of the high value $\mathcal{H}(\tau)$ (B.5) $$\mathcal{Q}(h;\tau)=\int_{-\infty}^{h(1+\tau)}\mathcal{Q}(\omega,h,\tau)d\omega$$ is equal to $$\mathcal{Q}(h;\tau)=\sqrt{{2\over\pi\tau}}\exp\left(-{h^{2}(1+\tau)^{2}\over 2% \tau}\right)+2he^{-2h^{2}}\text{erfc}\left({h(1-\tau)\over\sqrt{2\tau}}\right).$$ Using the identity in law (A.7), the pdf $Q_{\text{high}}(h;t)$ of the high value $$H(t)=\sup_{t^{\prime}\in(0,t)}Y(t^{\prime})$$ is equal to $$Q_{\text{high}}(h;t)=\mathcal{Q}\left(h;{t\over 1-t}\right).$$ (B.7) In particular, the pdf’s of the high values $\mathcal{H}$ (B.5) and $H$ (A.2) are the same and equal to 
$$\varphi_{\text{high}}(h)=\lim_{\tau\to\infty}\mathcal{Q}(h;\tau)=4he^{-2h^{2}}.$$ (B.8) B.3 Pdf of the high value of the bridge and of its occurrence time In order to derive the joint pdf of the maximal value $H$ (A.2) and of the occurrence time $t_{\text{high}}$ (A.3), we first consider the related joint pdf of the high value $\mathcal{H}$ (B.5) of the auxiliary process $\mathcal{Z}(\tau)$ (A.6) and of its occurrence time $\tau_{\text{high}}:~{}\mathcal{H}=\mathcal{Z}(\tau_{\text{high}})$. The function $F(h,\tau)$ that defines the probability $$F(h,\tau)dh=\Pr\{\mathcal{H}\in(h,h+dh),\tau_{\text{high}}<\tau\}$$ is given by $$F(h,\tau)=\int_{-\infty}^{h(1+\tau)}\mathcal{Q}(\omega,h;\tau)P(\omega,h,\tau)d\omega,$$ (B.9) where $\mathcal{Q}(\omega,h;\tau)$ is the joint pdf of $W(\tau)$ and $\mathcal{H}(\tau)$, given by equality (B.6), while $$\begin{array}[]{c}\displaystyle P(\omega,h,\tau)=\lim_{\theta\to\infty}P(\omega,h,\tau,\theta),\\ \displaystyle P(\omega,h,\tau,\theta)=\Pr\{W(\tau^{\prime}|\tau,\omega)<h(1+\tau^{\prime}):\tau^{\prime}\in(\tau,\tau+\theta)\}.\end{array}$$ (B.10) Here, $W(\tau^{\prime}|\tau,\omega)$ is the conditioned Wiener process that takes the value $\omega$ at $\tau^{\prime}=\tau$. Due to the identity in law (A.6), $P(\omega,h,\tau)$ is equal to the probability that the following inequality holds $$\mathcal{Z}(\tau^{\prime}|\tau,\omega)<h,\qquad\tau^{\prime}\in(\tau,\infty),$$ where $\mathcal{Z}(\tau^{\prime}|\tau,\omega)$ is the conditioned stochastic process $\mathcal{Z}(\tau^{\prime})$, which is equal to $\omega/(1+\tau)$ at $\tau^{\prime}=\tau$.
The probability $P(\omega,h,\tau,\theta)$ (B.10) is given by $$P(\omega,h,\tau,\theta)=\int_{-\infty}^{h(1+\tau+\theta)}f(x;\omega,h,\tau,\theta)dx,$$ (B.11) where the pdf $f(x;\omega,h,\tau,\theta)$ satisfies the initial-boundary value problem $$\begin{array}[]{c}\displaystyle{\partial f\over\partial\theta}={1\over 2}{\partial^{2}f\over\partial x^{2}},\\ \displaystyle f(x;\omega,h,\tau,\theta=0)=\delta(x-\omega),\quad f(x=h(1+\tau+\theta);\omega,h,\tau,\theta)=0.\end{array}$$ Its solution, obtained by the reflection method, is $$\begin{array}[]{c}\displaystyle f(x;\omega,h,\tau,\theta)={1\over\sqrt{2\pi\theta}}\bigg{[}\exp\left(-{(x-\omega)^{2}\over 2\theta}\right)-\\ \displaystyle\exp\left(-2h(h(1+\tau)-\omega)-{(x+\omega-2h(1+\tau))^{2}\over 2\theta}\right)\bigg{]}.\end{array}$$ Substituting this last expression into (B.11) yields $$\begin{array}[]{c}P(\omega,h,\tau,\theta)=\\ \displaystyle{1\over 2}\left[\text{erfc}\left({\omega-h(1+\tau+\theta)\over\sqrt{2\theta}}\right)-e^{-2h(h(1+\tau)-\omega)}\text{erfc}\left({h(1+\tau-\theta)-\omega\over\sqrt{2\theta}}\right)\right].\end{array}$$ In particular, in the limiting case $\theta\to\infty$, one has $$P(\omega,h,\tau)=1-e^{-2h(h(1+\tau)-\omega)}.$$ (B.12) Substituting $\mathcal{Q}(\omega,h;\tau)$ (B.6) and $P(\omega,h,\tau)$ (B.12) into (B.9), after integration, we obtain $$F(h,\tau)=2he^{-2h^{2}}\text{erfc}\left({h(1-\tau)\over\sqrt{2\tau}}\right).$$ (B.13) Consider now the probability $$\Phi_{\text{high}}(h,t)dh=\Pr\{H\in(h,h+dh),t_{\text{high}}<t\}.$$ Due to the identity in law (A.7), $\Phi_{\text{high}}(h,t)$ is equal to $$\Phi_{\text{high}}(h,t)=F\left(h,{t\over 1-t}\right)=2he^{-2h^{2}}\text{erfc}\left({h(1-2t)\over\sqrt{2t(1-t)}}\right).$$ (B.14) The integration over $h\in(0,\infty)$ gives the cumulative distribution function (cdf) of the random occurrence time $t_{\text{high}}$ (A.3): $$\Phi_{\text{high}}(t)=\Pr\{t_{\text{high}}<t\}=\int_{0}^{\infty}\Phi_{\text{high}}(h,t)dh=t,\qquad t\in(0,1).$$ This
means that the occurrence time $t_{\text{high}}$ of the high value of the canonical bridge is uniformly distributed. The above cdf satisfies the symmetry property (A.4), and the corresponding pdf $\varphi_{\text{high}}(t)=1$ obviously satisfies the symmetry property (A.5). The sought joint pdf of the high value $H$ of the canonical bridge $Y(t)$ and of its corresponding occurrence time $t_{\text{high}}$ is $$\varphi_{\text{high}}(h,t)={\partial\Phi_{\text{high}}(h,t)\over\partial t}.$$ (B.15) Substituting here $\Phi_{\text{high}}(h,t)$ (B.14) yields $$\varphi_{\text{high}}(h,t)=\sqrt{2\over\pi}{h^{2}\over\sqrt{t^{3}(1-t)^{3}}}\exp\left(-{h^{2}\over 2t(1-t)}\right).$$ (B.16) Appendix C Statistics of the high, low and occurrence time of the last extremum of the canonical bridge C.1 Statistical description of the joint pdf of the high, low and occurrence time of the last extremum The occurrence times of the first and last absolute extrema (A.3) of the canonical bridge $Y(t)$ are formally defined as $$t_{\text{first}}=\inf\{t_{L},t_{H}\},\qquad t_{\text{last}}=\sup\{t_{L},t_{H}\}~{}.$$ (C.1) The joint pdf of the high $H$ and low $L$ (A.2) together with the cdf of the occurrence time $t_{\text{last}}$ is given by $$\Phi_{\text{last}}(h,\ell,t)dhd\ell=\Pr\{H\in(h,h+dh)\cap L\in(\ell,\ell+d\ell)\cap t_{\text{last}}<t\}~{}.$$ (C.2) We derive the function $\Phi_{\text{last}}(h,\ell,t)$ by using a natural generalization of the reasoning presented in Appendix B that led to the joint pdf $\Phi_{\text{high}}(h,t)$ (B.14) of the high value $H$ and of the cdf of the occurrence time $t_{\text{high}}$.
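The two closed-form results obtained above, the uniformity of $t_{\text{high}}$ and, by integrating (B.8), the moments $\text{E}[H]=\sqrt{\pi/8}$ and $\text{E}[H^{2}]=1/2$, can be checked by direct simulation of the canonical bridge. A hedged sketch (discretization biases the grid maximum slightly downward, hence the loose tolerances; all simulation parameters are our own choices):

```python
import numpy as np

rng = np.random.default_rng(7)

def bridge_high_stats(n_steps=500, n_paths=5000):
    """Return the high value H and its occurrence time t_high for
    simulated canonical bridges Y(t) = W(t) - t W(1)."""
    dt = 1.0 / n_steps
    W = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    W = np.concatenate([np.zeros((n_paths, 1)), W], axis=1)
    t = np.linspace(0.0, 1.0, n_steps + 1)
    Y = W - np.outer(W[:, -1], t)
    return Y.max(axis=1), Y.argmax(axis=1) * dt

H, t_high = bridge_high_stats()
# E[H] = sqrt(pi/8) and E[H^2] = 1/2 follow from the pdf (B.8);
# t_high should be (approximately) Uniform(0, 1)
```

The empirical histogram of `t_high` is flat, in agreement with the uniformity derived above.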
Namely, we calculate first the probability $$F(h,\ell,\tau)dhd\ell=\Pr\{\mathcal{H}\in(h,h+dh),\mathcal{L}\in(\ell,\ell+d\ell),\tau_{\text{last}}<\tau\},$$ (C.3) where $$\begin{array}[]{c}\displaystyle\mathcal{H}=\sup_{\tau\in(0,\infty)}\mathcal{Z}(\tau),\qquad\mathcal{L}=\inf_{\tau\in(0,\infty)}\mathcal{Z}(\tau),\\ \tau_{\text{last}}=\sup\{\tau_{\text{low}},\tau_{\text{high}}\},\quad\tau_{\text{low}}:~{}\mathcal{L}=\mathcal{Z}(\tau_{\text{low}}),\quad\tau_{\text{high}}:~{}\mathcal{H}=\mathcal{Z}(\tau_{\text{high}}).\end{array}$$ Analogously to (B.9), $F(h,\ell,\tau)$ is equal to $$F(h,\ell,\tau)=\int_{\ell(1+\tau)}^{h(1+\tau)}\mathcal{Q}(\omega,h,\ell,\tau)P(\omega,h,\ell,\tau)d\omega,$$ (C.4) where $$\mathcal{Q}(\omega,h,\ell,\tau)=-{\partial^{2}f(\omega;h,\ell,\tau)\over\partial h\partial\ell}$$ (C.5) and the pdf $f(\omega;h,\ell,\tau)$ satisfies the initial-boundary problem $$\begin{array}[]{c}\displaystyle{\partial f\over\partial\tau}={1\over 2}{\partial^{2}f\over\partial\omega^{2}},\qquad f(\omega;h,\ell,\tau=0)=\delta(\omega),\\ \displaystyle f(\omega=h(1+\tau);h,\ell,\tau)=0,\quad f(\omega=\ell(1+\tau);h,\ell,\tau)=0,\quad\tau>0.\end{array}$$ (C.6) Similarly to $P(\omega,h,\tau)$ (B.10), the probability $P(\omega,h,\ell,\tau)$ is given by $$\begin{array}[]{c}\displaystyle P(\omega,h,\ell,\tau)=\lim_{\theta\to\infty}P(\omega,h,\ell,\tau,\theta),\\ \displaystyle P(\omega,h,\ell,\tau,\theta)=\Pr\{\ell(1+\tau^{\prime})<W(\tau^{\prime}|\tau,\omega)<h(1+\tau^{\prime}):\tau^{\prime}\in(\tau,\tau+\theta)\}.\end{array}$$ Analogously to (B.11), the last probability $P(\omega,h,\ell,\tau,\theta)$ is equal to $$P(\omega,h,\ell,\tau,\theta)=\int_{\ell(1+\tau+\theta)}^{h(1+\tau+\theta)}f(x;\omega,h,\ell,\tau,\theta)dx,$$ (C.7) where $f(x;\omega,h,\ell,\tau,\theta)$ is the solution of the initial-boundary problem $$\begin{array}[]{c}\displaystyle{\partial f\over\partial\theta}={1\over 2}{\partial^{2}f\over\partial x^{2}},\quad
f(x;\omega,h,\ell,\tau,\theta=0)=\delta(x-\omega),\\ \displaystyle f(x=h(1+\tau+\theta);\omega,h,\ell,\tau,\theta)=0,\quad f(x=\ell(1+\tau+\theta);\omega,h,\ell,\tau,\theta)=0.\end{array}$$ (C.8) Knowing the function $F(h,\ell,\tau)$ defined by equality (C.3), one can find the sought function $\Phi_{\text{last}}(h,\ell,t)$ (C.2) using the following relation $$\Phi_{\text{last}}(h,\ell,t)=F\left(h,\ell,{t\over 1-t}\right),$$ (C.9) which is analogous to (B.14). In turn, one can find the joint pdf of the high $H$, low $L$ values (A.2) and occurrence time of the last absolute extremum $t_{\text{last}}$ (C.1) of the canonical bridge $Y(t)$ using, analogously to (B.15), the relation $$\varphi_{\text{last}}(h,\ell,t)={\partial\Phi_{\text{last}}(h,\ell,t)\over\partial t}.$$ (C.10) C.2 Solutions of boundary-value problems Using the initial-boundary problem (C.6) with the reflection method, we obtain $$\begin{array}[]{c}\displaystyle f(\omega;h,\ell,\tau)=\sum_{m=-\infty}^{\infty}\big{[}e^{-2(h-\ell)^{2}m^{2}}g(\omega+2(h-\ell)m;\tau)-\\ \displaystyle e^{-2((h-\ell)m+h)^{2}}g(\omega-2(h+(h-\ell)m);\tau)\big{]},\end{array}$$ (C.11) where $$g(\omega;\tau)={1\over\sqrt{2\pi\tau}}\exp\left(-{\omega^{2}\over 2\tau}\right).$$ In turn, the solution of the initial-boundary problem (C.8) is given by $$\begin{array}[]{c}\displaystyle f(x;\omega,h,\ell,\tau,\theta)=\sum_{m=-\infty}^{\infty}\bigg{[}e^{-2(h-\ell)^{2}m^{2}(1+\tau)+2\omega(h-\ell)m}\times\\ \displaystyle g(y-\omega+2m(h-\ell)(1+\tau);\theta)-\\ \displaystyle e^{-2((h-\ell)m+h)^{2}(1+\tau)+2\omega((h-\ell)m+h)}g(y+\omega-2((h-\ell)m+h)(1+\tau);\theta)\bigg{]}.\end{array}$$ (C.12) After substituting $f(\omega;h,\ell,\tau)$ (C.11) into (C.5), we obtain $$\begin{array}[]{c}\displaystyle\mathcal{Q}(\omega,h,\ell,\tau)={4\over\tau^{2}}\sum_{m=-\infty}^{\infty}m\bigg{[}me^{-2(h-\ell)^{2}m^{2}}\times\\ \displaystyle[(\omega+2m(h-\ell)(1+\tau))^{2}-\tau(1+\tau)]g(\omega+2m(h-\ell),\tau)-\\
\displaystyle(1+m)e^{-2(m(h-\ell)+h)^{2}}\times\\ \displaystyle[(\omega-2(m(h-\ell)+h)(1+\tau))^{2}-\tau(1+\tau)]g(\omega-2(m(h-\ell)+h),\tau)\bigg{]}.\end{array}$$ (C.13) Substituting $f(x;\omega,h,\ell,\tau,\theta)$ (C.12) into (C.7), and taking the limit $\theta\to\infty$, we obtain $$\begin{array}[]{c}\displaystyle P(\omega,h,\ell,\tau)=\\ \displaystyle\sum_{m=-\infty}^{\infty}\left[e^{-2(h-\ell)^{2}(1+\tau)m^{2}+2(h-\ell)m\omega}-e^{-2(h+(h-\ell)m)^{2}(1+\tau)+2(h+(h-\ell)m)\omega}\right].\end{array}$$ (C.14) After substituting $\mathcal{Q}(\omega,h,\ell,\tau)$ (C.13) and $P(\omega,h,\ell,\tau)$ (C.14) into (C.4), we obtain the explicit expression for $F(h,\ell,\tau)$. Substituting it into (C.9) and using relation (C.10), we obtain the pdf of the high $H$, low $L$ values and occurrence time $t_{\text{last}}$ of the last extremum under the form $$\begin{array}[]{c}\displaystyle\varphi_{\text{last}}(h,\ell,t)=\sum_{m=-\infty}^{\infty}\sum_{n=-\infty}^{\infty}\Big{(}m^{2}\Big{[}\\ \displaystyle g(h,t,2(h-\ell)m,2(h-\ell)n)-g(\ell,t,2(h-\ell)m,2(h-\ell)n)-\\ \displaystyle g(h,t,2(h-\ell)m,2(h+(h-\ell)n))+g(\ell,t,2(h-\ell)m,2(h+(h-\ell)n))\Big{]}-\\ \displaystyle m(m+1)\Big{[}g(h,t,-2(h+(h-\ell)m),2(h-\ell)n)-\\ \displaystyle g(\ell,t,-2(h+(h-\ell)m),2(h-\ell)n)-\\ \displaystyle g(h,t,-2(h+(h-\ell)m),2(h+(h-\ell)n))+\\ \displaystyle g(\ell,t,-2(h+(h-\ell)m),2(h+(h-\ell)n))\Big{]}\Big{)},\\ \displaystyle g(y,t,a,c)=-\sqrt{{2\over\pi(1-t)^{3}t^{7}}}\exp\left(-{(a+y)^{2}-(a+c)(a-c+2y)t\over 2t(1-t)}\right)\times\\ \displaystyle\left[(a+y)^{3}-(a+y)(3+(a+y)(a-c+2y))t+(3a-c+4y)t^{2}\right].\end{array}$$ (C.15) C.3 Function $\alpha_{\text{last}}(\theta,t;\lambda)$ Some of the most efficient estimators introduced in this paper are defined through the function $$\alpha_{\text{last}}(\theta,t;\lambda)=\int_{0}^{\infty}r^{\lambda+1}\varphi_{\text{last}}(r\cos\theta,r\sin\theta,t)dr,$$ (C.16) which is analogous to (73). The calculation of the integral (C.16)
yields $$\begin{array}[]{c}\displaystyle\alpha_{\text{last}}(\theta,t;\lambda)=-{1\over% \sqrt{8\pi(1-t)^{3}t^{7}\delta^{5+\lambda}}}~{}\Gamma\left({3+\lambda\over 2}% \right)\sum_{m,n=-\infty}^{\infty}\Big{(}m^{2}\times\\ \displaystyle\Big{[}\beta(co,t,2sc\cdot m,2sc\cdot n;\lambda)-\beta(si,t,2sc% \cdot m,2sc\cdot n;\lambda)-\\ \displaystyle\beta(co,t,2sc\cdot m,2(co+sc\cdot n);\lambda)+\\ \displaystyle\beta(si,t,2sc\cdot m,2(co+sc\cdot n);\lambda)\Big{]}-\\ \displaystyle m(m+1)\Big{[}\beta(co,t,-2(co+sc\cdot m),2sc\cdot n;\lambda)-\\ \displaystyle\beta(si,t,-2(si+sc\cdot m),2sc\cdot n;\lambda)-\\ \displaystyle\beta(co,t,-2(co+sc\cdot m),2(co+sc\cdot n);\lambda)+\\ \displaystyle\beta(si,t,-2(co+sc\cdot m),2(co+sc\cdot n);\lambda)\Big{]}\Big{)% },\end{array}$$ (C.17) where $$\begin{array}[]{c}\displaystyle co=\cos\theta,\qquad si=\sin\theta,\qquad sc=% co-si,\\ \displaystyle\beta(y,t,a,c;\lambda)=\Big{[}(a+y)^{2}[a+y-(a-c+2y)t](3+\lambda)% +\\ \displaystyle\delta t[(6a-2c+8y)t-6(a+y)]\Big{]},\\ \displaystyle\delta=\delta(y,t,a,c)={(a+y)^{2}-(a+c)(a-c+2y)t\over 2t(1-t)}.% \end{array}$$ C.4 Function $\alpha(\theta,\upsilon,t;\lambda)$ Consider the function $$\alpha(\theta,\upsilon,t;\lambda)=\int_{0}^{\infty}r^{\lambda+2}\varphi_{\text% {last}}(r\cos\upsilon\cos\theta,r\cos\upsilon\sin\theta,r\sin\upsilon,t;\gamma% =0)dr,$$ (C.18) that enters into the definition of the canonical estimator (109) in the case of zero drift $\gamma=0$. 
Using expression (112) for the pdf $\varphi_{\text{last}}(h,\ell,x,t;\gamma)$, we obtain after some calculation the following expression $$\begin{array}[]{c}\displaystyle\alpha(\theta,\upsilon,t;\lambda)={\Gamma\left({4+\lambda\over 2}\right)\over 4\pi\sqrt{\mathstrut t^{7}(1-t)^{3}}}\sum_{m,n=-\infty}^{\infty}\Big{(}m^{2}\times\\ \displaystyle\Big{[}\beta^{\prime}(x,co,t,sc\cdot m,sc\cdot n;\lambda)-\beta^{\prime}(x,si,t,sc\cdot m,sc\cdot n;\lambda)-\\ \displaystyle\beta^{\prime}(x,co,t,sc\cdot m,cc+sc\cdot n;\lambda)+\beta^{\prime}(x,si,t,sc\cdot m,cc+sc\cdot n;\lambda)\Big{]}-\\ \displaystyle m(m+1)\Big{[}\beta^{\prime}(x,co,t,-cc-sc\cdot m,sc\cdot n;\lambda)-\\ \displaystyle\beta^{\prime}(x,si,t,-cc-sc\cdot m,sc\cdot n;\lambda)-\\ \displaystyle\beta^{\prime}(x,co,t,-cc-sc\cdot m,cc+sc\cdot n;\lambda)+\\ \displaystyle\beta^{\prime}(x,si,t,-cc-sc\cdot m,cc+sc\cdot n;\lambda)\Big{]}\Big{)},\end{array}$$ (C.19) which is analogous to (C.17). Here, we have set $$\begin{array}[]{c}x=\sin\upsilon,\qquad co=\cos\theta\cos\upsilon,\qquad si=\sin\theta\cos\upsilon,\\ cc=2\cos\theta\cos\upsilon,\qquad sc=2(\cos\theta-\sin\theta)\cos\upsilon,\\ \displaystyle\beta^{\prime}(x,y,t,a,c,\lambda)=\big{[}r(4+\lambda)(a+y-(a-c+2y)t)+\\ \displaystyle\delta t((6a-2c+8y)t-6(a+y))\big{]}\delta^{-(6+\lambda)/2},\\ \displaystyle\delta={r-(a+c)(a-c+2y)t\over 2t(1-t)}+{x^{2}\over 2},\qquad r=(a+y)^{2}.\end{array}$$ References Aït-Sahalia, Y., P. A. Mykland, and L. Zhang (2005). How often to sample a continuous-time process in the presence of market microstructure noise. Review of Financial Studies 18, 351-416. Andersen, T. G., T. Bollerslev, F. X. Diebold and P. Labys (2003). Modeling and Forecasting Realized Volatility. Econometrica 71, 529-626. Garman, M. and M. J. Klass (1980). On the Estimation of Security Price Volatilities From Historical Data. Journal of Business 53, 67-78. Jeanblanc, M., M. Yor, and M. Chesney (2009). Mathematical Methods for Financial Markets.
Springer. Parkinson, M. (1980). The Extreme Value Method for Estimating the Variance of the Rate of Return. Journal of Business 53, 61-65. Saichev, A., D. Sornette, V. Filimonov and F. Corsi (2009). Homogeneous Volatility Bridge Estimators. ETH Zurich working paper, http://ssrn.com/abstract=1523225. Saichev, A., Y. Malevergne and D. Sornette (2010). Theory of Zipf’s Law and Beyond (Lecture Notes in Economics and Mathematical Systems), Springer. Zhang, L., Mykland, P. A. and Aït-Sahalia, Y. (2005). A tale of two time scales: determining integrated volatility with noisy high-frequency data. Journal of the American Statistical Association 100, 1394-1411. Fig. 1: 5000 realizations of the open-close estimator (114), of the G&K estimator (115), and of the most efficient estimator $\hat{d}_{\text{t-me-x}}$ (109). Fig. 2: Moving averages of the open-close (top panel), G&K (middle panel) and time-OHLC (109) (lower panel) estimators over respective window sizes of 30 samples for the top panel and 10 samples for the two other panels. As explained in the text, this moving average mimics the normalized estimators of the integrated variance in the case where all instantaneous variances are the same ($\sigma_{i}^{2}=\sigma^{2}=\text{const}$). Fig. 3: Top to bottom, $\gamma$-dependence of the expected values of the open-close $\hat{d}_{\text{real}}$ (114), G&K $\hat{d}_{\text{GK}}$ (115) and most efficient $\hat{d}_{\text{t-me-x}}$ (109) canonical estimators. The horizontal line is the expected value of the canonical estimators $\hat{d}_{\text{bPark}}$ and $\hat{d}_{\text{t-me}}$. Fig. 4: $\gamma$-dependence of the statistical average of the variances of the canonical estimators $\hat{d}_{\text{GK}}$ (upper open circles) and $\hat{d}_{\text{t-me-x}}$ (lower open circles). The two horizontal lines are the variances (117) of the canonical estimators $\hat{d}_{\text{bPark}}$ (top) and $\hat{d}_{\text{t-me}}$ (bottom), respectively.
Fast-reaction limit for Glauber-Kawasaki dynamics with two components Anna De Masi, Tadahisa Funaki, Errico Presutti and Maria Eulália Vares (December 2, 2020) Abstract We consider the Kawasaki dynamics of two types of particles under a killing effect on a $d$-dimensional square lattice. Particles move with possibly different jump rates depending on their types. The killing effect acts when particles of different types meet at the same site. We show the existence of a limit under the diffusive space-time scaling and suitably growing killing rate: segregation of distinct types of particles does occur, and the evolution of the interface between the two distinct species is governed by the two-phase Stefan problem. We apply the relative entropy method and combine it with some PDE techniques. 1 Introduction The study of the fast-reaction limit in reaction-diffusion systems goes back more than 20 years. The motivation of this study comes from population dynamics [2], [1], [8], mass-action kinetics chemistry [3] and others. Consider the system consisting of two types of species, say $A$ and $B$, and assume each of them moves by diffusion with rates $d_{1}$ and $d_{2}$, respectively. When distinct species meet, they kill each other with high rate $K$. This problem is formulated in PDE terminology as the system of equations for the densities $u_{1}(t,r)$ and $u_{2}(t,r)$ of species $A$ and $B$, respectively, written as (1.1) $$\partial_{t}u_{i}(t)=d_{i}\Delta u_{i}(t)-Ku_{1}(t)u_{2}(t),\quad i=1,2.$$ Several papers, including those cited above, studied the limit as $K\to\infty$ of the solutions $u_{i}(t,r)$ of the system (1.1) or its extensions, that is, the limit as the killing rate of distinct species gets large. This is called the fast-reaction limit. It is known that the segregation of the two species occurs in the limit and that the interface separating the two distinct species evolves according to the two-phase Stefan free boundary problem.
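The fast-reaction limit of (1.1) is easy to observe numerically in one space dimension: as $K$ grows, the overlap $u_{1}u_{2}$ is driven to zero and the two densities segregate. The following explicit finite-difference sketch is only illustrative (the grid, time step, diffusion rates and initial step profiles are our own choices, not taken from the paper):

```python
import numpy as np

def step(u1, u2, d1, d2, K, dx, dt):
    """One explicit Euler step of (1.1) on a 1-d periodic grid;
    stability requires roughly dt <= dx^2 / (2 max(d1, d2)) and K dt < 1."""
    lap = lambda u: (np.roll(u, 1) - 2.0 * u + np.roll(u, -1)) / dx**2
    r = K * u1 * u2  # killing term, identical for both species
    return u1 + dt * (d1 * lap(u1) - r), u2 + dt * (d2 * lap(u2) - r)

def run(K, n=100, T=0.02, d1=1.0, d2=0.5):
    """Evolve step profiles: species A on the left, B on the right."""
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    u1 = np.where(x < 0.5, 0.8, 0.0)
    u2 = np.where(x >= 0.5, 0.8, 0.0)
    dx = 1.0 / n
    dt = 0.2 * dx**2 / max(d1, d2)
    for _ in range(int(T / dt)):
        u1, u2 = step(u1, u2, d1, d2, K, dx, dt)
    return u1, u2

u1, u2 = run(K=1000.0)
# the max of u1*u2 shrinks as K grows, while sum(u1 - u2) is conserved
```

The conservation of $\int(u_{1}-u_{2})$ reflects that the killing removes equal amounts of both species, which is exactly the structure behind the Stefan formulation below.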
In the present paper, we formulate the problem at the original level of species, i.e., at the underlying microscopic level, and model it as a system with two distinct types of particles. Under a diffusive space-time scaling combined with the limit as $K\to\infty$ taken properly, we prove that the segregation of species occurs at the macroscopic level and derive the Stefan problem directly from our microscopic system. The proof is divided into two parts and given as a combination of the techniques of the hydrodynamic limit and the fast-reaction limit. In the first part, which is probabilistic, we consider the relative entropy of the real system with respect to the local equilibria, defined as product measures with means changing in space and time chosen according to the discretized hydrodynamic equation, which is a discrete version of (1.1). Then, we show that the relative entropy is of smaller order than the total volume of the system. This proves that the macroscopic density profile of the system is close to the solution of the discretized hydrodynamic equation. We take product measures as local equilibria, since those with constant means are global equilibria of the Kawasaki dynamics. In the second part of the paper, we apply PDE results to analyze the discrete equation and derive the Stefan problem from it. 1.1 Model Let ${\mathbb{T}}_{N}^{d}:=({\mathbb{Z}}/N{\mathbb{Z}})^{d}\equiv\{1,2,\ldots,N\}^{d}$ be the $d$-dimensional periodic square lattice of size $N$ and consider a system that consists of particles of two distinct types. The configuration space is $\mathcal{X}_{N}^{2}=\mathcal{X}_{N}\times\mathcal{X}_{N}$, where $\mathcal{X}_{N}=\{0,1\}^{{\mathbb{T}}_{N}^{d}}$. Its elements are denoted by $\tilde{\sigma}\equiv(\sigma_{1},\sigma_{2})=\big{(}\{\sigma_{1,x}\}_{x\in{\mathbb{T}}_{N}^{d}},\{\sigma_{2,x}\}_{x\in{\mathbb{T}}_{N}^{d}}\big{)}$.
The generator of the Kawasaki dynamics of particles of a single type is defined by $$\displaystyle(L_{0}^{1}f)(\sigma)$$ $$\displaystyle=\frac{1}{2}\sum_{x,y\in{\mathbb{T}}_{N}^{d}:|x-y|=1}\left\{f(% \sigma^{x,y})-f(\sigma)\right\},\quad\sigma=\{\sigma_{x}\}_{x\in{\mathbb{T}}_{% N}^{d}}\in\mathcal{X}_{N},$$ for functions $f\colon\mathcal{X}_{N}\to\mathbb{R}$, and where $\sigma^{x,y}\in\mathcal{X}_{N}$ is defined from $\sigma\in\mathcal{X}_{N}$ as $$(\sigma^{x,y})_{z}=\left\{\begin{array}[]{lll}\sigma_{y}&\hbox{if $z=x$,}\\ \sigma_{x}&\hbox{if $z=y$,}\\ \sigma_{z}&\hbox{\text{ otherwise}.}\\ \end{array}\right.$$ The generator of the two-component system is given by $L_{N}=N^{2}L_{0}+KL_{G}$ with $K=K(N)$, where $$\displaystyle(L_{0}f)(\sigma_{1},\sigma_{2})=d_{1}(L_{0}^{1}f(\cdot,\sigma_{2}% ))(\sigma_{1})+d_{2}(L_{0}^{1}f(\sigma_{1},\cdot))(\sigma_{2}),$$ $$\displaystyle(L_{G}f)(\sigma_{1},\sigma_{2})=\sum_{x\in{\mathbb{T}}_{N}^{d}}% \sigma_{1,x}\sigma_{2,x}\left\{f(\sigma_{1}^{x},\sigma_{2}^{x})-f(\sigma_{1},% \sigma_{2})\right\},$$ for functions $f\colon\mathcal{X}_{N}^{2}\to\mathbb{R}$, and $d_{1},d_{2}>0$. Here $\sigma^{x}\in\mathcal{X}_{N}$ is defined from $\sigma\in\mathcal{X}_{N}$ as $$\sigma^{x}_{z}=\left\{\begin{array}[]{ll}1-\sigma_{x}&\hbox{if $z=x$,}\\ \sigma_{z}&\hbox{if $z\neq x$.}\end{array}\right.$$ Let $\bar{\sigma}^{N}(t)=(\sigma_{1}^{N}(t),\sigma_{2}^{N}(t))\equiv(\sigma_{1,x}^{% N}(t),\sigma_{2,x}^{N}(t))_{x\in{\mathbb{T}}_{N}^{d}}$ be the Markov process on $\mathcal{X}_{N}^{2}$ generated by $L_{N}$. 
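To build intuition for the process generated by $L_{N}$, here is a deliberately crude one-dimensional caricature: random nearest-neighbour swaps for each species (mimicking $L_{0}$) plus annihilation at doubly occupied sites (mimicking $KL_{G}$). This is a discrete-time sketch with arbitrary parameters, not a faithful continuous-time (e.g. Gillespie-type) simulation of the generator:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(N=50, d1=1.0, d2=1.0, K=5.0, T=1.0, dt=0.01):
    """Crude caricature of the two-component dynamics on the 1-d torus:
    type-1 particles start on the left half, type-2 on the right."""
    s1 = (np.arange(N) < N // 2).astype(int)
    s2 = (np.arange(N) >= N // 2).astype(int)
    n_swaps = max(1, int(N**2 * dt))        # diffusive speed-up N^2
    for _ in range(int(T / dt)):
        for _ in range(n_swaps):
            x = rng.integers(N)
            y = (x + 1) % N
            if rng.random() < d1 / (d1 + d2):   # swap a type-1 bond
                s1[x], s1[y] = s1[y], s1[x]
            else:                               # swap a type-2 bond
                s2[x], s2[y] = s2[y], s2[x]
        # annihilate a fraction ~ K dt of doubly occupied sites
        kill = (s1 * s2 == 1) & (rng.random(N) < K * dt)
        s1[kill] = 0
        s2[kill] = 0
    return s1, s2
```

Since killing removes one particle of each type at once, the difference $\sum_{x}\sigma_{1,x}-\sum_{x}\sigma_{2,x}$ is conserved, which provides a useful sanity check for any such simulation.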
The macroscopic empirical measures on ${\mathbb{T}}^{d}:=({\mathbb{R}}/{\mathbb{Z}})^{d}\equiv[0,1)^{d}$ associated with a configuration $\bar{\sigma}=(\sigma_{1},\sigma_{2})\in\mathcal{X}_{N}^{2}$ are defined by $$\alpha_{i}^{N}(dr;\bar{\sigma})=\frac{1}{N^{d}}\sum_{x\in{\mathbb{T}}_{N}^{d}}\sigma_{i,x}\delta_{\frac{x}{N}}(dr),\quad r\in{\mathbb{T}}^{d},\;i=1,2.$$ The goal is to study the limit of the macroscopic empirical measures of the process $\bar{\sigma}^{N}(t)$ as $N\to\infty$, with properly scaled $K(N)$. 1.2 Main result We first summarize our assumptions on the initial distribution of $\bar{\sigma}^{N}(0)$. (A1) Let $u_{i}^{N}(0,\cdot)=\{u_{i}^{N}(0,x)\}_{x\in{\mathbb{T}}_{N}^{d}}$, $i=1,2$, be given and satisfy the two bounds $$e^{-c_{1}K}\leq u_{i}^{N}(0,x)\leq c_{2}\quad\text{ and }\quad|\nabla^{N}u_{i}^{N}(0,x)|\leq C_{0}K,$$ for every $i=1,2$, $x\in{\mathbb{T}}_{N}^{d}$ with $c_{1}>0,0<c_{2}<1,C_{0}>0$, where $\nabla^{N}$ is defined as (1.2) $$\nabla^{N}u(x)=\left\{N(u(x+e_{i})-u(x))\right\}_{i=1}^{d},$$ and $e_{i}\in{\mathbb{Z}}^{d}$ are the unit vectors in the $i$th positive direction. (A2) Let $\mu_{0}^{N}$ be the distribution of $\bar{\sigma}^{N}(0)$ on $\mathcal{X}_{N}^{2}$ and let $\nu_{0}^{N}=\nu_{u_{1}^{N}(0,\cdot),u_{2}^{N}(0,\cdot)}$ be the Bernoulli measure on $\mathcal{X}_{N}^{2}$ with means $u_{1}^{N}(0,\cdot),u_{2}^{N}(0,\cdot)$ given as above. We assume that the relative entropy defined in (2.1) satisfies $H(\mu_{0}^{N}|\nu_{0}^{N})=O(N^{d-\delta_{0}})$ as $N\to\infty$ with some $\delta_{0}>0$. (A3) We assume that $u_{i}^{N}(0,r),r\in{\mathbb{T}}^{d}$, defined from $u^{N}_{i}(0,x)$ through (4.1), converge to some $u_{i}(0,r)$ weakly in $L^{2}({\mathbb{T}}^{d})$ as $N\to\infty$, for $i=1,2$, respectively. (A4)${}_{\delta}$ $K=K(N)$ satisfies $1\leq K(N)\leq\delta(\log N)^{1/2}$ and $K(N)\to\infty$ as $N\to\infty$. Our main theorem is formulated as follows. Theorem 1.1.
We assume the four conditions (A1)-(A3), (A4)${}_{\delta}$ with $\delta>0$ chosen sufficiently small depending on $T>0$. Then, we have the following. (1) The macroscopic empirical measures $\alpha_{i}^{N}(t,dr):=\alpha_{i}^{N}(dr;\bar{\sigma}^{N}(t))$ of the process $\bar{\sigma}^{N}(t)$ converge to $u_{i}(t,r)dr$, respectively, for $i=1,2$, that is, $$\lim_{N\to\infty}P(|\langle\alpha_{i}^{N}(t),\varphi\rangle-\langle u_{i}(t),\varphi\rangle|>\varepsilon)=0,\quad i=1,2,$$ for every $\varepsilon>0$, $t\in[0,T]$ and $\varphi\in C^{\infty}({\mathbb{T}}^{d})$, and $u_{1}(t,r)u_{2}(t,r)=0$ a.e. $r$ holds, where $\langle\alpha,\varphi\rangle$ and $\langle u,\varphi\rangle$ stand for the integrals over ${\mathbb{T}}^{d}$. (2) $w(t,r):=u_{1}(t,r)-u_{2}(t,r)$ is the unique weak solution of (1.3) $$\left\{\begin{aligned} &\displaystyle\partial_{t}w=\Delta\mathcal{D}(w),\quad\text{ in }{\mathbb{T}}^{d},\\ &\displaystyle w(0,r)=u_{1}(0,r)-u_{2}(0,r),\end{aligned}\right.$$ where $\Delta$ is the Laplacian on ${\mathbb{T}}^{d}$, and $\mathcal{D}(s)=d_{1}s$ for $s\geq 0$ and $\mathcal{D}(s)=d_{2}s$ for $s<0$. The weak solution of (1.3) is defined as follows. Definition 1.1. We call $w=w(t,r)$ a weak solution of (1.3) if it satisfies (i) $w\in L^{\infty}([0,T]\times{\mathbb{T}}^{d})$ for every $T>0$; (ii) For all $T>0$ and $\psi(t,r)\in C^{1,2}([0,T]\times{\mathbb{T}}^{d})$ such that $\psi(T,r)=0$, we have $$\int_{0}^{T}\int_{{\mathbb{T}}^{d}}(w\partial_{t}\psi+\mathcal{D}(w)\Delta\psi)drdt=-\int_{{\mathbb{T}}^{d}}w_{0}\psi(0,r)dr,$$ where $w_{0}=w(0,\cdot)$. The uniqueness of the weak solution of (1.3) is shown in [1], Corollary 3.8.
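A simple way to visualize the weak solution of (1.3) in one dimension is an explicit finite-difference scheme applied to $\mathcal{D}(w)$. The sketch below is illustrative only (no convergence claim is made; the grid, rates and time horizon are our own choices), but it conserves $\int w\,dr$ and obeys a discrete maximum principle:

```python
import numpy as np

def D(w, d1, d2):
    """Piecewise-linear nonlinearity of (1.3): d1*s for s >= 0, d2*s for s < 0."""
    return np.where(w >= 0, d1 * w, d2 * w)

def stefan_1d(w0, d1=1.0, d2=0.5, T=0.02):
    """Explicit scheme w^{n+1} = w^n + dt * discrete_laplacian(D(w^n))
    for w_t = (D(w))_xx on the 1-d torus."""
    n = len(w0)
    dx = 1.0 / n
    dt = 0.2 * dx**2 / max(d1, d2)  # within the explicit stability limit
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(int(T / dt)):
        Dw = D(w, d1, d2)
        w = w + dt * (np.roll(Dw, 1) - 2.0 * Dw + np.roll(Dw, -1)) / dx**2
    return w
```

Starting from a profile that is positive on one half of the torus and negative on the other, the sign-change set plays the role of the free boundary $\Gamma(t)$ of the Stefan problem discussed next.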
As pointed out in [2], (1.3) is the weak formulation of the following two-phase Stefan problem for $u_{1}$ and $u_{2}$: (1.4) $$\left\{\begin{aligned} &\displaystyle\partial_{t}u_{1}=d_{1}\Delta u_{1},\quad\text{ on }D_{1}(t)=\{r\in{\mathbb{T}}^{d};u_{1}(t,r)>0,\;u_{2}(t,r)=0\},\\ &\displaystyle\partial_{t}u_{2}=d_{2}\Delta u_{2},\quad\text{ on }D_{2}(t)=\{r\in{\mathbb{T}}^{d};u_{1}(t,r)=0,\;u_{2}(t,r)>0\},\\ &\displaystyle u_{1}=u_{2}=0,\qquad\text{ on }\Gamma(t):=\partial D_{1}(t)=\partial D_{2}(t),\\ &\displaystyle d_{1}\partial_{n}u_{1}=-d_{2}\partial_{n}u_{2},\quad\text{ on }\Gamma(t),\end{aligned}\right.$$ where $n$ is the unit normal vector on $\Gamma(t)$ directed to $D_{1}(t)$. Indeed, if the system (1.4) has a smooth solution, that is, if $\Gamma(t)$ is smooth and $u_{i}(t,r),i=1,2,$ are smooth in $D_{i}(t)$ and continuous on ${\mathbb{T}}^{d}$, then it determines a weak solution. The proof of Theorem 1.1 is divided into two parts, as mentioned above. The main task is to show that the relative entropy of our system compared with the local equilibria defined through the discretized hydrodynamic equation (2.3) behaves as $o(N^{d})$; namely, the relative entropy per volume tends to $0$ as $N\to\infty$. This is formulated in Theorem 2.2 and shown in Sections 2 and 3. Once this is shown, one can prove that the macroscopic empirical measures $\alpha_{i}^{N}(t)$ are close to the solution of (2.3), see Section 4. In the last Section 5, we show that the solution of (2.3) converges to the weak solution of (1.3). A related model with instantaneous annihilation was studied by Funaki [5] and the same equation (1.3) was derived in the limit. Roughly speaking, $1\ll K\ll N$ in our model, while $1\ll N\ll K=\infty$ in [5]. Sasada [10] considered the model with non-instantaneous annihilation together with creation of two distinct types of particles.
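The microscopic dynamics behind $L_{N}=N^{2}L_{0}+KL_{G}$ can also be simulated directly. The sketch below assumes the standard form of the generator described above (rate-$d_{i}N^{2}$ Kawasaki exchanges across each bond for species $i$, and annihilation of a coexisting pair at a site at rate $K$); all parameter values are arbitrary illustrations, not taken from the text.

```python
import numpy as np

def simulate(N=40, K=5.0, d=(1.0, 1.0), T=0.02, seed=0):
    """Continuous-time sketch of the assumed dynamics on the 1D torus T_N:
    species-i occupancies swap across each bond at rate d_i * N^2 (Kawasaki
    part L_0), and a site holding one particle of each species loses the pair
    at rate K (reaction part L_G).  Uses uniformization: annihilation clocks
    ring at every site and are thinned to doubly occupied sites, which leaves
    the law of the process unchanged."""
    rng = np.random.default_rng(seed)
    s1 = rng.integers(0, 2, N)   # sigma_1
    s2 = rng.integers(0, 2, N)   # sigma_2
    rate = (d[0] + d[1]) * N**2 * N + K * N   # constant total event rate
    t = 0.0
    while True:
        t += rng.exponential(1.0 / rate)
        if t > T:
            return s1, s2
        u = rng.random() * rate
        x = rng.integers(N)
        if u < d[0] * N**2 * N:                 # species-1 exchange on bond {x, x+1}
            y = (x + 1) % N
            s1[x], s1[y] = s1[y], s1[x]
        elif u < (d[0] + d[1]) * N**2 * N:      # species-2 exchange on bond {x, x+1}
            y = (x + 1) % N
            s2[x], s2[y] = s2[y], s2[x]
        elif s1[x] == 1 and s2[x] == 1:         # annihilation attempt at site x
            s1[x] = 0
            s2[x] = 0

s1, s2 = simulate()
```

Note that exchanges conserve each species separately and annihilation removes one particle of each, so the difference of the two particle numbers is an invariant of the dynamics, consistent with $w=u_{1}-u_{2}$ solving a conservative equation in the limit.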
2 Relative entropy method The relative entropy of two probability measures $\mu$ and $\nu$ on $\mathcal{X}_{N}^{2}$ is defined as (2.1) $$H(\mu|\nu):=\int_{\mathcal{X}_{N}^{2}}\frac{d\mu}{d\nu}\log\frac{d\mu}{d\nu}\cdot d\nu.$$ For a probability measure $\nu$ on $\mathcal{X}_{N}^{2}$, the Dirichlet form $\mathcal{D}(f;\nu)$, $f$:$\mathcal{X}_{N}^{2}\to{\mathbb{R}}$, associated with the generator $L_{0}$ is defined as $$\mathcal{D}(f;\nu)=\frac{1}{4}\sum_{\begin{subarray}{c}x,y\in{\mathbb{T}}_{N}^{d}\\ |x-y|=1\end{subarray}}\int_{\mathcal{X}_{N}^{2}}\left[d_{1}\{f(\sigma_{1}^{x,y},\sigma_{2})-f(\sigma_{1},\sigma_{2})\}^{2}+d_{2}\{f(\sigma_{1},\sigma_{2}^{x,y})-f(\sigma_{1},\sigma_{2})\}^{2}\right]d\nu.$$ Let $\mu_{t}=\mu_{t}^{N}$ be the law of $(\sigma_{1}^{N}(t),\sigma_{2}^{N}(t))$ generated by $L_{N}=N^{2}L_{0}+KL_{G}$ on $\mathcal{X}_{N}^{2}$. We have the following estimate on the time derivative of the relative entropy. See [6], [7] for the proof. Proposition 2.1. For any probability measures $\{\nu_{t}\}$ and $m$ on $\mathcal{X}_{N}^{2}$, both with full supports in $\mathcal{X}_{N}^{2}$, we have (2.2) $$\frac{d}{dt}H(\mu_{t}|\nu_{t})\leq-2N^{2}\mathcal{D}\left(\sqrt{\frac{d\mu_{t}}{d\nu_{t}}};\nu_{t}\right)+\int_{\mathcal{X}_{N}^{2}}\left(L_{N}^{*,\nu_{t}}1-\partial_{t}\log\psi_{t}\right)d\mu_{t},$$ where $L_{N}^{*,\nu_{t}}$ is the adjoint of $L_{N}$ on $L^{2}(\nu_{t})$ and $\displaystyle{\psi_{t}:=\frac{d\nu_{t}}{dm}}$. This estimate was first used by Guo, Papanicolaou and Varadhan, taking $\nu_{t}$ to be a global equilibrium independent of $t$, and then by Yau, dropping the negative Dirichlet form term; see [6]. Jara and Menezes then introduced (2.2) as a combination of these two estimates, cf. [9]. We use (2.2) with the following Bernoulli measures $\nu_{t}$.
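Since the reference measures used throughout are Bernoulli product measures, it is worth noting that for such measures the relative entropy (2.1) factorizes into a sum over sites of scalar Bernoulli relative entropies. A quick numerical check of this factorization (the site means below are hypothetical):

```python
import itertools
import math

def bernoulli_H(p, q):
    # Relative entropy of Bernoulli(p) with respect to Bernoulli(q).
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def product_H(p, q):
    # For Bernoulli product measures, (2.1) factorizes into a sum over sites.
    return sum(bernoulli_H(pi, qi) for pi, qi in zip(p, q))

def direct_H(p, q):
    # Direct evaluation of (2.1): sum mu * log(mu/nu) over all 2^n configurations.
    h = 0.0
    for sigma in itertools.product([0, 1], repeat=len(p)):
        mu = math.prod(pi if s else 1 - pi for pi, s in zip(p, sigma))
        nu = math.prod(qi if s else 1 - qi for qi, s in zip(q, sigma))
        h += mu * math.log(mu / nu)
    return h

p = [0.2, 0.7, 0.5]   # hypothetical site means of mu
q = [0.4, 0.6, 0.5]   # hypothetical site means of nu
```

This additivity is what makes the bound $H(\mu_{0}^{N}|\nu_{0}^{N})=O(N^{d-\delta_{0}})$ in (A2) a genuinely sub-volume condition: a product measure at order-one distance per site would give $H=O(N^{d})$.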
Let $u_{i}^{N}(t)=\{u_{i}(t,x)=u_{i}^{N}(t,x)\}_{x\in{\mathbb{T}}_{N}^{d}}$, $i=1,2$, be the solution of the system of discretized hydrodynamic equations: (2.3) $$\partial_{t}u_{i}^{N}(t,x)=d_{i}\Delta^{N}u_{i}^{N}(t,x)-Ku_{1}^{N}(t,x)u_{2}^{N}(t,x),\qquad i=1,2,$$ where $\Delta^{N}=N^{2}\Delta$ and $$(\Delta u)(x)=\sum_{y\in{\mathbb{T}}_{N}^{d}:|y-x|=1}\big{(}u(y)-u(x)\big{)}.$$ Note that (2.3) is a discrete version of (1.1). We define $\nu_{t}=\nu_{u_{1}(t,\cdot),u_{2}(t,\cdot)}$, where, for $u_{i}(\cdot)=\{u_{i}(x)\}_{x\in{\mathbb{T}}_{N}^{d}},i=1,2$, we denote by $\nu_{u_{1}(\cdot),u_{2}(\cdot)}$ the Bernoulli measure on $\mathcal{X}_{N}^{2}$ such that $$\nu_{u_{1}(\cdot),u_{2}(\cdot)}(\sigma_{i,x}=1)=u_{i}(x),$$ for every $x\in{\mathbb{T}}_{N}^{d}$ and $i=1,2$. The main result in the probabilistic part is the following theorem. Theorem 2.2. Assume the initial distribution satisfies the hypotheses of Theorem 1.1. Then, we have $$H(\mu_{t}^{N}|\nu_{t}^{N})=o(N^{d}),\quad t\in[0,T],$$ as $N\to\infty$. The proof of this theorem needs some preliminary results proved in the following subsections. 2.1 Calculation of the second term in (2.2) We define the normalized variables $\omega_{i,x,t}$ by (2.4) $$\omega_{i,x,t}=\frac{\bar{\sigma}_{i,x}(t)}{\chi(u_{i}(t,x))},\qquad\bar{\sigma}_{i,x}(t)=\sigma_{i,x}(t)-u_{i}(t,x),\quad i=1,2,$$ where $\chi(u)=u(1-u)$ for $u\in(0,1)$. In this subsection we prove the following proposition. Proposition 2.3. (2.5) $$L_{N}^{*,\nu_{t}}1-\partial_{t}\log\psi_{t}=V_{1}(t)+V_{2}(t)+V(t)$$ with (2.6) $$\displaystyle V_{i}(t)=-\frac{d_{i}N^{2}}{2}\sum_{x,y\in{\mathbb{T}}_{N}^{d}:|x-y|=1}(u_{i}(t,y)-u_{i}(t,x))^{2}\omega_{i,x,t}\omega_{i,y,t},\qquad i=1,2,$$ (2.7) $$\displaystyle V(t)=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\big{(}u_{1}(t,x)+u_{2}(t,x)-1\big{)}u_{1}(t,x)u_{2}(t,x)\omega_{1,x,t}\omega_{2,x,t}.$$ Proof.
We first compute $L_{N}^{*,\nu_{t}}g$ for a generic function $g$ on $\mathcal{X}_{N}^{2}$, calling $\nu=\nu_{u_{1}(\cdot),u_{2}(\cdot)}$. For $L_{G}$ we have $$\int gL_{G}fd\nu=\sum_{\sigma_{1},\sigma_{2}}g(\sigma_{1},\sigma_{2})\sum_{x\in{\mathbb{T}}_{N}^{d}}\sigma_{1,x}\sigma_{2,x}\left\{f(\sigma_{1}^{x},\sigma_{2}^{x})-f(\sigma_{1},\sigma_{2})\right\}\nu(\sigma_{1},\sigma_{2}).$$ By making the change of variables $\eta_{1}=\sigma_{1}^{x}$ and $\eta_{2}=\sigma_{2}^{x}$, the sum containing $f(\sigma_{1}^{x},\sigma_{2}^{x})$ can be rewritten as $$\displaystyle\sum_{\eta_{1},\eta_{2}}g(\eta_{1}^{x},\eta_{2}^{x})\sum_{x\in{\mathbb{T}}_{N}^{d}}(1-\eta_{1,x})(1-\eta_{2,x})f(\eta_{1},\eta_{2})\nu(\eta_{1}^{x},\eta_{2}^{x}).$$ Next observe that, for $\eta_{1},\eta_{2}$ satisfying $\eta_{1,x}=\eta_{2,x}=0$, $$\nu(\eta_{1}^{x},\eta_{2}^{x})=\frac{u_{1}(x)u_{2}(x)}{(1-u_{1}(x))(1-u_{2}(x))}\nu(\eta_{1},\eta_{2}).$$ Thus $$\displaystyle L_{G}^{*,\nu}g(\sigma_{1},\sigma_{2})=\sum_{x\in{\mathbb{T}}_{N}^{d}}\Bigg\{\frac{u_{1}(x)u_{2}(x)}{(1-u_{1}(x))(1-u_{2}(x))}(1-\sigma_{1,x})(1-\sigma_{2,x})g(\sigma_{1}^{x},\sigma_{2}^{x})-\sigma_{1,x}\sigma_{2,x}g(\sigma_{1},\sigma_{2})\Bigg\}.$$ Using the above equality with $g\equiv 1$ and writing $\bar{\sigma}_{i,x}=\omega_{i,x,t}\chi(u_{i}(t,x))$ (recall (2.4); below we suppress $t$ from the notation), we get $$\displaystyle KL_{G}^{*,\nu_{u_{1}(\cdot),u_{2}(\cdot)}}1=-K\sum_{x\in{\mathbb{T}}_{N}^{d}}u_{1}(x)u_{2}(x)\big{(}\omega_{1,x}+\omega_{2,x}\big{)}+K\sum_{x\in{\mathbb{T}}_{N}^{d}}\big{(}u_{1}(x)+u_{2}(x)-1\big{)}u_{1}(x)u_{2}(x)\omega_{1,x}\omega_{2,x}.$$ For the Kawasaki part, from the computation in [6] or [7], we obtain $$\displaystyle L_{0}^{*,\nu}1=-\frac{d_{1}}{2}\sum_{\begin{subarray}{c}x,y\in{\mathbb{T}}_{N}^{d}\\ |x-y|=1\end{subarray}}(u_{1}(y)-u_{1}(x))^{2}\omega_{1,x}\omega_{1,y}+d_{1}\sum_{x\in{\mathbb{T}}_{N}^{d}}(\Delta u_{1})(x)\omega_{1,x}-\frac{d_{2}}{2}\sum_{\begin{subarray}{c}x,y\in{\mathbb{T}}_{N}^{d}\\ |x-y|=1\end{subarray}}(u_{2}(y)-u_{2}(x))^{2}\omega_{2,x}\omega_{2,y}+d_{2}\sum_{x\in{\mathbb{T}}_{N}^{d}}(\Delta u_{2})(x)\omega_{2,x}.$$ We next observe that $$\partial_{t}\log\psi_{t}(\eta)=\sum_{x\in{\mathbb{T}}_{N}^{d}}\partial_{t}u_{1}(t,x)\omega_{1,x,t}+\sum_{x\in{\mathbb{T}}_{N}^{d}}\partial_{t}u_{2}(t,x)\omega_{2,x,t};$$ this equality is proved similarly to [6] or [7]. By using (2.3), the linear terms in $\omega$ cancel and we finally obtain (2.5). ∎ 2.2 Estimates on the solution of (2.3) Let $(u_{1}(t),u_{2}(t))=(u_{1}^{N}(t,x),u_{2}^{N}(t,x))$ be the solution of the discretized hydrodynamic equation (2.3). We derive estimates on $(u_{1}(t),u_{2}(t))$ and their gradients. The first two lemmas, especially taking $c=c_{2}<1$ with $c_{2}$ in (A1), are useful for estimating $1/\chi(u_{i}^{N}(t,x))$, appearing in the definition of $\omega_{i,x,t}$, from above. Lemma 2.4. If the initial values satisfy $0\leq u_{i}^{N}(0,x)\leq c$ for every $x\in{\mathbb{T}}_{N}^{d}$ and $i=1,2$ with some $c>0$, we have $$0\leq u_{i}^{N}(t,x)\leq c,$$ for every $t\geq 0$, $x\in{\mathbb{T}}_{N}^{d}$ and $i=1,2$. Proof. One can apply the maximum principle in our discrete setting, cf. [1], Lemma 2.1. Also, an argument similar to the proof of the next lemma works. ∎ Lemma 2.5. If the initial values satisfy $0<u_{0}\leq u_{i}^{N}(0,x)\leq 1$ for every $x\in{\mathbb{T}}_{N}^{d}$ and $i=1,2$, we have $$u_{i}^{N}(t,x)\geq\underline{u}(t):=u_{0}e^{-Kt},$$ for every $t\geq 0$, $x\in{\mathbb{T}}_{N}^{d}$ and $i=1,2$. Proof.
From (2.3) and $u_{2}(t,x)\leq 1$, since $\underline{u}(t)$ satisfies $\partial_{t}\underline{u}(t)=-K\underline{u}(t)$, we have $$\displaystyle\partial_{t}\big{(}u_{1}(t,x)-\underline{u}(t)\big{)}=d_{1}\Delta^{N}\big{(}u_{1}(t,x)-\underline{u}(t)\big{)}-K\big{(}u_{1}(t,x)u_{2}(t,x)-\underline{u}(t)\big{)}\geq d_{1}\Delta^{N}\big{(}u_{1}(t,x)-\underline{u}(t)\big{)}-K\big{(}u_{1}(t,x)-\underline{u}(t)\big{)}.$$ Assume that $u_{1}(s,y)>\underline{u}(s)$ holds for $0\leq s<t$ and every $y\in{\mathbb{T}}_{N}^{d}$, and that at some $x$ and $t$, $u_{1}(t,x)=\underline{u}(t)$ holds. Then, $\Delta^{N}\big{(}u_{1}(t,x)-\underline{u}(t)\big{)}\geq 0$ and $-K\big{(}u_{1}(t,x)-\underline{u}(t)\big{)}=0$. Therefore, $\partial_{t}\big{(}u_{1}(t,x)-\underline{u}(t)\big{)}\geq 0$, so that $u_{1}(t,x)-\underline{u}(t)$ is non-decreasing at such a point and $u_{1}(t,x)$ cannot go below $\underline{u}(t)$. The same argument works for $u_{2}(t,x)$. ∎ Let $p^{N}(t,x,y)$ be the discrete heat kernel corresponding to $\Delta^{N}$ on ${\mathbb{T}}_{N}^{d}$. Then, we have the following estimate, which is global in $t$. Lemma 2.6. There exist $C,c>0$ such that $$|\nabla^{N}p^{N}(t,x,y)|\leq\frac{C}{\sqrt{t}}p^{N}(ct,x,y),\quad t>0,$$ where $\nabla^{N}$ is defined by (1.2). Proof. Let $p(t,x,y)$ be the heat kernel corresponding to the discrete Laplacian $\Delta$ on ${\mathbb{Z}}^{d}$. Then, we have the estimate $$|\nabla p(t,x,y)|\leq\frac{C}{\sqrt{1\vee t}}p(ct,x,y),\quad t>0,\;x,y\in{\mathbb{Z}}^{d},$$ with some constants $C,c>0$, independent of $t$ and $x,y$, where $\nabla=\nabla^{1}$. This should be well known, but we refer to [4], Theorem 1.1, (1.4), which discusses the general case with random coefficients; see also [11]. Then, since $$p^{N}(t,x,y)=\sum_{k\in(N{\mathbb{Z}})^{d}}p(N^{2}t,x,y+k),$$ the result follows. ∎ We have the following estimate, though it might not be the best possible one. Proposition 2.7.
The gradients of the solution of (2.3) are estimated as $$|\nabla^{N}u_{i}^{N}(t,x)|\leq K(C_{0}+C\sqrt{t}),\quad t>0,\;i=1,2,$$ if $|\nabla^{N}u_{i}^{N}(0,x)|\leq C_{0}K$ holds. Proof. From Duhamel’s formula, we have $$u_{i}^{N}(t,x)=\sum_{y\in{\mathbb{T}}_{N}^{d}}u_{i}^{N}(0,y)p^{N}(d_{i}t,x,y)-K\int_{0}^{t}ds\sum_{y\in{\mathbb{T}}_{N}^{d}}u_{1}^{N}(s,y)u_{2}^{N}(s,y)p^{N}(d_{i}(t-s),x,y).$$ By noting the symmetry of $p^{N}$ in $(x,y)$ and $0\leq u_{1}^{N}(s,x),u_{2}^{N}(s,x)\leq 1$, this shows $$|\nabla^{N}u_{i}^{N}(t,x)|\leq\sum_{y\in{\mathbb{T}}_{N}^{d}}|\nabla^{N}u_{i}^{N}(0,y)|p^{N}(d_{i}t,x,y)+K\int_{0}^{t}ds\sum_{y\in{\mathbb{T}}_{N}^{d}}|\nabla^{N}p^{N}(d_{i}(t-s),x,y)|.$$ Thus, from Lemma 2.6, we obtain the desired estimate. ∎ 2.3 Proof of Theorem 2.2 Notation: We simply denote $\mu_{t}=\mu_{t}^{N},\nu_{t}=\nu_{t}^{N}$ and set $f_{t}\equiv f_{t}^{N}:=\frac{d\mu_{t}^{N}}{d\nu_{t}^{N}}$. Recalling Proposition 2.3 and using the estimates of Subsection 2.2, in Section 3 we prove the following theorem. Theorem 2.8. For $\alpha$ and $\kappa>0$ small, there is $C_{\alpha,\kappa}>0$ so that (2.8) $$\int_{\mathcal{X}_{N}^{2}}V(t)d\mu_{t}\leq\alpha N^{2}\mathcal{D}(\sqrt{f_{t}};\nu_{t})+C_{\alpha,\kappa}KH(\mu_{t}|\nu_{t})+N^{d-1+\kappa},$$ and also (2.9) $$\int_{\mathcal{X}_{N}^{2}}[V_{1}(t)+V_{2}(t)]d\mu_{t}\leq\alpha N^{2}\mathcal{D}(\sqrt{f_{t}};\nu_{t})+C_{\alpha,\kappa}K^{2}H(\mu_{t}|\nu_{t})+N^{d-1+\kappa},$$ where the term $N^{d-1+\kappa}$ is replaced by $N^{\frac{1}{2}+\kappa}$ when $d=1$. By using Proposition 2.1, (2.5) and the above theorem, we obtain $$\frac{d}{dt}H(\mu_{t}|\nu_{t})\leq CK^{2}H(\mu_{t}|\nu_{t})+O(N^{d-\delta_{1}}),$$ with $0<\delta_{1}<1$. Here $\alpha\in(0,1)$ is chosen so that the positive Dirichlet form terms are absorbed by the negative Dirichlet form term in (2.2).
Thus, Gronwall’s inequality shows $$H(\mu_{t}|\nu_{t})\leq\left(H(\mu_{0}|\nu_{0})+tO(N^{d-\delta_{1}})\right)e^{% CK^{2}t}.$$ Noting $e^{CK^{2}t}\leq N^{Ct\delta^{2}}$ from $1\leq K=K(N)\leq\delta(\log N)^{1/2}$ and $H(\mu_{0}|\nu_{0})=O(N^{d-\delta_{0}})$ by the assumption, this concludes the proof of Theorem 2.2, if $\delta=\delta_{T}>0$ is small enough such that $CT\delta^{2}<\delta_{0}\wedge\delta_{1}$. ∎ 3 Proof of Theorem 2.8 We split the proof in two subsections. 3.1 Proof of (2.8) We omit the dependence on $t$ and define $$V:=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,x}\omega_{2,x},$$ where $\tilde{\omega}_{1,x}=\big{(}u_{1}(x)+u_{2}(x)-1\big{)}u_{1}(x)u_{2}(x)\omega_{% 1,x}$. The first step is to replace $V$ by its local sample average $V^{\ell}$ defined by (3.1) $$\displaystyle V^{\ell}:=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\overleftarrow{(\tilde% {\omega}_{1})}_{x,\ell}\overrightarrow{(\omega_{2})}_{x,\ell},$$ where $$\overrightarrow{g}_{x,\ell}:=\frac{1}{|\Lambda_{\ell}|}\sum_{y\in\Lambda_{\ell% }}g_{x+y},\qquad\overleftarrow{g}_{x,\ell}:=\frac{1}{|\Lambda_{\ell}|}\sum_{y% \in\Lambda_{\ell}}g_{x-y},$$ for a function $g=\{g_{x}(\sigma_{1},\sigma_{2})\}$ and $\Lambda_{\ell}=[0,\ell-1]^{d}\cap{\mathbb{Z}}^{d}$. Proposition 3.1. We assume the conditions of Theorem 2.2, in particular, we take $\delta>0$ sufficiently small. Let $\nu=\nu_{u_{1}(\cdot),u_{2}(\cdot)}$, $d\mu=fd\nu$ (recall we omit $t$) and we choose $\ell=N^{\frac{1}{d}-\kappa^{\prime}}$ with $\kappa^{\prime}(=\kappa/d)>0$ when $d\geq 2$ and $\ell=N^{\frac{1}{2}-\kappa}$ when $d=1$, with small $\kappa>0$. Then the cost of the replacement is estimated as (3.2) $$\displaystyle\int(V-V^{\ell})fd\nu\leq\alpha N^{2}\mathcal{D}(\sqrt{f};\nu)+C_% {\alpha,\kappa}\left(H(\mu|\nu)+N^{d-1+\kappa}\right),$$ for every $\alpha,\kappa>0$ with some $C_{\alpha,\kappa}>0$ when $d\geq 2$ and the last $N^{d-1+\kappa}$ is replaced by $N^{\frac{1}{2}+\kappa}$ when $d=1$. 
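The local sample averages entering $V^{\ell}$ in (3.1) are circular convolutions of the site data with the uniform kernel $p_{\ell}$, a fact exploited in the computations that follow. A short NumPy check on hypothetical data (the torus size, window $\ell$, and data are arbitrary):

```python
import numpy as np

def avg_right(g, ell):
    # forward local average over Lambda_ell: mean of g at x, x+1, ..., x+ell-1
    return np.mean([np.roll(g, -y) for y in range(ell)], axis=0)

def avg_left(g, ell):
    # backward local average: mean of g at x, x-1, ..., x-(ell-1)
    return np.mean([np.roll(g, y) for y in range(ell)], axis=0)

rng = np.random.default_rng(0)
g = rng.standard_normal(30)      # hypothetical site data on the torus T_30
ell = 5

# the backward average equals the circular convolution of g with the kernel p_ell
p = np.zeros(30)
p[:ell] = 1.0 / ell
conv = np.real(np.fft.ifft(np.fft.fft(g) * np.fft.fft(p)))
```

Since circular averaging preserves the total sum, replacing a field by its local average changes none of the conserved quantities; only the fluctuations are smoothed, which is exactly what makes the replacement cost in (3.2) controllable.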
The first tool to show this proposition is the flow lemma for the telescopic sum. We call $\Phi=\{\Phi(x,y)\}_{b=\{x,y\}\in G^{*}}$ a flow on a finite set $G$ connecting two probability measures $p$ and $q$ on $G$ if $\Phi(x,y)=-\Phi(y,x)$ holds for all $\{x,y\}\in G^{*}$ and $\sum_{z\in G}\Phi(x,z)=p(x)-q(x)$ holds for all $x\in G$, where $G^{*}$ is the set of all bonds in $G$. The following lemma is found in Appendix G of [9]; see also [6], [7]. Lemma 3.2. (Flow lemma) There exists a flow $\Phi^{\ell}$ on $\Lambda_{2\ell}:=\{0,1,\ldots,2\ell-1\}^{d}$ connecting $\delta_{0}$ and $q_{\ell}:=p_{\ell}*p_{\ell}$, $p_{\ell}(x)=\frac{1}{|\Lambda_{\ell}|}1_{\Lambda_{\ell}}(x)$, such that $$\sum_{x\in\Lambda_{2\ell-1}}\sum_{j=1}^{d}\Phi^{\ell}(x,x+e_{j})^{2}\leq C_{d}g_{d}(\ell),$$ where $e_{j}$ is the unit vector in the $j$th positive direction, and $g_{d}(\ell)=\ell$ when $d=1$, $\log\ell$ when $d=2$ and $1$ when $d\geq 3$. Remark 3.1. (1) When $d=1$, the flow $\Phi^{\ell}$ on $\Lambda_{\ell+1}=\{0,1,\ldots,\ell\}$ connecting $\delta_{0}$ and $p_{\ell}(x)=\frac{1}{\ell}1_{\{1,\ldots,\ell\}}(x)$ is given by $\Phi^{\ell}(x,x+1)=\frac{\ell-x}{\ell},0\leq x\leq\ell-1$. Indeed, the condition on $\Phi^{\ell}$ is $$\Phi^{\ell}(x,x+1)+\Phi^{\ell}(x,x-1)=\delta_{0}(x)-p_{\ell}(x),\quad x\in\Lambda_{\ell},$$ with $\Phi^{\ell}(\ell,\ell+1)=\Phi^{\ell}(0,-1)=0$. Or equivalently, recalling that $\Phi^{\ell}(x,x-1)=-\Phi^{\ell}(x-1,x)$ and setting $\tilde{\Phi}(x):=\Phi^{\ell}(x,x+1)$, the condition is $$\displaystyle\nabla\tilde{\Phi}(x)\left(=\tilde{\Phi}(x)-\tilde{\Phi}(x-1)\right)=-\frac{1}{\ell},\quad 1\leq x\leq\ell,\qquad\tilde{\Phi}(0)=1,\;\tilde{\Phi}(\ell)=0,$$ i.e., the gradient of $\tilde{\Phi}$ is constant, so that $\tilde{\Phi}$ is an affine function. This equation is easily solved and we obtain $\tilde{\Phi}(x)=\frac{\ell-x}{\ell}$. (2) In Lemma 3.2, we are concerned with $q_{\ell}$ instead of $p_{\ell}$.
When $d=1$, $$\displaystyle q_{\ell}(x)$$ $$\displaystyle=\sum_{y\in{\mathbb{T}}_{N}^{d}}p_{\ell}(x-y)p_{\ell}(y)=\frac{1}% {\ell^{2}}\sum_{1\leq x-y\leq\ell,1\leq y\leq\ell}1$$ $$\displaystyle=\frac{1}{\ell^{2}}\sharp\{y:1\leq y\leq\ell,x-\ell\leq y\leq x-1\}$$ $$\displaystyle=\left\{\begin{aligned} &\displaystyle\frac{x-1}{\ell^{2}}\quad(% \text{if }x-\ell\leq 1,\text{ i.e. }x\leq\ell+1),\\ &\displaystyle\frac{2\ell+1-x}{\ell^{2}}\quad(\text{if }x-\ell\geq 1,\text{ i.% e. }x\geq\ell+1),\end{aligned}\right.$$ i.e., $q_{\ell}$ is piecewise affine. Therefore, its integration $\Phi^{\ell}$ is piecewise quadratic. Note that $$\displaystyle(g*p_{\ell})(x)$$ $$\displaystyle=\sum_{y\in{\mathbb{T}}_{N}^{d}}g_{x-y}p_{\ell}(y)$$ $$\displaystyle=\frac{1}{|\Lambda_{\ell}|}\sum_{y\in\Lambda_{\ell}}g_{x-y}=% \overleftarrow{g}_{x,\ell},$$ and similarly $(g*\hat{p}_{\ell})(x)=\overrightarrow{g}_{x,\ell}$, where $\hat{p}_{\ell}(y):=p_{\ell}(-y)$. Therefore, $$\displaystyle V^{\ell}$$ $$\displaystyle=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\overleftarrow{(\tilde{\omega}_{% 1})}_{x,\ell}\overrightarrow{(\omega_{2})}_{x,\ell}$$ $$\displaystyle=K\sum_{y\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,y}(\omega_{2}*% q_{\ell})(y).$$ Accordingly, from Lemma 3.2 and $\Phi^{\ell}(y,y-e_{j})=-\Phi^{\ell}(y-e_{j},y)$, one can rewrite $$\displaystyle V-V^{\ell}$$ $$\displaystyle=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,x}\{\omega_{2,% x}-(\omega_{2}*q_{\ell})(x)\}$$ $$\displaystyle=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,x}\left\{% \omega_{2,x}-\sum_{y\in{\mathbb{T}}_{N}^{d}}\omega_{2,x-y}q_{\ell}(y)\right\}$$ $$\displaystyle=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,x}\sum_{y\in{% \mathbb{T}}_{N}^{d}}\omega_{2,x-y}\left\{\delta_{0}(y)-q_{\ell}(y)\right\}$$ $$\displaystyle=K\sum_{x\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,x}\sum_{y\in{% \mathbb{T}}_{N}^{d}}\omega_{2,x-y}\sum_{\pm}\sum_{j=1}^{d}\Phi^{\ell}(y,y\pm e% _{j})$$ 
$$\displaystyle=K\sum_{j=1}^{d}\sum_{x\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,x}\sum_{y\in{\mathbb{T}}_{N}^{d}}(\omega_{2,x-y}-\omega_{2,x-y-e_{j}})\Phi^{\ell}(y,y+e_{j})$$ $$\displaystyle=K\sum_{j=1}^{d}\sum_{x\in{\mathbb{T}}_{N}^{d}}\left(\sum_{y\in{\mathbb{T}}_{N}^{d}}\tilde{\omega}_{1,x+y+e_{j}}\Phi^{\ell}(y,y+e_{j})\right)\{\omega_{2,x+e_{j}}-\omega_{2,x}\}.$$ Thus, we have shown (3.3) $$V-V^{\ell}=K\sum_{j=1}^{d}\sum_{x\in{\mathbb{T}}_{N}^{d}}h_{x}^{\ell,j}(\omega_{2,x+e_{j}}-\omega_{2,x}),$$ where (3.4) $$h_{x}^{\ell,j}\equiv h_{x}^{\ell,j}(\sigma_{1})=\sum_{y\in\Lambda_{2\ell-1}}\tilde{\omega}_{1,x+y+e_{j}}\Phi^{\ell}(y,y+e_{j}).$$ Note that $h_{x}^{\ell,j}$ satisfies $h_{x}^{\ell,j}(\sigma_{1}^{x,x+e_{j}})=h_{x}^{\ell,j}(\sigma_{1})$. This property is useful for studying the first and second terms of (2.5). For the third term $V$ of (2.5), which concerns us now, we will use the property $h_{x}^{\ell,j}(\sigma_{2}^{x,x+e_{j}})=h_{x}^{\ell,j}(\sigma_{2})$, which is obvious since $h_{x}^{\ell,j}$ is a function of $\sigma_{1}$; see Lemma 3.4 below. Another lemma we use is the integration by parts formula under the Bernoulli measure $\nu_{u_{1}(\cdot),u_{2}(\cdot)}$ on $\mathcal{X}_{N}^{2}$ with a spatially dependent mean. We will apply this formula to the function $h=h_{x}^{\ell,j}$. The formula is stated for a general $h$, with an error term caused by the spatial dependence of $u_{2}(\cdot)$. Lemma 3.3. (Integration by parts) Let $\nu=\nu_{u_{1}(\cdot),u_{2}(\cdot)}$ and assume that $e^{-c_{1}K}\leq u_{2}(x),u_{2}(y)\leq c_{2}$ holds for $x,y\in{\mathbb{T}}_{N}^{d}:|x-y|=1$ with some $c_{1}>0,0<c_{2}<1$.
Then, for $h=h(\sigma_{1},\sigma_{2})$ and a probability density $f=f(\sigma_{1},\sigma_{2})$ with respect to $\nu$, we have $$\int h(\sigma_{2,y}-\sigma_{2,x})fd\nu=\int h(\sigma_{1},\sigma_{2}^{x,y})% \sigma_{2,x}\big{(}f(\sigma_{1},\sigma_{2}^{x,y})-f(\sigma_{1},\sigma_{2})\big% {)}d\nu+R_{1},$$ and the error term $R_{1}=R_{1,x,y}$ is bounded as $$|R_{1}|\leq Ce^{2c_{1}K}|\nabla_{x,y}^{1}u_{2}|\int|h(\sigma_{1},\sigma_{2})|% fd\nu+\|h-h(\sigma_{1},\sigma_{2}^{x,y})\|_{\infty},$$ with some $C=C_{c_{2}}>0$, where $\nabla_{x,y}^{1}u=u(x)-u(y)$. Proof. First we write $$\int h(\sigma_{2,y}-\sigma_{2,x})fd\nu=\sum_{\sigma_{1},\sigma_{2}}h(\sigma_{1% },\sigma_{2})(\sigma_{2,y}-\sigma_{2,x})f(\sigma_{1},\sigma_{2})\nu(\sigma_{1}% ,\sigma_{2}).$$ Then, by a change of variables $\zeta:=\sigma_{2}^{x,y}$ and writing $\zeta$ by $\sigma_{2}$ again, we have $$\displaystyle\sum_{\sigma_{2}}h(\sigma_{1},\sigma_{2})\sigma_{2,y}f(\sigma_{1}% ,\sigma_{2})\nu_{2}(\sigma_{2})=\sum_{\sigma_{2}}h(\sigma_{1},\sigma_{2}^{x,y}% )\sigma_{2,x}f(\sigma_{1},\sigma_{2}^{x,y})\nu_{2}(\sigma_{2}^{x,y}),$$ where $\nu_{2}=\nu_{u_{2}(\cdot)}$ is a probability measure on $\mathcal{X}_{N}$, recall $\nu=\nu_{u_{1}(\cdot)}\otimes\nu_{u_{2}(\cdot)}$. 
To replace the last $\nu_{2}(\sigma_{2}^{x,y})$ by $\nu_{2}(\sigma_{2})$, we observe $$\displaystyle\frac{\nu_{2}(\sigma^{x,y})}{\nu_{2}(\sigma)}$$ $$\displaystyle=1_{\{\sigma_{x}=1,\sigma_{y}=0\}}\frac{(1-u_{2}(x))u_{2}(y)}{u_{% 2}(x)(1-u_{2}(y))}+1_{\{\sigma_{x}=0,\sigma_{y}=1\}}\frac{u_{2}(x)(1-u_{2}(y))% }{(1-u_{2}(x))u_{2}(y)}+1_{\{\sigma_{x}=\sigma_{y}\}}$$ $$\displaystyle=1+r_{x,y}(\sigma),$$ with $$\displaystyle r_{x,y}(\sigma)=1_{\{\sigma_{x}=1,\sigma_{y}=0\}}\frac{u_{2}(y)-% u_{2}(x)}{u_{2}(x)(1-u_{2}(y))}+1_{\{\sigma_{x}=0,\sigma_{y}=1\}}\frac{u_{2}(x% )-u_{2}(y)}{(1-u_{2}(x))u_{2}(y)}.$$ By the condition on $u_{2}$, this error is bounded as $$|r_{xy}(\sigma)|\leq C_{0}e^{c_{1}K}|\nabla_{x,y}^{1}u_{2}|,\quad C_{0}=C_{c_{% 2}}>0.$$ These computations are summarized as $$\displaystyle\int h$$ $$\displaystyle(\sigma_{2,y}-\sigma_{2,x})fd\nu=\int h(\sigma_{1},\sigma_{2}^{x,% y})\sigma_{2,x}f(\sigma_{1},\sigma_{2}^{x,y})(1+r_{xy}(\sigma_{2}))d\nu-\int h% \sigma_{2,x}fd\nu$$ $$\displaystyle=$$ $$\displaystyle\int h(\sigma_{1},\sigma_{2}^{x,y})\sigma_{2,x}\big{(}f(\sigma_{1% },\sigma_{2}^{x,y})-f(\sigma_{1},\sigma_{2})\big{)}d\nu$$ $$\displaystyle+\int(h(\sigma_{1},\sigma_{2}^{x,y})-h(\sigma_{1},\sigma_{2}))% \sigma_{2,x}fd\nu+\int h(\sigma_{1},\sigma_{2}^{x,y})\sigma_{2,x}f(\sigma_{1},% \sigma_{2}^{x,y})r_{xy}(\sigma_{2})d\nu.$$ The second term is bounded by $\|h(\sigma_{1},\sigma_{2}^{x,y})-h(\sigma_{1},\sigma_{2})\|_{\infty}$, since $|\sigma_{2,x}|\leq 1$ and $\int fd\nu=1$. 
For the third term denoted by $R_{0}$, applying the change of variables again, we have $$\displaystyle|R_{0}|$$ $$\displaystyle=\left|\sum_{\sigma_{1},\sigma_{2}}h(\sigma_{1},\sigma_{2})\sigma% _{2,y}f(\sigma_{1},\sigma_{2})r_{xy}(\sigma_{2}^{x,y})\nu(\sigma_{1},\sigma_{2% }^{x,y})\right|$$ $$\displaystyle=\left|\sum_{\sigma_{1},\sigma_{2}}h(\sigma_{1},\sigma_{2})\sigma% _{2,y}f(\sigma_{1},\sigma_{2})r_{xy}(\sigma_{2}^{x,y})\big{(}1+r_{xy}(\sigma_{% 2})\big{)}\nu(\sigma_{1},\sigma_{2})\right|$$ $$\displaystyle\leq C_{0}e^{c_{1}K}|\nabla_{x,y}^{1}u_{2}|(1+C_{0}e^{c_{1}K}|% \nabla_{x,y}^{1}u_{2}|)\int|h(\sigma)|fd\nu$$ $$\displaystyle\leq Ce^{2c_{1}K}|\nabla_{x,y}^{1}u_{2}|\int|h(\sigma)|fd\nu,$$ since $|\sigma_{2,y}|\leq 1$ and $|\nabla_{x,y}^{1}u_{2}|\leq 2c_{2}$. This completes the proof. ∎ We apply Lemma 3.3 to $V-V^{\ell}$ given in (3.3). Note that $h_{x}^{\ell,j}(\sigma_{1})$ is invariant under the transform $\sigma_{2}\mapsto\sigma_{2}^{x,y}$. Since we have $\omega_{2,x}=\frac{\sigma_{2,x}-u_{2}(x)}{\chi(u_{2}(x))}$ in (3.3) instead of $\sigma_{2,x}$ in Lemma 3.3, we need to estimate the error caused by the $x$-dependence of $\omega_{2,x}$ through $u_{2}(x)$. Lemma 3.4. We assume that $\nu=\nu_{u_{1}(\cdot),u_{2}(\cdot)}$ satisfies the same condition as in Lemma 3.3. Then, we have (3.5) $$\int h_{x}^{\ell,j}(\omega_{2,x+e_{j}}-\omega_{2,x})fd\nu=\int h_{x}^{\ell,j}% \frac{\sigma_{2,x}}{\chi(u_{2}(x))}\big{(}f(\sigma_{1},\sigma_{2}^{x,x+e_{j}})% -f(\sigma_{1},\sigma_{2})\big{)}d\nu+R_{2},$$ and the error term $R_{2}=R_{2,x,j}$ is bounded as (3.6) $$|R_{2}|\leq Ce^{3c_{1}K}|\nabla_{x,x+e_{j}}^{1}u_{2}|\int|h_{x}^{\ell,j}(% \sigma_{1},\sigma_{2})|fd\nu,$$ with some $C=C_{c_{2}}>0$. Proof. 
By the definition of $\omega_{x}$, denoting $y=x+e_{j}$, we have $$\displaystyle\int h_{x}^{\ell,j}(\omega_{2,y}-\omega_{2,x})fd\nu$$ $$\displaystyle=\int h_{x}^{\ell,j}\left(\frac{\sigma_{2,y}}{\chi(u_{2}(y))}-% \frac{\sigma_{2,x}}{\chi(u_{2}(x))}\right)fd\nu$$ $$\displaystyle\qquad-\int h_{x}^{\ell,j}\left(\frac{u_{2}(y)}{\chi(u_{2}(y))}-% \frac{u_{2}(x)}{\chi(u_{2}(x))}\right)fd\nu$$ $$\displaystyle=:I_{1}-I_{2}.$$ For $I_{2}$, we have $$\displaystyle\left|\frac{u_{2}(y)}{\chi(u_{2}(y))}-\frac{u_{2}(x)}{\chi(u_{2}(% x))}\right|$$ $$\displaystyle\leq\frac{1}{\chi(u_{2}(x))\chi(u_{2}(y))}\{\chi(u_{2}(x))|u_{2}(% y)-u_{2}(x)|+|u_{2}(x)||\chi(u_{2}(x))-\chi(u_{2}(y))|\}$$ $$\displaystyle\leq Ce^{c_{1}K}|\nabla^{1}_{x,y}u_{2}|.$$ On the other hand, $I_{1}$ can be rewritten as $$\displaystyle I_{1}$$ $$\displaystyle=\int\frac{h_{x}^{\ell,j}}{\chi(u_{2}(x))}(\sigma_{2,y}-\sigma_{2% ,x})fd\nu+\int h_{x}^{\ell,j}\left(\frac{1}{\chi(u_{2}(y))}-\frac{1}{\chi(u_{2% }(x))}\right)\sigma_{2,y}fd\nu$$ $$\displaystyle=:I_{1,1}+I_{1,2}.$$ For $I_{1,1}$, recalling the invariance of $h_{x}^{\ell,j}$, one can apply Lemma 3.3 and obtain $$I_{1,1}=\frac{1}{\chi(u_{2}(x))}\int h_{x}^{\ell,j}\sigma_{2,x}\big{(}f(\sigma% _{1},\sigma_{2}^{x,y})-f(\sigma_{1},\sigma_{2})\big{)}d\nu+\frac{1}{\chi(u_{2}% (x))}R_{1}.$$ Finally for $I_{1,2}$, $$\left|\frac{1}{\chi(u_{2}(y))}-\frac{1}{\chi(u_{2}(x))}\right|=\frac{|\chi(u_{% 2}(x))-\chi(u_{2}(y))|}{\chi(u_{2}(x))\chi(u_{2}(y))}\leq Ce^{2c_{1}K}|\nabla_% {x,y}^{1}u_{2}|.$$ Therefore, we obtain the conclusion. ∎ We can estimate the first term in the right hand side of (3.5) with $y=x+e_{j}$ by the Dirichlet form and obtain Lemma 3.5. Let $\nu=\nu_{u_{1}(\cdot),u_{2}(\cdot)}$ be the Bernoulli measure satisfying the same condition as in Lemma 3.3. 
Then, for every $\beta>0$, we have $$\int h_{x}^{\ell,j}(\omega_{2,x+e_{j}}-\omega_{2,x})fd\nu\leq\beta\mathcal{D}_% {x,x+e_{j}}(\sqrt{f};\nu)+\frac{C}{\beta}e^{3c_{1}K}\int(h_{x}^{\ell,j})^{2}fd% \nu+R_{2,x,j},$$ where $\mathcal{D}_{x,x+e_{j}}(\sqrt{f};\nu)$ is a piece of $\mathcal{D}(\sqrt{f};\nu)$ defined on the bond $\{x,x+e_{j}\}$ and $R_{2,x,j}$ has a bound (3.6). Proof. For simplicity, we write $y$ for $x+e_{j}$. By decomposing $f(\sigma_{1},\sigma_{2}^{x,y})-f(\sigma_{1},\sigma_{2})=\big{(}\sqrt{f(\sigma_% {1},\sigma_{2}^{x,y})}+\sqrt{f(\sigma_{1},\sigma_{2})}\big{)}\big{(}\sqrt{f(% \sigma_{1},\sigma_{2}^{x,y})}-\sqrt{f(\sigma_{1},\sigma_{2})}\big{)}$, the first term in the right hand side of (3.5) can be estimated by $$\leq\beta\mathcal{D}_{x,y}(\sqrt{f};\nu)+\frac{C}{\beta\chi(u_{2}(x))^{2}}\int% (h_{x}^{\ell,j})^{2}\{f(\sigma_{1},\sigma_{2}^{x,y})+f(\sigma_{1},\sigma_{2})% \}d\nu.$$ The integral in the second term divided by $\chi(u_{2}(x))^{2}$ is equal to and bounded by $$\displaystyle\frac{1}{\chi(u_{2}(x))^{2}}$$ $$\displaystyle\int(h_{x}^{\ell,j})^{2}f(\sigma_{1},\sigma_{2})(1+r_{xy}(\sigma_% {2}))d\nu$$ $$\displaystyle\leq\frac{1+C_{0}e^{c_{1}K}|\nabla^{1}_{x,y}u_{2}|}{\chi(u_{2}(x)% )^{2}}\int(h_{x}^{\ell,j})^{2}fd\nu$$ $$\displaystyle\leq e^{2c_{1}K}(1+C_{0}e^{c_{1}K}|\nabla_{x,y}^{1}u_{2}|)\int(h_% {x}^{\ell,j})^{2}fd\nu.$$ This shows the conclusion by recalling $|\nabla_{x,y}^{1}u_{2}|\leq 2c_{2}$. ∎ Proof of Proposition 3.1. 
Recalling (3.3) and by Lemma 3.5 taking $\beta=\frac{\alpha N^{2}}{K}$ with $\alpha>0$ sufficiently small, we have $$\displaystyle\int$$ $$\displaystyle(V-V^{\ell})fd\nu=K\sum_{j=1}^{d}\sum_{x\in{\mathbb{T}}_{N}^{d}}% \int h_{x}^{\ell,j}(\omega_{2,x+e_{j}}-\omega_{2,x})fd\nu$$ $$\displaystyle\leq\alpha N^{2}\mathcal{D}(\sqrt{f};\nu)+\frac{CK^{2}}{\alpha N^% {2}}e^{3c_{1}K}\sum_{j=1}^{d}\sum_{x\in{\mathbb{T}}_{N}^{d}}\int(h_{x}^{\ell,j% })^{2}fd\nu+K\sum_{j=1}^{d}\sum_{x\in{\mathbb{T}}_{N}^{d}}R_{2,x,j}.$$ For $R_{2,x,j}$, since $|\nabla_{x,x+e_{j}}^{1}u_{2}|\leq\frac{CK}{N}$ from Proposition 2.7, by (3.6) estimating $|h_{x}^{\ell,j}|\leq 1+(h_{x}^{\ell,j})^{2}$, we have $$K|R_{2,x,j}|\leq\frac{CK^{2}}{N}e^{3c_{1}K}\int\left(1+(h_{x}^{\ell,j})^{2}% \right)fd\nu.$$ Thus, we obtain $$\displaystyle\int(V-V^{\ell})fd\nu\leq$$ $$\displaystyle\alpha N^{2}\mathcal{D}(\sqrt{f};\nu)+\frac{C_{\alpha}K^{2}}{N}e^% {3c_{1}K}\sum_{j=1}^{d}\sum_{x\in{\mathbb{T}}_{N}^{d}}\int(h_{x}^{\ell,j})^{2}% fd\nu+CK^{2}e^{3c_{1}K}N^{d-1}.$$ For the second term, we first decompose the sum $\sum_{x\in{\mathbb{T}}_{N}^{d}}$ as $\sum_{y\in\Lambda_{2\ell}}\sum_{z\in(4\ell){\mathbb{T}}_{N}^{d}}$ and apply the entropy inequality noting that the variables $\{h_{x}^{\ell,j}\}$ are $(2\ell-1)$-dependent: $$\displaystyle\sum_{x\in{\mathbb{T}}_{N}^{d}}\int(h_{x}^{\ell,j})^{2}fd\nu$$ $$\displaystyle\leq\frac{1}{\gamma}\sum_{y\in\Lambda_{2\ell}}\left(H(\mu|\nu)+% \log\int\exp\left\{\gamma\sum_{z\in(4\ell){\mathbb{T}}^{d}_{N}}(h_{z+y}^{\ell,% j})^{2}\right\}d\nu\right)$$ $$\displaystyle=\frac{1}{\gamma}(4\ell)^{d}\left(H(\mu|\nu)+\sum_{z\in(4\ell){% \mathbb{T}}^{d}_{N}}\log\int\exp\left\{\gamma(h_{z+y}^{\ell,j})^{2}\right\}d% \nu\right).$$ However, since $h_{x}^{\ell,j}$ is a weighted sum of independent random variables, by applying Lemma 3.6 (concentration inequality) stated below, we have $$\log\int e^{\gamma(h_{x}^{\ell,j})^{2}}d\nu\leq 2$$ for every $0<\gamma\leq\frac{C_{0}}{\sigma^{2}}$, where $C_{0}$ 
is a universal constant and $\sigma^{2}$ is the supremum of the variances of $h_{x}^{\ell,j}$. By Lemma 3.2, $$\sigma^{2}\leq C_{d}g_{d}(\ell).$$ Therefore, we have $$\displaystyle\sum_{x\in{\mathbb{T}}_{N}^{d}}\int(h_{x}^{\ell,j})^{2}fd\nu\leq\frac{1}{\gamma}(4\ell)^{d}\left(H(\mu|\nu)+2(\tfrac{N}{4\ell})^{d}\right).$$ Thus, choosing $\frac{1}{\gamma}=\frac{C_{d}}{C_{0}}g_{d}(\ell)$, we have shown $$\displaystyle\int(V-V^{\ell})fd\nu\leq\alpha N^{2}\mathcal{D}(\sqrt{f};\nu)+\frac{\bar{C}_{\alpha}\ell^{d}g_{d}(\ell)K^{2}e^{3c_{1}K}}{N}\left(H(\mu|\nu)+\frac{N^{d}}{\ell^{d}}\right)+CK^{2}e^{3c_{1}K}N^{d-1}.$$ Now recall from (A4)${}_{\delta}$ that $1\leq K\leq\delta(\log N)^{1/2}\leq\delta\log N$ for $N\geq 3$, so that $e^{3c_{1}K}\leq N^{3c_{1}\delta}$, and choose $\delta>0$ such that $3c_{1}\delta\leq\kappa$ for a given small $\kappa>0$. Choose $\ell=N^{\frac{1}{d}-\kappa}$ when $d\geq 2$ and $N^{\frac{1}{2}-\kappa}$ when $d=1$. Then, when $d\geq 2$, we have $$\frac{\ell^{d}g_{d}(\ell)K^{2}}{N}e^{3c_{1}K}\leq CN^{-\kappa(d-1)}(\log N)^{3}\leq 1,\quad\frac{N^{d}}{\ell^{d}}=N^{d-1+d\kappa},\quad K^{2}e^{3c_{1}K}N^{d-1}\leq N^{d-1+2\kappa},$$ which shows (3.2). When $d=1$, $$\frac{\ell^{2}K^{2}}{N}e^{3c_{1}K}\leq CN^{-\kappa}(\log N)^{2}\leq 1,\quad\frac{N}{\ell}=N^{\frac{1}{2}+\kappa},\quad K^{2}e^{3c_{1}K}N^{d-1}\leq N^{2\kappa}.$$ This shows the conclusion for $d=1$. ∎ Lemma 3.6. (Concentration inequality) Let $\{X_{i}\}_{i=1}^{n}$ be independent random variables with values in the intervals $[a_{i},b_{i}]$. Set $\bar{X}_{i}=X_{i}-E[X_{i}]$ and $\kappa=\sum_{i=1}^{n}(b_{i}-a_{i})^{2}$. Then, for every $\gamma\in[0,\kappa^{-1}]$, we have $$\log E\left[\exp\left\{\gamma\left(\sum_{i=1}^{n}\bar{X}_{i}\right)^{2}\right\}\right]\leq 2\gamma\kappa.$$ The second step is to estimate the integral $\int V^{\ell}fd\nu$, where $V^{\ell}$ is given by (3.1). Proposition 3.7. We assume the same conditions as in Proposition 3.1.
Then, for $\kappa>0$, we have (3.7) $$\int V^{\ell}fd\nu\leq CKH(\mu_{t}|\nu_{t})+C_{\kappa}N^{d-1+\kappa},$$ with some $C_{\kappa}>0$ when $d\geq 2$. When $d=1$, the last term is replaced by $C_{\kappa}N^{\frac{1}{2}+\kappa}$. Proof. We again decompose the sum $\sum_{x\in{\mathbb{T}}_{N}^{d}}$ in (3.1) as $\sum_{y\in\Lambda_{2\ell}}\sum_{z\in(4\ell){\mathbb{T}}_{N}^{d}}$, and then, noting the $(2\ell)$-dependence of $\overleftarrow{(\tilde{\omega}_{1})}_{x,\ell}\overrightarrow{(\omega_{2})}_{x,\ell}$, use the entropy inequality and the concentration inequality to show $$\int V^{\ell}fd\nu\leq\frac{K}{\gamma}\sum_{y\in\Lambda_{2\ell}}\left\{H(\mu_{t}|\nu_{t})+\sum_{z\in(4\ell){\mathbb{T}}_{N}^{d}}\log E^{\nu_{t}}[e^{\gamma\overleftarrow{(\tilde{\omega}_{1})}_{z+y,\ell}\overrightarrow{(\omega_{2})}_{z+y,\ell}}]\right\}\leq\frac{K(4\ell)^{d}}{\gamma}\left\{H(\mu_{t}|\nu_{t})+\frac{N^{d}}{(4\ell)^{d}}C_{1}\gamma\ell^{-d}\right\},$$ for $\gamma=c\ell^{d}$ with $c>0$ small enough. Note that, by the central limit theorem, $\overleftarrow{(\tilde{\omega}_{1})}_{x,\ell}$ and $\overrightarrow{(\omega_{2})}_{x,\ell}$ are close in law to $C_{2}\ell^{-d/2}N(0,1)$ for large $\ell$. Since $\ell=N^{\frac{1}{d}-\kappa}$ when $d\geq 2$, we have $\frac{KN^{d}}{\ell^{d}}\leq N^{d-1+d\kappa}\cdot\delta\log N$ and obtain (3.7). When $d=1$, since $\ell=N^{\frac{1}{2}-\kappa}$, we have $\frac{KN^{d}}{\ell^{d}}=N^{\frac{1}{2}+\kappa}\delta\log N$. ∎ 3.2 Proof of (2.9) We now discuss the contribution of $$V_{1}:=-\frac{N^{2}}{2}\sum_{x,y\in{\mathbb{T}}_{N}^{d}:|x-y|=1}(u_{1}(y)-u_{1}(x))^{2}\omega_{1,x}\omega_{1,y}$$ in (2.5), which arises from the Kawasaki part. The second term $V_{2}$ can be treated similarly. We may treat $N^{2}(u_{1}(y)-u_{1}(x))^{2}$ as playing the role of $K$ in the argument we have developed.
However, from Proposition 2.7, we have (3.8) $$N^{2}(u_{1}(y)-u_{1}(x))^{2}\leq CK^{2}.$$ This means that we may suitably replace $K$ by $K^{2}$ in the estimates obtained in Propositions 3.1 and 3.7 for the first and second terms. Since the factor $K^{2}=(\delta\log N)^{2}$ appearing in the error terms can be absorbed into $N^{\kappa}$ for every $\kappa>0$, this leads to (3.9) $$\int(V_{1}+V_{2})d\mu_{t}\leq\alpha N^{2}\mathcal{D}\left(\sqrt{\tfrac{d\mu_{t}}{d\nu_{t}}};\nu_{t}\right)+C_{\kappa}K^{2}H(\mu_{t}|\nu_{t})+C_{\alpha,\kappa}N^{d-1+\kappa},$$ for every $\alpha,\kappa>0$ when $d\geq 2$, and the last term is replaced by $C_{\alpha,\kappa}N^{\frac{1}{2}+\kappa}$ when $d=1$. 4 Consequence of Theorem 2.2 Recall that $\mu_{t}^{N}$ is the distribution of $\tilde{\sigma}^{N}(t)$ on $\mathcal{X}_{N}^{2}$ and $u_{i}^{N}(t)=\{u_{i}^{N}(t,x)\}_{x\in{\mathbb{T}}_{N}^{d}}$, $i=1,2$, is the solution of the discretized hydrodynamic equation (2.3). The Bernoulli measure on $\mathcal{X}_{N}^{2}$ with mean $\{u_{i}^{N}(t,x)\}_{x\in{\mathbb{T}}_{N}^{d}}$ is denoted by $\nu_{t}^{N}$. Then Theorem 2.2 shows $H(\mu_{t}^{N}|\nu_{t}^{N})=o(N^{d})$ under a proper choice of $K=K(N)\nearrow\infty$. We define macroscopic functions $u_{i}^{N}(t,r)$, $r\in{\mathbb{T}}^{d}$, as step functions (4.1) $$u_{i}^{N}(t,r)=\sum_{x\in{\mathbb{T}}_{N}^{d}}u_{i}^{N}(t,x)1_{B(\frac{x}{N},\frac{1}{N})}(r),\quad r\in{\mathbb{T}}^{d},$$ from the microscopic functions $u_{i}^{N}(t,x)$, $x\in{\mathbb{T}}_{N}^{d}$, where $B(\frac{x}{N},\frac{1}{N})=\prod_{j=1}^{d}[\frac{x_{j}}{N}-\frac{1}{2N},\frac{x_{j}}{N}+\frac{1}{2N})$ is the box with center $\frac{x}{N}$ and side length $\frac{1}{N}$.
Under our choice of $K$, the entropy inequality $$\mu_{t}^{N}(A)\leq\frac{\log 2+H(\mu_{t}^{N}|\nu_{t}^{N})}{\log\{1+1/\nu_{t}^{N}(A)\}}$$ combined with Proposition 4.1 stated below shows that (4.2) $$\lim_{N\to\infty}\mu_{t}^{N}({\cal A}_{N,t}^{\varepsilon})=0,$$ for every $\varepsilon>0$, where $${\cal A}_{N,t}^{\varepsilon}\equiv{\cal A}_{N,t,\varphi}^{\varepsilon}:=\left\{\eta\in{\cal X}_{N}\,;~{}\left|\langle\alpha_{i}^{N},\varphi\rangle-\langle u_{i}^{N}(t,\cdot),\varphi\rangle\right|>\varepsilon,\,i=1,2\right\},\quad\varphi\in C^{\infty}({\mathbb{T}}^{d}).$$ Proposition 4.1. There exists $C=C_{\varepsilon,\varphi}>0$ such that $$\nu_{t}^{N}({\cal A}_{N,t}^{\varepsilon})\leq e^{-CN^{d}}.$$ Proof. Since $$X_{i}:=\langle\alpha_{i}^{N},\varphi\rangle-\langle u_{i}^{N}(t,\cdot),\varphi\rangle=\frac{1}{N^{d}}\sum_{x\in{\mathbb{T}}_{N}^{d}}\left\{\sigma_{i,x}-u_{i}^{N}(t,\frac{x}{N})\right\}\varphi(\frac{x}{N})+o(1),$$ for $\varphi\in C^{\infty}({\mathbb{T}}^{d})$, we have $$\nu_{t}^{N}({\cal A}_{N,t}^{\varepsilon})\leq e^{-\gamma\varepsilon N^{d}}E^{\nu_{t}^{N}}[e^{\gamma N^{d}|X_{i}|}]\leq e^{-\gamma\varepsilon N^{d}}\left\{E^{\nu_{t}^{N}}[e^{\gamma N^{d}X_{i}}]+E^{\nu_{t}^{N}}[e^{-\gamma N^{d}X_{i}}]\right\},$$ for every $\gamma>0$. However, by the independence of $\sigma_{i,x}$ under $\nu_{t}^{N}$, we have $$E^{\nu_{t}^{N}}[e^{\pm\gamma N^{d}X_{i}}]=\prod_{x\in{\mathbb{T}}_{N}^{d}}E^{\nu_{t}^{N}}[e^{\pm\gamma\{\sigma_{i,x}-u_{i,x}\}\varphi_{x}+o(1)}]=\prod_{x\in{\mathbb{T}}_{N}^{d}}\left\{e^{\pm\gamma(1-u_{i,x})\varphi_{x}}u_{i,x}+e^{\mp\gamma u_{i,x}\varphi_{x}}(1-u_{i,x})\right\}+o(1),$$ where $u_{i,x}=u_{i}^{N}(t,x)$ and $\varphi_{x}=\varphi(\frac{x}{N})$.
However, by Taylor's formula applied at $\gamma=0$, we see $$\left|e^{\pm\gamma(1-u_{i,x})\varphi_{x}}u_{i,x}+e^{\mp\gamma u_{i,x}\varphi_{x}}(1-u_{i,x})-1\right|\leq\frac{\gamma^{2}}{2}C,\quad C=C_{\|\varphi\|_{\infty}},$$ for $0<\gamma\leq 1$. Thus we obtain $$\nu_{t}^{N}({\cal A}_{N,t}^{\varepsilon})\leq e^{-\gamma\varepsilon N^{d}+C\gamma^{2}N^{d}},$$ for $\gamma>0$ sufficiently small. This shows the conclusion. ∎ 5 Convergence of the solution of the discretized hydrodynamic equation to that of the free boundary problem We show that $u_{i}^{N}(t,r)$, $t\in[0,T]$, $r\in{\mathbb{T}}^{d}$, $i=1,2$, appearing in (4.2), which is defined by (4.1) from the solution of the discretized hydrodynamic equation (2.3), converges to the unique weak solution of the free boundary problem (1.3). This can be done following [1], in a discrete setting. Once this is shown, combined with (4.2), the proof of Theorem 1.1 is complete. Lemma 5.1. $$\int_{0}^{T}\int_{{\mathbb{T}}^{d}}u_{1}^{N}(t,r)u_{2}^{N}(t,r)dtdr\leq\frac{1}{K}.$$ Proof. (cf. Lemma 2.3 of [1] with $\varphi\equiv 1$) From (2.3), we have $$K\sum_{x\in{\mathbb{T}}_{N}^{d}}\int_{0}^{T}u_{1}^{N}(t,x)u_{2}^{N}(t,x)dt=d_{1}\sum_{x\in{\mathbb{T}}_{N}^{d}}\int_{0}^{T}\Delta^{N}u_{1}^{N}(t,x)dt+\sum_{x\in{\mathbb{T}}_{N}^{d}}u_{1}^{N}(0,x)-\sum_{x\in{\mathbb{T}}_{N}^{d}}u_{1}^{N}(T,x)\leq N^{d},$$ which implies the conclusion; here the first term on the right vanishes since $\sum_{x\in{\mathbb{T}}_{N}^{d}}\Delta^{N}u_{1}^{N}(t,x)=0$ on the torus, and the bound $N^{d}$ follows from $0\leq u_{1}^{N}\leq 1$. ∎ Lemma 5.2. $$\int_{0}^{T}\int_{{\mathbb{T}}^{d}}|\nabla^{N}u_{i}^{N}(t,r)|^{2}dtdr\leq\frac{1}{2d_{i}},\quad i=1,2,$$ where $\nabla^{N}u(r)=\{N(u(r+\frac{1}{N}e_{j})-u(r))\}_{j=1}^{d}$. Proof. (cf.
Lemma 2.4 of [1] with $\varphi\equiv 1$) From (2.3), we have $$\frac{1}{2}\frac{d}{dt}\int_{{\mathbb{T}}^{d}}u_{1}^{N}(t,r)^{2}dr+d_{1}\int_{{\mathbb{T}}^{d}}|\nabla^{N}u_{1}^{N}(t,r)|^{2}dr=-K\int_{{\mathbb{T}}^{d}}u_{1}^{N}(t,r)^{2}u_{2}^{N}(t,r)dr\leq 0,$$ and this implies $$d_{1}\int_{0}^{T}dt\int_{{\mathbb{T}}^{d}}|\nabla^{N}u_{1}^{N}(t,r)|^{2}dr\leq\frac{1}{2}\int_{{\mathbb{T}}^{d}}\left\{u_{1}^{N}(0,r)^{2}-u_{1}^{N}(T,r)^{2}\right\}dr\leq\frac{1}{2}.$$ The proof for $u_{2}^{N}$ is similar. ∎ These two lemmas, with the help of the Fréchet–Kolmogorov theorem, show that $\{u_{i}^{N}(t,r)\}_{N}$ are relatively compact in $L^{2}([0,T]\times{\mathbb{T}}^{d})$. In fact, the two lemmas prove the equicontinuity of $\{u_{i}^{N}(t,r)\}_{N}$ in the space $L^{2}([0,T]\times{\mathbb{T}}^{d})$ as in Lemmas 2.6 and 2.7 of [1]. Corollary 5.3. (cf. Corollary 3.1 of [1]) From any subsequence of $\{u_{i}^{N}(t,r)\}_{N}$, $i=1,2$, one can find further subsequences $\{u_{i}^{N_{k}}(t,r)\}_{k}$, $i=1,2$, and $u_{i}\in L^{2}([0,T]\times{\mathbb{T}}^{d})$, $i=1,2$, such that $$u_{i}^{N_{k}}\to u_{i}\quad\text{ strongly in }L^{2}([0,T]\times{\mathbb{T}}^{d})\text{ and a.e.\ in }[0,T]\times{\mathbb{T}}^{d}$$ as $k\to\infty$. Lemma 5.4. (cf. Lemma 3.2 of [1]) $u_{1}u_{2}=0$ a.e. in $[0,T]\times{\mathbb{T}}^{d}$. Set $$w^{N}:=u_{1}^{N}-u_{2}^{N}\quad\text{ and }\quad w:=u_{1}-u_{2}.$$ From Corollary 5.3 and Lemma 5.4, $w^{N_{k}}\to w$ strongly in $L^{2}([0,T]\times{\mathbb{T}}^{d})$ and a.e. in $[0,T]\times{\mathbb{T}}^{d}$ as $k\to\infty$, and furthermore $$u_{1}=w^{+}\quad\text{ and }\quad u_{2}=w^{-}.$$ Proposition 5.5. $w$ is the unique weak solution of (1.3). Proof. It is sufficient to check property (ii) of Definition 1.1 for $w$.
From (2.3), for $\psi\in C^{1,2}([0,T]\times{\mathbb{T}}^{d})$ such that $\psi(T,\cdot)=0$, $$\int_{0}^{T}\int_{{\mathbb{T}}^{d}}(u_{1}^{N}(t,r)-u_{2}^{N}(t,r))\partial_{t}\psi(t,r)drdt-\int_{{\mathbb{T}}^{d}}(u_{1}^{N}(0,r)-u_{2}^{N}(0,r))\psi(0,r)dr=\int_{0}^{T}\int_{{\mathbb{T}}^{d}}(d_{1}u_{1}^{N}(t,r)-d_{2}u_{2}^{N}(t,r))\Delta^{N}\psi(t,r)drdt.$$ We obtain property (ii) for $w$ by passing to the limit $k\to\infty$ along the subsequence $N=N_{k}$. ∎ Because of the uniqueness of $w$, without taking subsequences, $u_{i}^{N}(t,r)$, $i=1,2$, themselves converge to $u_{i}(t,r)$ strongly in $L^{2}([0,T]\times{\mathbb{T}}^{d})$ and a.e. in $[0,T]\times{\mathbb{T}}^{d}$ as $N\to\infty$. This combined with (4.2) completes the proof of Theorem 1.1. Acknowledgments: T. Funaki is supported in part by JSPS KAKENHI, Grant-in-Aid for Scientific Researches (A) 18H03672 and (S) 16H06338. E. Presutti thanks the GSSI. M. E. Vares acknowledges support of CNPq (grant 305075/2016-0) and FAPERJ (grant E-26/203.048/2016). References [1] E.C.M. Crooks, E.N. Dancer, D. Hilhorst, M. Mimura and H. Ninomiya, Spatial segregation limit of a competition-diffusion system with Dirichlet boundary conditions, Nonlinear Anal. Real World Appl., 5 (2004), 645–665. [2] E.N. Dancer, D. Hilhorst, M. Mimura and L.A. Peletier, Spatial segregation limit of a competition-diffusion system, European J. Appl. Math., 10 (1999), 97–115. [3] E. Daus, L. Desvillettes and A. Jüngel, Cross-diffusion systems and fast-reaction limits, arXiv:1710.03590. [4] T. Delmotte and J.-D. Deuschel, On estimating the derivatives of symmetric diffusions in stationary random environment, with applications to $\nabla\phi$ interface model, Probab. Theory Related Fields, 133 (2005), 358–390. [5] T. Funaki, Free boundary problem from stochastic lattice gas model, Ann. Inst. H. Poincaré, Probab. Statist., 35 (1999), 573–603. [6] T.
Funaki, Hydrodynamic limit for exclusion processes, Comm. Math. Statist., 6 (2018), 417–480. [7] T. Funaki and K. Tsunoda, Motion by mean curvature from Glauber-Kawasaki dynamics, arXiv:1812.10182. [8] M. Iida, H. Monobe, H. Murakawa and H. Ninomiya, Vanishing, moving and immovable interfaces in fast reaction limits, J. Differential Equations, 263 (2017), 2715–2735. [9] M. Jara and O. Menezes, Non-equilibrium fluctuations of interacting particle systems, arXiv:1810.09526. [10] M. Sasada, Hydrodynamic limit for two-species exclusion processes, Stoch. Proc. Appl., 120 (2010), 494–521. [11] D.W. Stroock and W. Zheng, Markov chain approximations to symmetric diffusions, Ann. Inst. H. Poincaré Probab. Statist., 33 (1997), 619–649. Anna De Masi Dipartimento di Ingegneria e Scienze dell’Informazione e Matematica, Università degli studi dell’Aquila, L’Aquila, 67100 Italy e-mail: [email protected] Tadahisa Funaki Department of Mathematics, School of Fundamental Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan. e-mail: [email protected] Errico Presutti GSSI, viale F. Crispi 7, 67100 L’Aquila, Italy e-mail: [email protected] Maria Eulalia Vares Instituto de Matemática, Universidade Federal do Rio de Janeiro, Centro de Tecnologia - Bloco C, Av. Athos da Silveira Ramos, 149, Ilha do Fundão, 21941–909, Rio de Janeiro, RJ, Brazil e-mail: [email protected]
Family of intersecting totally real manifolds of $({C}^{n},0)$ and CR-singularities Laurent Stolovitch CNRS UMR 5580, Laboratoire Emile Picard, Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse cedex 4, France. e-mail: [email protected] (November 20, 2020) Abstract The first part of this article is devoted to the study of families of totally real intersecting $n$-submanifolds of $({C}^{n},0)$. We give some conditions which allow one to straighten the family holomorphically. If this cannot be done formally, we construct a germ of complex analytic set at the origin whose intersection with the family can be holomorphically straightened. The second part is devoted to the study of real analytic $(n+r)$-submanifolds of $({C}^{n},0)$ having a CR-singularity at the origin ($r$ is a nonnegative integer). We consider deformations of quadrics and we define generalized Bishop invariants. Such a quadric intersects the complex linear manifold $\{z_{p+1}=\cdots=z_{n}=0\}$ along some real linear set ${\cal L}$. We study what happens to this intersection when the quadric is analytically perturbed. On the other hand, we show, under some assumptions, that if such a submanifold is formally equivalent to its associated quadric then it is holomorphically equivalent to it. All these results rely on a result stating the existence (and characterization) of a germ of complex analytic set left invariant by an abelian group of germs of holomorphic diffeomorphisms (not tangent to the identity at the origin).
Contents 1 Introduction 2 Abelian group of diffeomorphisms of $({C}^{n},0)$ and their invariant sets 3 Family of totally real $n$-manifolds in $({C}^{n},0)$ 4 Real analytic manifolds with CR singularities 4.1 Preparation 4.2 Complexification 4.3 Quadrics and linear involutions 4.4 Submanifolds with CR-singularities 4.5 Holomorphic equivalence to quadrics 4.6 Cutting varieties 1 Introduction The aim of this article is to study the geometry of some germs of real analytic submanifolds of $({C}^{n},0)$. On the one hand, we shall study families of totally real submanifolds of $({C}^{n},0)$ intersecting at the origin. On the other hand, we shall study submanifolds having a CR-singularity at the origin. In both cases, we are primarily interested in the holomorphic classification of such objects, that is, in the orbit of the action of the group of germs of holomorphic diffeomorphisms fixing the origin. In this article, we shall mainly focus on the existence of complex analytic subsets intersecting such germs of real analytic manifolds. In the first problem, we shall also be interested in the problem of straightening the family holomorphically. By this we mean that we shall give a sufficient condition which ensures that, in a suitable holomorphic coordinate system, each submanifold of the family is an $n$-plane. In case there are formal obstructions to straightening the family, we show the existence of a germ of complex analytic variety which intersects the family along a set that can be straightened. The first part of this work takes its roots in, and generalizes, a recent work of Sidney Webster [Web03], by which it is very much inspired. This part of the work started after listening to Sidney Webster at the Partial Differential Equations and Several Complex Variables conference held at Wuhan University in June 2004. The starting point of the first problem appeared already in the work of E. Kasner [Kas13] and was studied, from the formal viewpoint, by G.A. Pfeiffer [Pfe15].
They were interested in pairs of real analytic curves in $({C},0)$ passing through the origin. We shall not consider the case where some of the submanifolds are tangent to some others. We refer the reader to the works of I. Nakai [Nak98], J.-M. Trépreau [Tré03] and X. Gong [AG05] in this direction. In the second part, we shall study real analytic $(n+p-1)$-submanifolds of ${C}^{n}$ of the form $$\begin{cases}y_{p+1}=F_{p+1}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ \vdots\\ y_{p+q}=F_{p+q}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ z_{n}=G(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\end{cases}$$ where $z^{\prime}=(z_{1},\ldots,z_{p})\in{C}^{p}$, $z^{\prime\prime}=(z_{p+1},\ldots,z_{n-1})\in{C}^{n-1-p}$, $p\in{N}^{*}$. The $F$’s and $G$ vanish to second order at the origin. The origin is a singularity for the Cauchy-Riemann structure. The most studied case, up to now, is the case where $p=1$ ($n$-submanifold). Nevertheless, some work has been done for lower-dimensional submanifolds by Adam Coffman (see for instance [Cof04a, Cof04b]). A nondegenerate real analytic surface in $({C}^{2},0)$ which is totally real except at the origin, where it has a complex tangent, can be regarded as a third order analytic deformation of the quadric $$z_{2}=z_{1}\bar{z}_{1}+\gamma(z_{1}^{2}+\bar{z_{1}}^{2}).$$ This is due to Bishop [Bis65] and the nonnegative number $\gamma$ is called the Bishop invariant. When $0<\gamma<1/2$ (we say elliptic), J. Moser and S. Webster showed, in their pioneering work [MW83], that an analytic deformation of such a quadric can be transformed into a normal form (in fact, a real algebraic variety) by means of a germ of holomorphic diffeomorphism preserving the origin. All the geometry of such a deformation is understood through the study of the normal form. When $1/2<\gamma$ (we say hyperbolic), it is known that such a statement does not hold. So, what about the geometry? Wilhelm Klingenberg Jr.
showed [Kli85] that there exists a germ of complex curve passing through the origin and cutting the submanifold along two transversal real analytic curves which are tangent to $\{\bar{z}_{1}=\lambda z_{1}\}$ and $\{\bar{z}_{1}=\lambda^{-1}z_{1}\}$ respectively. Here, $\lambda$ is a solution of $\gamma\lambda^{2}+\lambda+\gamma=0$ which is assumed to satisfy a diophantine condition: there exist $M,\delta>0$ such that, for all positive integers $k$, $|\lambda^{k}-1|>Mk^{-\delta}$. We refer to [BEPR00] for a summary in this framework, to [Hua04] for a nice introduction and to [Har84] for another point of view. We shall work with nondegenerate submanifolds (in some sense) and then define generalized Bishop invariants $\{\gamma_{i}\}_{i=1,\ldots,p}$: there is a suitable holomorphic coordinate system at the origin in which the submanifold is defined by $$(M_{Q})\begin{cases}y_{\alpha}=f_{\alpha}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\quad\alpha=p+1,\ldots,n-1\\ z_{n}=Q(z^{\prime},\bar{z}^{\prime})+g(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\end{cases}$$ where the $f_{i}$’s and $g$ are germs of real analytic functions at the origin, of order greater than or equal to $3$ there. The quadratic polynomial $Q$ is of the form $$Q(z^{\prime},\bar{z}^{\prime})=\sum d_{i,l}z^{\prime}_{i}\bar{z}^{\prime}_{l}+\sum_{i=1}^{p}\gamma_{i}((z^{\prime}_{i})^{2}+(\bar{z}^{\prime}_{i})^{2}),$$ the norm of the sesquilinear part of $Q$ being $1$. It is regarded as a perturbation of the quadric $$(Q)\begin{cases}y_{p+1}=\cdots=y_{n-1}=0\\ z_{n}=Q(z^{\prime},\bar{z}^{\prime})\end{cases}$$ We shall consider the case where none of these invariants vanishes (see [Mos85, HK95] for results in this situation for $p=1$). We then begin a study à la Moser-Webster of such an object, although we shall not address the normal form problem here.
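As a quick check of the hyperbolic picture (a remark of ours, not part of the original text, assuming only the characteristic equation quoted above): the two roots of $\gamma\lambda^{2}+\lambda+\gamma=0$ are $$\lambda_{\pm}=\frac{-1\pm\sqrt{1-4\gamma^{2}}}{2\gamma},\qquad\lambda_{+}\lambda_{-}=\frac{\gamma}{\gamma}=1.$$ When $\gamma>1/2$ the discriminant $1-4\gamma^{2}$ is negative, so $\lambda_{+}$ and $\lambda_{-}$ are complex conjugates; since their product is $1$, they both lie on the unit circle. The diophantine condition $|\lambda^{k}-1|>Mk^{-\delta}$ then rules out roots of unity, as well as eigenvalues too well approximated by roots of unity.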
We associate a pair $(\tau_{1},\tau_{2})$ of germs of holomorphic involutions of $({C}^{n+p-1},0)$ and also a germ of biholomorphism $\Phi=\tau_{1}\circ\tau_{2}$. On the one hand, we shall show, under some assumptions (in particular, a diophantine property of $D\Phi(0)$), that if $(M_{Q})$ is formally equivalent to its associated quadric $(Q)$ then it is holomorphically equivalent to it. This generalizes a result by X. Gong [Gon94] ($p=1$). On the other hand, the complex linear space $\{z_{p+1}=\cdots=z_{n}=0\}$ intersects the quadric along the real linear set ${\cal L}:=\{Q(z^{\prime},\bar{z}^{\prime})=0\}$. We shall show that this situation survives under a small analytic perturbation. Namely, we shall show, under some assumptions, that there exists a suitable holomorphic coordinate system at the origin in which the submanifold $(M_{Q})$ intersects the complex linear space $\{z_{p+1}=\cdots=z_{n}=0\}$ along a real analytic subset $V$ passing through the origin. The latter is completely determined by the eigenvalues of $D\Phi(0)$ and, more precisely, by the centralizer of the map $v\mapsto D\Phi(0)v$ in the space of non-linear formal maps. This generalizes the result of Wilhelm Klingenberg Jr. (see above) ($p=1$) in the sense that there are holomorphic coordinates such that $(M_{Q})$ intersects $\{z_{2}=0\}$ along $\{(\zeta_{1},\eta_{1})\in{R}^{2}\;|\;\zeta_{1}\eta_{1}=0\}$. The core of these problems rests on geometric properties of dynamical systems associated to each situation. To be more specific, we shall deal, in the first part of this article, with germs of holomorphic diffeomorphisms of $({C}^{n},0)$ in a neighbourhood of the origin (a common fixed point). We shall consider those whose linear part at the origin is different from the identity. The main point is a result which gives the existence of a germ of analytic subset of $({C}^{n},0)$ invariant by an abelian group of such diffeomorphisms under some diophantine condition.
This kind of result was obtained by the author for a germ of holomorphic vector field at a singular point [Sto94]. 2 Abelian group of diffeomorphisms of $({C}^{n},0)$ and their invariant sets The aim of this section is to prove the existence of a complex analytic invariant subset for a commuting family of germs of holomorphic diffeomorphisms in a neighbourhood of a common fixed point. This is very much inspired by a previous article of the author concerning holomorphic vector fields. Although the objects are not the same, some of the computations are identical and we shall refer to them when possible. Let $D_{1}:=\text{diag}(\mu_{1,1},\ldots,\mu_{1,n}),\ldots,D_{l}:=\text{diag}(\mu_{l,1},\ldots,\mu_{l,n})$ be diagonal invertible matrices. Let us consider a family $F:=\{F_{i}\}_{i=1,\ldots,l}$ of commuting germs of holomorphic diffeomorphisms of $({C}^{n},0)$ whose linear parts at the origin are $D:=\{D_{i}x\}_{i=1,\ldots,l}$: $$F_{i}(x)=D_{i}x+f_{i}(x),\quad\text{with}\quad f_{i}(0)=0,\;Df_{i}(0)=0,\;f_{i}\in{\cal O}_{n}.$$ Let ${\cal I}$ be an ideal of ${\cal O}_{n}$ generated by monomials of ${C}^{n}$. Let $V({\cal I})$ be the germ, at the origin, of the analytic subset of $({C}^{n},0)$ defined by ${\cal I}$. It is left invariant by the family $D$. Let us set $\hat{\cal I}:=\mathaccent 866{\cal O}_{n}\otimes{\cal I}$. Here, ${\cal O}_{n}$ (resp. $\mathaccent 866{\cal O}_{n}$) denotes the ring of germs of holomorphic functions at the origin (resp. the ring of formal power series) of ${C}^{n}$.
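To fix ideas, here is a minimal example of this setting (ours, for illustration only): in ${C}^{2}$, take the monomial ideal ${\cal I}=(x_{1}x_{2})\subset{\cal O}_{2}$, so that $$V({\cal I})=\{x_{1}x_{2}=0\}=\{x_{1}=0\}\cup\{x_{2}=0\},$$ the union of the two coordinate axes. For any invertible diagonal matrix $D=\text{diag}(\mu_{1},\mu_{2})$, the linear map $x\mapsto Dx$ preserves each axis, so $V({\cal I})$ is indeed left invariant by the family $D$; this is the general mechanism behind the invariance of $V({\cal I})$ for monomial ideals.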
Let $Q=(q_{1},\ldots,q_{n})\in{N}^{n}$ and $x=(x_{1},\ldots,x_{n})\in{C}^{n}$; we shall write $$|Q|:=q_{1}+\cdots+q_{n},\quad x^{Q}:=x_{1}^{q_{1}}\cdots x_{n}^{q_{n}}.$$ Let $\{\omega_{k}(D,{\cal I})\}_{k\geq 1}$ be the sequence of positive numbers defined by $$\omega_{k}(D,{\cal I})=\inf\left\{\max_{1\leq i\leq l}|\mu_{i}^{Q}-\mu_{i,j}|\neq 0\;|\;2\leq|Q|\leq 2^{k},1\leq j\leq n,Q\in{N}^{n},x^{Q}\not\in{\cal I}\right\}.$$ Let $\{\omega_{k}(D)\}_{k\geq 1}$ be the sequence of positive numbers defined by $$\omega_{k}(D)=\inf\left\{\max_{1\leq i\leq l}|\mu_{i}^{Q}-\mu_{i,j}|\neq 0\;|\;2\leq|Q|\leq 2^{k},1\leq j\leq n,Q\in{N}^{n}\right\}.$$ Definition 2.1. 1. We shall say that the ideal ${\cal I}$ is properly embedded if it has a set of monomial generators not involving a nonempty set ${\cal S}$ of variables. 2. We shall say that the family $D$ is diophantine (resp. on ${\cal I}$) if $$-\sum_{k\geq 1}\frac{\ln\omega_{k}(D)}{2^{k}}<+\infty\quad(resp.-\sum_{k\geq 1}\frac{\ln\omega_{k}(D,{\cal I})}{2^{k}}<+\infty).$$ 3. We shall say that the family $F$ is formally linearizable on $\hat{\cal I}$ if there exists a formal diffeomorphism $\hat{\Phi}$ of $({C}^{n},0)$, tangent to the identity at the origin, such that $\hat{\Phi}_{*}F_{i}-D_{i}x\in(\hat{\cal I})^{n}$ for all $1\leq i\leq l$. 4. A linear anti-holomorphic involution of ${C}^{n}$ is a map $\rho(z)=P\bar{z}$ where the matrix $P$ satisfies $P\bar{P}=Id$; $\bar{z}$ denotes the complex conjugate of $z$. 5. We shall say that ${\cal I}$ is compatible with an anti-linear involution $\rho$ if the map $\rho^{*}:\mathaccent 866{\cal O}_{n+p-1}\rightarrow\overline{\mathaccent 866{\cal O}_{n+p-1}}$ defined by $\rho^{*}(f)=f\circ\rho$ maps $\mathaccent 866{\cal I}$ to $\overline{\mathaccent 866{\cal I}}$ and $\mathaccent 866{CI}$ to $\overline{\mathaccent 866{CI}}$.
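To illustrate the diophantine condition (an example of ours, not from the original text), consider a single diffeomorphism ($l=1$) with $D=\text{diag}(\mu_{1},\ldots,\mu_{n})$ and $0<|\mu_{j}|<1$ for all $j$. Setting $\rho:=\max_{j}|\mu_{j}|<1$, one has $|\mu^{Q}|\leq\rho^{|Q|}$, hence $$|\mu^{Q}-\mu_{j}|\geq\min_{j}|\mu_{j}|-\rho^{|Q|}\geq c>0$$ for $|Q|$ large enough. The sequence $\omega_{k}(D)$ is then bounded below by a positive constant, so $-\sum_{k\geq 1}2^{-k}\ln\omega_{k}(D)<+\infty$ and $D$ is diophantine. Small divisors only become an issue when eigenvalues lie on, or accumulate at, the unit circle.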
Let $\mathaccent 866{\cal O}_{n}^{D}$ be the ring of formal invariants of the family $D$, that is, $$\mathaccent 866{\cal O}_{n}^{D}:=\{f\in\mathaccent 866{\cal O}_{n}\,|\;f(D_{i}x)=f(x)\;i=1,\ldots,l\}.$$ It can be shown (as in Proposition 5.3.2 of [Sto00]) that this ring is generated by a finite number of monomials $x^{R_{1}},\ldots,x^{R_{p}}$ and that the non-linear centralizer ${\cal C}_{D}$ of $D$ is a module of finite type over $\mathaccent 866{\cal O}_{n}^{D}$. Let ResIdeal be the ideal generated by the monomials $x^{R_{1}},\ldots,x^{R_{p}}$ in ${\cal O}_{n}$. Theorem 2.1. Let ${\cal I}$ be a monomial ideal (resp. properly embedded). Assume that the family $D$ is diophantine (resp. on ${\cal I}$). If the family $F$ is formally linearizable on $\hat{\cal I}$, then it is holomorphically linearizable on ${\cal I}$. Moreover, there exists a unique such diffeomorphism $\Phi$ such that the projection of the Taylor expansion of $\Phi-Id$ onto ${\cal I}\cup{\cal C}_{D}$ vanishes. Moreover, let $\rho$ be a linear anti-holomorphic involution such that $\rho{\cal C}_{D}\rho={\cal C}_{D}$. We assume that ${\cal I}$ is compatible with $\rho$. Assume that, for all $1\leq i\leq l$, $\rho\circ F_{i}\circ\rho$ belongs to the group generated by the $F_{i}$’s. Then $\Phi$ and $\rho$ commute with each other. This theorem can be rephrased as follows: under the aforementioned diophantine condition, there exists a germ of holomorphic diffeomorphism $\Phi$ such that $\Phi_{*}F_{i}-D_{i}x\in({\cal I})^{n}$ for all $1\leq i\leq l$. As a consequence, in a suitable holomorphic coordinate system, the analytic subset $V({\cal I})$ is left invariant by each $F_{i}$, whose restriction to it is the linear mapping $x\mapsto D_{i|V({\cal I})}x$. Remark 2.1. The family $D$ can be diophantine while none of the $D_{i}$’s is. The second part of the theorem will be used for applications in the third part of the article. Corollary 2.1. 1.
If the ring of invariants of $D$ reduces to the constants and if $D$ is diophantine, then $F$ is holomorphically linearizable in a neighbourhood of the origin. For one diffeomorphism, this was obtained by H. Rüssmann [Rüs77, Rüs02] and, for an abelian group, by T. Gramtchev and M. Yoshino [GY99] under a slightly coarser diophantine condition. 2. The existence of an invariant manifold for a germ of diffeomorphism was obtained by J. Pöschel [Pös86]. Despite the fact that we are dealing with a family of diffeomorphisms, the main difference is that we are able to linearize simultaneously on each irreducible component of an analytic set. According to M. Chaperon [Cha86, Theorem 4, p. 132], if the family of diffeomorphisms is abelian then there exists a formal diffeomorphism $\hat{\Phi}$ such that $$\hat{\Phi}_{*}F_{i}(D_{j}z)=D_{j}\hat{\Phi}_{*}F_{i}(z),\quad 1\leq i,j\leq l.$$ We call the family of the $\hat{\Phi}_{*}F_{i}$’s a formal normal form of the family $F$. Then we have the following corollary: Corollary 2.2. Let $F$ be an abelian family of germs of holomorphic diffeomorphisms of $({C}^{n},0)$. Let us assume that $D$ is diophantine on $ResIdeal$. If the non-linear centralizer of $D$ is generated by the $x^{R_{i}}$’s, then $F$ is holomorphically linearizable on ${\cal I}$. Remark 2.2. The condition that the non-linear centralizer of $D$ is generated by the $x^{R_{i}}$’s means: if $\mu_{i}^{Q}=\mu_{i,j}$ for some $Q\in{N}^{n}_{2}$, $1\leq j\leq n$ and for all $1\leq i\leq l$, then $x^{Q}$ belongs to the ideal generated by $x^{R_{1}},\ldots,x^{R_{p}}$. This is a very weak condition, since all but a finite number of resonances satisfy it. We shall prove that there exists a holomorphic map $\Phi:({C}^{n},0)\rightarrow({C}^{n},0)$, tangent to the identity at the origin, such that $$\Phi^{-1}\circ F_{i}\circ\Phi(y)=G_{i}(y):=D_{i}y+g_{i}(y)\quad i=1,\ldots,l$$ where the components of $g_{i}$ are non-linear holomorphic functions and belong to the ideal ${\cal I}$.
It is unique if we require that its projection on ${\cal I}\cup\text{ResIdeal}$ is zero. Let us set $x_{j}=\Phi_{j}(y):=y_{j}+\phi_{j}(y)$, $j=1,\ldots,n$. Let us expand the equations $F_{i}\circ\Phi(y)=\Phi\circ G_{i}(y)$, $i=1,\ldots,l$. For all $1\leq j\leq n$ and all $i=1,\ldots,l$, we have $$\mu_{i,j}y_{j}+g_{i,j}(y)+\phi_{j}(G_{i}(y))=\mu_{i,j}(y_{j}+\phi_{j}(y))+f_{i,j}(\Phi(y)),$$ $$g_{i,j}(y)+\phi_{j}(D_{i}y)=\mu_{i,j}\phi_{j}(y)+f_{i,j}(\Phi(y))-(\phi_{j}(G_{i}(y))-\phi_{j}(D_{i}y)).$$ Let us expand the functions at the origin: $$f_{i,j}(y)=\sum_{Q\in{N}^{n}_{2}}f_{i,j,Q}y^{Q},\;g_{i,j}(y)=\sum_{Q\in{N}^{n}_{2}}g_{i,j,Q}y^{Q}\text{ and }\phi_{j}(y)=\sum_{Q\in{N}^{n}_{2}}\phi_{j,Q}y^{Q}.$$ Then we have $$\sum_{Q\in{N}^{n}_{2}}\delta^{i}_{Q,j}\phi_{j,Q}y^{Q}+g_{i,j}(y)=f_{i,j}(\Phi(y))-(\phi_{j}(G_{i}(y))-\phi_{j}(D_{i}y))$$ (1) where $$\delta^{i}_{Q,j}:=\mu_{i}^{Q}-\mu_{i,j},\quad\mu_{i}:=(\mu_{i,1},\ldots,\mu_{i,n}).$$ Let $\{f\}_{Q}$ denote the coefficient of $x^{Q}$ in the Taylor expansion at the origin of $f$. We compute $\phi_{j,Q}$ and $g_{i,j,Q}$ by induction on $|Q|\geq 2$ in the following way: • if $y^{Q}$ does not belong to ${\cal I}$ and $\max_{i}|\delta^{i}_{Q,j}|\neq 0$, then there exists $1\leq i_{0}\leq l$ such that $|\delta^{i_{0}}_{Q,j}|=\max_{i}|\delta^{i}_{Q,j}|$. We set $$\phi_{j,Q}=\frac{1}{\delta^{i_{0}}_{Q,j}}\left\{f_{i_{0},j}(\Phi(y))-(\phi_{j}(G_{i_{0}}(y))-\phi_{j}(D_{i_{0}}y))\right\}_{Q},\qquad g_{i,j,Q}=0.$$ • If $y^{Q}$ does not belong to ${\cal I}$ and $\max_{i}|\delta^{i}_{Q,j}|=0$, then we have $$\left\{f_{i_{0},j}(\Phi(y))-(\phi_{j}(G_{i_{0}}(y))-\phi_{j}(D_{i_{0}}y))\right\}_{Q}=0$$ and we set $\phi_{j,Q}=0=g_{i,j,Q}$.
• If $y^{Q}$ belongs to ${\cal I}$, we set $$\phi_{j,Q}=0,\qquad g_{i,j,Q}=\left\{f_{i,j}(\Phi(y))-(\phi_{j}(G_{i}(y))-\phi_{j}(D_{i}y))\right\}_{Q}.$$ Lemma 2.1. The formal diffeomorphism $\Phi$ defined above linearizes simultaneously the family $F$ on $\hat{\cal I}$, where $\hat{\cal I}:=\mathaccent 866{\cal O}_{n}\otimes{\cal I}$. Proof. For all $1\leq i,j\leq l$, we have $$F_{i}\circ F_{j}=F_{j}\circ F_{i}\;\;\text{ thus }\;\;F_{i}\circ F_{j}\circ\Phi=F_{j}\circ F_{i}\circ\Phi.$$ Therefore, we have $$D_{i}D_{j}\Phi+D_{i}(f_{j}\circ\Phi)+f_{i}(D_{j}\Phi+f_{j}\circ\Phi)=D_{j}D_{i}\Phi+D_{j}(f_{i}\circ\Phi)+f_{j}(D_{i}\Phi+f_{i}\circ\Phi),$$ so that $$f_{j}(\Phi\circ D_{i})-D_{i}(f_{j}\circ\Phi)=f_{i}(\Phi\circ D_{j})-D_{j}(f_{i}\circ\Phi)+f_{j}(D_{i}\Phi+f_{i}\circ\Phi)-f_{j}(\Phi\circ D_{i})+f_{i}(D_{j}\Phi+f_{j}\circ\Phi)-f_{i}(\Phi\circ D_{j}).$$ Moreover, we have $F_{i}\circ\Phi=\Phi\circ G_{i}$. Hence, we have $$f_{i}(D_{j}\Phi+f_{j}\circ\Phi)-f_{i}(\Phi\circ D_{j})=f_{i}\circ F_{j}\circ\Phi-f_{i}(\Phi\circ D_{j})=f_{i}\circ\Phi\circ G_{j}-f_{i}\circ\Phi\circ D_{j}=D(f_{i}\circ\Phi)(D_{j}y)g_{j}+\cdots$$ Assume the $F_{i}$’s are linearized on $V({\cal I})$ up to order $k\geq 2$. This means that, for any $1\leq m\leq n$ and any $1\leq i\leq l$, the $k$-jet $J^{k}(g_{i,m})$ belongs to ${\cal I}$. The previous computation shows that the $(k+1)$-jet of $f_{i}(D_{j}\Phi+f_{j}\circ\Phi)-f_{i}(\Phi\circ D_{j})$ depends only on the $k$-jet of $g_{j}$ and belongs to ${\cal I}$. The same is true for $\phi_{j}(G_{i}(y))-\phi_{j}(D_{i}y)$.
Therefore, if $Q\in{N}^{n}_{2}$ with $|Q|=k+1$ is such that $x^{Q}$ does not belong to ${\cal I}$, then we have $$\{f_{j}(\Phi\circ D_{i})-D_{i}(f_{j}\circ\Phi)\}_{Q}=\{f_{i}(\Phi\circ D_{j})-D_{j}(f_{i}\circ\Phi)\}_{Q};$$ that is, for all $1\leq m\leq n$, $$(\mu_{i}^{Q}-\mu_{i,m})\{f_{j,m}\circ\Phi\}_{Q}=(\mu_{j}^{Q}-\mu_{j,m})\{f_{i,m}\circ\Phi\}_{Q}.$$ This means that equation $(1)$ can be solved by induction and that $\Phi$ formally linearizes the $F_{i}$'s on $V(\hat{\cal I})$. ∎ Let $\rho$ be a linear anti-holomorphic involution satisfying the assumptions of the theorem. We have $F_{i}\circ\Phi=\Phi\circ G_{i}$ where $G_{i}$ is linearized along $\hat{\cal I}$. Hence, we have $$(\rho\circ F_{i}\circ\rho)\circ(\rho\circ\Phi\circ\rho)=(\rho\circ\Phi\circ\rho)\circ(\rho\circ G_{i}\circ\rho).$$ Let us set $\tilde{F}_{i}:=\rho\circ F_{i}\circ\rho$. By assumption, $\tilde{F}_{i}$ belongs to the group generated by the $F_{i}$'s. Since $\rho^{*}\hat{\cal I}\subset\overline{\hat{\cal I}}$, the map $\rho\circ G_{i}\circ\rho$ is a formal diffeomorphism which is linearized on $\hat{\cal I}$. By assumption, the projection of $\rho\circ\Phi\circ\rho-Id$ onto ${\cal I}\cup{\cal C}_{D}$ vanishes identically. By uniqueness, we have $\rho\circ\Phi\circ\rho=\Phi$ since $\Phi$ linearizes $\tilde{F}_{i}$ on ${\cal I}$. We shall prove, by using the majorant method, that $\Phi$ actually converges on a polydisc of positive radius centered at the origin. Let us define ${N}_{2}^{n}\setminus\hat{\cal I}$ to be the set of multiindices $Q\in{N}^{n}$ such that $|Q|\geq 2$ and $x^{Q}\not\in\hat{\cal I}$. Let $f=\sum_{Q}f_{Q}x^{Q}$ and $g=\sum_{Q}g_{Q}x^{Q}$ be formal power series. We shall say that $g$ dominates $f$, and write $f\prec g$, if $|f_{Q}|\leq|g_{Q}|$ for all multiindices $Q$. 
First of all, for all $1\leq j\leq n$ and all $Q\in{N}_{2}^{n}\setminus\hat{\cal I}$ such that $\max_{1\leq i\leq l}|\delta_{j,Q}^{i}|\neq 0$, we have $$|\phi_{j,Q}||\delta_{j,Q}|=|\{f_{i_{0}(Q,j),j}(\Phi)\}_{Q}|\leq\{\bar{f}_{i_{0}(Q,j),j}(y+\bar{\phi})\}_{Q}$$ where $|\delta_{j,Q}|:=\max_{1\leq i\leq l}|\delta_{j,Q}^{i}|=|\delta_{j,Q}^{i_{0}(Q,j)}|$. In fact, $\{f_{i}\circ\Phi\circ G_{j}-f_{i}\circ\Phi\circ D_{j}\}_{Q}=0$ whenever $Q\in{N}_{2}^{n}\setminus\hat{\cal I}$. This inequality still holds if $\max_{1\leq i\leq l}|\delta_{j,Q}^{i}|=0$. Let us set • $\delta_{Q}:=\min\{|\delta_{j,Q}|,\;1\leq j\leq n$ such that $\delta_{j,Q}\neq 0\}$, • $\delta_{Q}:=0$ if $|\delta_{j,Q}|=0$ for all $1\leq j\leq n$. Let us sum the previous inequalities over $1\leq j\leq n$. We obtain, for all $Q\in{N}_{2}^{n}\setminus\hat{\cal I}$, $$\delta_{Q}\sum_{j=1}^{n}{|\phi_{j,Q}|}\leq\sum_{j=1}^{n}{|\phi_{j,Q}||\delta_{j,Q}|}\leq\left\{\sum_{j=1}^{n}\bar{f}_{i_{0}(Q,j),j}(y+\bar{\phi})\right\}_{Q}\leq\left\{\sum_{i=1}^{l}\left(\sum_{j=1}^{n}\bar{f}_{i,j}\right)(y+\bar{\phi})\right\}_{Q}.$$ Since $\sum_{i=1}^{l}\sum_{j=1}^{n}f_{i,j}$ vanishes at the origin together with its first derivative, there exist positive constants $a,b$ such that $$\sum_{i=1}^{l}\sum_{j=1}^{n}{f_{i,j}}\prec\frac{a\left(\sum_{j=1}^{n}{x_{j}}\right)^{2}}{1-b\left(\sum_{j=1}^{n}{x_{j}}\right)}.$$ Since the Taylor expansion of the right hand side has non-negative coefficients, we obtain $$\delta_{Q}\tilde{\phi}_{Q}\leq\left\{\frac{a\left(\sum_{j=1}^{n}{y_{j}+\tilde{\phi}}\right)^{2}}{1-b\left(\sum_{j=1}^{n}{y_{j}+\tilde{\phi}}\right)}\right\}_{Q}$$ where we have set $\tilde{\phi}_{Q}:=\sum_{j=1}^{n}{|\phi_{j,Q}|}$ and $\tilde{\phi}=\sum_{Q\in{N}_{2}^{n}}\tilde{\phi}_{Q}x^{Q}$. Here, we have set $\tilde{\phi}_{Q}=0$ whenever $\delta_{Q}=0$. 
Let us define the formal power series $\sigma(y)=\sum_{Q\in{N}_{2}^{n}}{\sigma_{Q}y^{Q}}$ as follows: $$\forall Q\in{N}_{2}^{n}\setminus({N}_{2}^{n}\setminus\hat{\cal I}),\;\;\;\sigma_{Q}=0,$$ $$\forall Q\in{N}_{2}^{n}\setminus\hat{\cal I},\;\;\;\sigma_{Q}=\left\{\frac{a\left(\sum_{j=1}^{n}{y_{j}+\sigma}\right)^{2}}{1-b\left(\sum_{j=1}^{n}{y_{j}+\sigma}\right)}\right\}_{Q}.$$ Lemma 2.2. [Sto94][Lemme 2.1] The series $\sigma$ is convergent in a neighbourhood of the origin $0\in{C}^{n}$. Let us define the sequence $\{\eta_{Q}\}_{Q\in{N}^{n}_{1}\setminus\hat{\cal I}}$ of positive numbers as follows: 1. $\forall P\in{N}^{n}_{1}\setminus\hat{\cal I}$ such that $|P|=1$, $\eta_{P}=1$ (such a multiindex exists). 2. $\forall Q\in{N}^{n}_{2}\setminus\hat{\cal I}$ with $\delta_{Q}\neq 0$, $$\delta_{Q}\eta_{Q}=\max_{{\begin{subarray}{c}Q_{j}\in{N}^{n}_{1},S\in{N}^{n}\\ Q_{1}+\cdots+Q_{p}+S=Q\end{subarray}}}{\eta_{Q_{1}}\cdots\eta_{Q_{p}}},$$ the maximum being taken over the sets of $p+1$, $1\leq p\leq|Q|$, multiindices $Q_{1},\ldots,Q_{p},S$ such that $\forall 1\leq j\leq p,\;Q_{j}\in{N}_{1}^{n},\;|Q_{j}|<|Q|$, $S\in{N}^{n}$. These sets are not empty. 3. $\forall Q\in{N}^{n}_{2}\setminus\hat{\cal I}$ with $\delta_{Q}=0$, $\eta_{Q}=0$. This sequence is well defined. In fact, if $Q\in{N}^{n}_{2}\setminus\hat{\cal I}$, then there exist multiindices $Q_{1},\ldots,Q_{p},S$ such that $Q=Q_{1}+\ldots+Q_{p}+S$, $\forall 1\leq j\leq p,\;Q_{j}\in{N}_{1}^{n},\;|Q_{j}|<|Q|,\;S\in{N}^{n}$. In this case, $\forall 1\leq j\leq p,\;\;Q_{j}\in{N}_{1}^{n}\setminus\hat{\cal I}$. The following lemmas are the key points. Lemma 2.3. [Sto94][Lemme 2.2] For all $Q\in{N}^{n}_{2}\setminus\hat{\cal I}$, we have $\tilde{\phi}_{Q}\leq\sigma_{Q}\eta_{Q}$. Lemma 2.4. [Sto94][Lemme 2.3] There exists a constant $c>0$ such that $\forall Q\in{N}_{2}^{n}\setminus\hat{\cal I},\;\;\eta_{Q}\leq c^{|Q|}$. 
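To make the majorant construction concrete, here is a small numerical sketch (our illustration, not part of the original argument): it computes the coefficients of the scalar majorant series $\sigma=a(y+\sigma)^{2}/(1-b(y+\sigma))$ in one variable, degree by degree, exactly as in the inductive definition of $\sigma_{Q}$ above. All names and the truncation order are our own choices.

```python
# Sketch: coefficients of the scalar (n = 1) majorant series
#   sigma(y) = a*(y + sigma)^2 / (1 - b*(y + sigma)),   sigma = O(y^2).
# sigma_k depends only on sigma_2, ..., sigma_{k-1}, so we can fill the
# coefficients in increasing degree, mirroring the inductive definition.

def mul(A, B, N):
    """Truncated product of power series given as coefficient lists of length N+1."""
    C = [0.0] * (N + 1)
    for i, ai in enumerate(A):
        if ai == 0.0:
            continue
        for j in range(0, N + 1 - i):
            C[i + j] += ai * B[j]
    return C

def majorant_coeffs(a, b, N):
    sigma = [0.0] * (N + 1)              # sigma_0 = sigma_1 = 0
    for k in range(2, N + 1):
        u = [0.0] * (N + 1)              # u = y + sigma (known part only)
        u[1] = 1.0
        for m in range(2, k):
            u[m] = sigma[m]
        u2 = mul(u, u, N)                # u^2
        geom = [0.0] * (N + 1)           # 1/(1 - b*u) = sum_m (b*u)^m, truncated
        geom[0] = 1.0
        p = [1.0] + [0.0] * N            # running power (b*u)^m
        for _ in range(N):
            p = mul(p, [b * c for c in u], N)
            geom = [g + q for g, q in zip(geom, p)]
        rhs = mul([a * c for c in u2], geom, N)
        sigma[k] = rhs[k]                # uses only sigma_2..sigma_{k-1}
    return sigma

coeffs = majorant_coeffs(1.0, 1.0, 12)
print(coeffs[2], coeffs[3])              # sigma_2 = 1, sigma_3 = 3
```

With $a=b=1$ one finds $\sigma_{2}=1$, $\sigma_{3}=3$, and the computed coefficients are non-negative with at most geometric growth, in line with Lemma 2.2.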
Let $\theta>0$ be such that $4\theta:=\min_{i,j}|\lambda_{i,j}|\leq 1$ (we can always assume this, even if this means using the inverse of one of the diffeomorphisms). If the ideal ${\cal I}$ is properly embedded, then we shall set $$4\theta:=\min_{1\leq i\leq l,j\in{\cal S}}|\lambda_{i,j}|\leq 1$$ where ${\cal S}$ denotes the set of variables not involved in any generator. In particular, we have the property that if $x^{Q}\not\in{\cal I}$ then $x_{s}x^{Q}\not\in{\cal I}$ for all $s\in{\cal S}$. By definition, $\eta_{Q}$ is a product of $1/\delta_{Q^{\prime}}$'s with $|Q^{\prime}|\leq|Q|$. Let $k$ be a non-negative integer. Let us define $\phi^{(k)}(Q)$ (resp. $\phi^{(k)}_{j}(Q)$) to be the number of $1/\delta_{Q^{\prime}}$'s present in this product and such that $0\neq\delta_{Q^{\prime}}<\theta\omega_{k}(D,{\cal I})$ (resp. and such that $\delta_{Q^{\prime}}=|\delta_{j,Q^{\prime}}|$). Lemma 2.4 is a consequence of the following proposition. Proposition 2.1. [Sto94][lemme 2.8] For all $Q\in{{N}}^{n}_{2}\setminus\hat{\cal I}$, we have $\phi^{(k)}(Q)\leq 2n\frac{|Q|}{2^{k}}$ if $|Q|\geq 2^{k}+1$; and $\phi^{(k)}(Q)=0$ if $|Q|\leq 2^{k}$. In fact, $\phi^{(k)}(Q)$ bounds the number of $1/\delta_{Q^{\prime}}$'s appearing in the product defining $\eta_{Q}$ and such that $\theta\omega_{k+1}(D,{\cal I})\leq\delta_{Q^{\prime}}<\theta\omega_{k}(D,{\cal I})$. Proof of Lemma 2.4. Let $r$ be the integer such that $2^{r}+1\leq|Q|<2^{r+1}+1$. 
Then we have $$\eta_{Q}\leq\prod_{k=0}^{r}{\left(\frac{1}{\theta\omega_{k+1}(D,{\cal I})}\right)^{\phi^{(k)}(Q)}}.$$ Taking logarithms and applying Proposition 2.1, we obtain $$\ln\eta_{Q}\leq\sum_{k=0}^{r}{2n\frac{|Q|}{2^{k}}\left(\ln\frac{1}{\theta\omega_{k+1}(D)}\right)}\leq|Q|\left(-2n\sum_{k\geq 0}{\frac{\ln\omega_{k+1}(D)}{2^{k}}}+2n\ln\theta^{-1}\sum_{k\geq 0}{\frac{1}{2^{k}}}\right).$$ Since the family $D$ is diophantine, we obtain $\eta_{Q}\leq c^{|Q|}$ for some positive constant $c$. ∎ For any positive integer $k$ and any $1\leq j\leq n$, let us consider the function defined on ${{N}}^{n}_{2}\setminus\hat{\cal I}$ by $$\forall Q\in{N}_{2}^{n}\setminus\hat{\cal I},\;\;\;\psi^{(k)}_{j}(Q)=\left\{\begin{array}[]{l}1\;\;\;\;\mbox{ if }\;\delta_{Q}=|\delta_{j,Q}|\neq 0\;\mbox{ and }\;|\delta_{j,Q}|<\theta\omega_{k}(D,{\cal I})\\ 0\;\;\;\;\mbox{ if }\;\delta_{Q}=0\;\mbox{ or }\;\delta_{Q}\neq|\delta_{j,Q}|\;\mbox{ or }\;|\delta_{j,Q}|\geq\theta\omega_{k}(D,{\cal I})\\ \end{array}\right.$$ Then we have $$0\leq\phi_{j}^{(k)}(Q)=\psi^{(k)}_{j}(Q)+\max_{{\begin{subarray}{c}Q_{j}\in{N}^{n}_{1},S\in{N}^{n}\\ Q_{1}+\cdots+Q_{p}+S=Q\end{subarray}}}{\left(\phi_{j}^{(k)}(Q_{1})+\cdots+\phi_{j}^{(k)}(Q_{p})\right)}.$$ The proof of Proposition 2.1 is identical to the proof of [Sto94][lemme 2.8] except that we have to use the following version of [Sto94][lemme 2.7]. Lemma 2.5. Let $Q\in{{N}}^{n}_{2}\setminus\hat{\cal I}$ be such that $\psi^{(k)}_{j}(Q)=1$. If $Q=P+P^{\prime}$ with $(P,P^{\prime})\in{N}^{n}_{1}\times{N}^{n}_{2}$ and $|P|\leq 2^{k}-1$, then $(P,P^{\prime})\in({N}^{n}_{1}\setminus\hat{\cal I})\times({N}^{n}_{2}\setminus\hat{\cal I})$ and $\psi^{(k)}_{j}(P^{\prime})=0$. Proof. Clearly, if $Q=P+P^{\prime}\in{{N}}^{n}_{2}\setminus\hat{\cal I}$ then $(P,P^{\prime})\in({N}^{n}_{1}\setminus\hat{\cal I})\times({N}^{n}_{2}\setminus\hat{\cal I})$. 
There are two cases to consider: 1. If $\delta_{P^{\prime}}\neq|\delta_{j,P^{\prime}}|$ or $\delta_{P^{\prime}}=0$ then $\psi^{(k)}_{j}(P^{\prime})=0$, by definition. 2. If $\delta_{P^{\prime}}=|\delta_{j,P^{\prime}}|\neq 0$, assume that $\delta_{P^{\prime}}<\theta\omega_{k}(D,{\cal I})$. Then, for all $1\leq i\leq l$, we have $$|\lambda_{i}^{P^{\prime}}|>|\lambda_{i,j}|-\theta\omega_{k}(D,{\cal I})\geq 4\theta-2\theta=2\theta.$$ It follows that, for all $1\leq i\leq l$, $$2\theta\omega_{k}(D,{\cal I})>|\lambda_{i}^{Q}-\lambda_{i,j}|+|\lambda_{i}^{P^{\prime}}-\lambda_{i,j}|>|\lambda_{i}^{Q}-\lambda_{i}^{P^{\prime}}|=|\lambda_{i}^{P^{\prime}}||\lambda_{i}^{P}-1|.$$ If ${\cal I}$ is properly embedded, for all $a\in{\cal S}$, we have $x_{a}x^{P}\not\in{\cal I}$. Therefore, for all $1\leq i\leq l$, we have $$2\theta\omega_{k}(D,{\cal I})>2\theta|\lambda_{i,a}|^{-1}|\lambda_{i}^{P+E_{a}}-\lambda_{i,a}|>2\theta|\lambda_{i,a}|^{-1}\omega_{k}(D,{\cal I}).$$ This contradicts the fact that $\min_{1\leq i\leq l,a\in{\cal S}}|\lambda_{i,a}|\leq 1$. If ${\cal I}$ is not properly embedded, then we obtain $$2\theta\omega_{k}(D)>2\theta|\lambda_{i,a}|^{-1}\omega_{k}(D)$$ for all $1\leq a\leq n$. This is still a contradiction. Hence, we have shown that $\psi^{(k)}_{j}(P^{\prime})=0$. ∎ 3 Family of totally real $n$-manifolds in $({C}^{n},0)$ Let us consider a family $M:=\{M_{i}\}_{i=1,\ldots,m}$ of real analytic totally real $n$-submanifolds of ${C}^{n}$ passing through the origin. Locally, each $M_{i}$ is the fixed point set of an anti-holomorphic involution $\rho_{i}$: $M_{i}=FP(\rho_{i})$ and $\rho_{i}\circ\rho_{i}=Id$. This means that $$\rho_{i}(z):=B_{i}\bar{z}+R_{i}(\bar{z})$$ where $R_{i}$ is a germ of holomorphic function at the origin with $R_{i}(0)=0$ and $DR_{i}(0)=0$. 
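A toy illustration of this setup, which we add for orientation (it is not in the original text): for $n=1$, a real line through the origin making angle $\theta_{j}$ with ${R}$ is the fixed point set of a linear anti-holomorphic involution.

```latex
% Toy case n = 1: the line M_j = e^{i\theta_j}R satisfies M_j = FP(\rho_j) with
\[
\rho_j(z)=e^{2i\theta_j}\bar z,\qquad B_j=e^{2i\theta_j},\qquad B_j\bar B_j=1,
\]
% and the composition of two such involutions is the rotation
\[
F_{1,2}:=\rho_1\circ\rho_2:\;z\;\longmapsto\;e^{2i(\theta_1-\theta_2)}\,z,
\]
% so here R_j = 0 and f_{1,2} = 0: the group G is already linear. The results
% of this section measure how far a nonlinear family can be brought back to
% this linear model.
```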
Each matrix $B_{i}$ is invertible and satisfies $B_{i}\bar{B}_{i}=Id$. The tangent space, at the origin, of $M_{i}$ is the totally real $n$-plane $$\{z=B_{i}\bar{z}\}.$$ We assume that they are all pairwise distinct. Their intersection at the origin is the set $$\left\{z\in{C}^{n}\;|\;B_{i}\bar{z}=z,\;i=1,\ldots,m\right\}\subset\left\{z\in{C}^{n}\;|\;B_{i}\bar{B}_{j}z=z,\;i,j=1,\ldots,m\right\}.$$ It is contained in the common eigenspace of the $B_{i}\bar{B}_{j}$'s associated to the eigenvalue $1$. We shall not assume that this space is reduced to $0$. Let us consider the group $G$ generated by the germs of holomorphic diffeomorphisms of $({C}^{n},0)$ $F_{i,j}:=\rho_{i}\circ\rho_{j}$, $1\leq i,j\leq m$. Let $D_{i,j}:=B_{i}\bar{B}_{j}$ be the linear part at the origin of $F_{i,j}$. Let us set $$F_{i,j}:=D_{i,j}z+f_{i,j}(z)$$ where $f_{i,j}$ is a germ of holomorphic function at the origin with $f_{i,j}(0)=0$ and $Df_{i,j}(0)=0$. Let us write out the relations $F_{i,j}=\rho_{i}\circ\rho_{j}$ and $\rho_{i}\circ\rho_{i}=Id$. We obtain $$f_{i,j}(z)=B_{i}\bar{R}_{j}(z)+R_{i}(\bar{\rho}_{j})$$ (2) $$0=B_{i}\bar{R}_{i}(z)+R_{i}(\bar{\rho}_{i}).$$ (3) By multiplying the first equation by $\bar{B}_{i}$, we obtain $$\bar{R}_{j}(z)=\bar{B}_{i}f_{i,j}(z)-\bar{B}_{i}R_{i}(\bar{\rho}_{j}).$$ Hence, we have $$0=B_{j}\bar{B}_{i}f_{i,j}(z)-B_{j}\bar{B}_{i}R_{i}(\bar{\rho}_{j})+B_{i}\bar{f}_{i,j}(\bar{\rho}_{j})-B_{i}\bar{R}_{i}(\rho_{j}\circ\rho_{j}).$$ Let us multiply by $\bar{B}_{i}$ on the left and take conjugates. 
We obtain $$0=D_{i,j}B_{i}\bar{f}_{i,j}(\bar{z})-D_{i,j}B_{i}\bar{R}_{i}(\rho_{j})+f_{i,j}(\rho_{j})-R_{i}(\bar{z}).$$ On the other hand, by evaluating equation $(3)$ at $\bar{\rho}_{j}$, we obtain $$0=B_{i}\bar{R}_{i}(\bar{\rho}_{j})+R_{i}(\bar{F}_{i,j}).$$ Finally, we obtain $$R_{i}(\bar{z})-D_{i,j}R_{i}(\bar{F}_{i,j})=D_{i,j}B_{i}\bar{f}_{i,j}(\bar{z})+f_{i,j}(\rho_{j}).$$ (4) Definition 3.1. The $\rho_{i}$'s are simultaneously normalizable whenever $R_{i}(\bar{z})-D_{i,j}R_{i}(\bar{D}_{i,j}\bar{z})=0$ for all $1\leq i,j\leq m$. Remark 3.1. If the group $G$ is holomorphically linearizable at the origin then the $\rho_{i}$'s are simultaneously normalizable. Moreover, assume the $D_{i,j}$'s are simultaneously diagonalizable and let us set $D_{i,j}=\text{diag}(\mu_{i,j,k})$. Then, for any $1\leq i\leq m$ and any $1\leq k\leq n$, the $k$-th component $\rho_{i,k}$ of $\rho_{i}$ can be written as $$\left(\rho_{i}(z)-B_{i}\bar{z}\right)_{k}=\sum_{\begin{subarray}{c}Q\in{N}^{n}_{2}\\ \forall j,\;\bar{\mu}_{i,j}^{Q}=\mu_{i,j,k}^{-1}\end{subarray}}\rho_{i,k,Q}\bar{z}^{Q}.$$ Here, $(f)_{k}$ denotes the $k$-th component of $f$. As a consequence, we have Theorem 3.1. Let us assume that the group $G$ associated to the family of totally real submanifolds $M$ is a semi-simple Lie group. Then the $\rho_{i}$'s are simultaneously and holomorphically normalizable in a neighbourhood of the origin. Proof. It is classical [Kus67, GS68, CG97] that if the Lie group $G$ of germs of diffeomorphisms at a common fixed point is semi-simple then it is holomorphically linearizable in a neighbourhood of the origin. Then, apply the previous remark 3.1. ∎ Definition 3.2. We shall say that such a family $M=\{M_{i}\}_{i=1,\ldots,m}$ of totally real $n$-submanifolds of $({C}^{n},0)$ intersecting at the origin is commutative if the group $G$ is abelian. From now on, we shall assume that $M$ is commutative and that the family $D$ of linear parts of the group $G$ at the origin is diagonal. 
In other words, $D_{i,j}=\text{diag}(\mu_{i,j,k})$. Let ${\cal I}$ be a monomial ideal of ${\cal O}_{n}$. It is generated by some monomials $x^{R_{1}},\ldots,x^{R_{p}}$. We shall denote by $\bar{\cal I}$ the ideal of ${C}[[\bar{x}_{1},\ldots,\bar{x}_{n}]]$ generated by $\bar{x}^{R_{1}},\ldots,\bar{x}^{R_{p}}$. Definition 3.3. 1. We shall say that the family $M$ of manifolds is non-resonant whenever, for all $1\leq i\leq m$, $1\leq k\leq n$ and for all $Q\in{N}_{2}^{n}$, there exists a $1\leq j\leq m$ such that $\bar{\mu}_{i,j}^{Q}\neq\mu_{i,j,k}^{-1}$. 2. We shall say that the family $M$ of manifolds is non-resonant on ${\cal I}$ whenever, for every monomial $z^{Q}$ not belonging to ${\cal I}$ and for every couple $(i,k)$, there exists $j$ such that $\bar{\mu}_{i,j}^{Q}\neq\mu_{i,j,k}^{-1}$. Theorem 3.2. Assume that the group $G$ is abelian. Let ${\cal I}$ be a monomial ideal (resp. properly embedded) left invariant by the family $D:=\{D_{i,j}\}$ and the $B_{i}$'s. Assume that $D$ is diophantine (resp. on ${\cal I}$) and that $M$ is non-resonant on ${\cal I}$. Assume $G$ is formally linearizable on ${\cal I}$. Then, the family $F$ is holomorphically linearizable on ${\cal I}$. Moreover, in these coordinates, the $\rho_{i}$'s are anti-linearized on $\bar{\cal I}$. Proof. By theorem 2.1, the family $F$ is holomorphically linearized on ${\cal I}$. Let us show that, in these coordinates, the $\rho_{i}$'s are anti-linearized on $\bar{\cal I}$. Let us prove by induction on $|Q|\geq 2$ that $\{\rho_{i,k}\}_{Q}=0$ whenever $z^{Q}$ does not belong to ${\cal I}$ and $\bar{\mu}_{i,j}^{Q}\neq\mu_{i,j,k}^{-1}$ for some $j$. We recall that $\{\rho_{i,k}\}_{Q}$ denotes the coefficient of $\bar{z}^{Q}$ in the Taylor expansion of $\rho_{i,k}$. Assume this holds up to order $N$. Let $Q\in{N}_{2}^{n}$ with $|Q|=N+1$. Let us compute $\{\rho_{i,k}\}_{Q}$. 
Using equation $(4)$, we obtain $$R_{i}(\bar{z})-D_{i,j}R_{i}(\bar{D}_{i,j}\bar{z})=D_{i,j}B_{i}\bar{f}_{i,j}(\bar{z})+f_{i,j}(B_{j}\bar{z})+D_{i,j}\left(R_{i}(\bar{F}_{i,j})-R_{i}(\bar{D}_{i,j}\bar{z})\right)+\left(f_{i,j}(\rho_{j})-f_{i,j}(B_{j}\bar{z})\right).$$ Moreover, $F$ is linearized on $V({\cal I})$. Hence, both $\{D_{i,j}B_{i}\bar{f}_{i,j}(\bar{z})+f_{i,j}(B_{j}\bar{z})\}_{Q}$ and $\{R_{i}(\bar{F}_{i,j})-R_{i}(\bar{D}_{i,j}\bar{z})\}_{Q}$ vanish when $z^{Q}$ does not belong to ${\cal I}$. Hence, if $z^{Q}\not\in{\cal I}$, then we have $$(1-\mu_{i,j,k}\bar{\mu}^{Q}_{i,j})R_{Q,i,k}=\{f_{i,j,k}(\rho_{j})-f_{i,j,k}(B_{j}\bar{z})\}_{Q}.$$ But, by induction, we have $$\{f_{i,j,k}(\rho_{j})-f_{i,j,k}(B_{j}\bar{z})\}_{Q}=\{Df_{i,j,k}(B_{j}\bar{z})R_{j}+D^{2}f_{i,j,k}(B_{j}\bar{z})R_{j}^{2}+\cdots\}_{Q}=0.$$ Therefore, since $(1-\mu_{i,j,k}\bar{\mu}^{Q}_{i,j})\neq 0$, we have $R_{Q,i,k}=0$. That is, $$\rho_{i}(z)=B_{i}\bar{z}\mod\bar{\cal I}.$$ ∎ Corollary 3.1. Under the assumptions of theorem 3.2, there exists a complex analytic subvariety ${\cal S}$ passing through the origin and intersecting each totally real submanifold $M_{i}$. In a good holomorphic coordinate system, ${\cal S}$ is a finite intersection of finite unions of complex hyperplanes defined by complex coordinate subspaces: $${\cal S}=\cap_{i}\cup_{j}\{z_{i_{j}}=0\}.$$ The intersection $M_{k}\cap{\cal S}$ is then given by $$M_{k}\cap{\cal S}=\left\{z\in\cap_{i}\cup_{j}\{z_{i_{j}}=0\}\,|\;B_{k}\bar{z}=z\right\}.$$ Proof. The complex analytic subvariety ${\cal S}$ is nothing but $V({\cal I})$. The trace of it on $M_{i}$ is the fixed point set of $\rho_{i}$ belonging to $V({\cal I})$. It is nonempty since it contains the origin. According to the previous theorem, the $\rho_{i}$'s are holomorphically and simultaneously linearizable on $V({\cal I})$. 
By assumptions, ${\cal I}$ is a monomial ideal, so $V({\cal I})$ is a finite intersection of finite unions of hyperplanes defined by coordinate subspaces: $${\cal S}=\cap_{i}\cup_{j}\{z_{i_{j}}=0\}.$$ ∎ Corollary 3.2. Assume that the family $M$ is non-resonant, $G$ is formally linearizable and $D$ is diophantine. Then, in a good holomorphic coordinate system, $M$ is composed of linear totally real subspaces $$\bigcup_{i}\left\{z\in{C}^{n}\,|\;B_{i}\bar{z}=z\right\}.$$ Remark 3.2. If the family $M$ is non-resonant and if, for all $(i,k)$, one of the eigenvalues $\mu_{i,j,k}$ belongs to the unit circle, then $G$ is formally linearizable. In fact, for any $Q\in{N}_{2}^{n}$, any $1\leq i\leq m$, any $1\leq k\leq n$, there exists $1\leq j\leq m$ such that $$\bar{\mu}_{i,j}^{Q}\neq\mu_{i,j,k}^{-1}=\bar{\mu}_{i,j,k}.$$ This means precisely that $D$ is non-resonant in the classical sense. There is no obstruction to formal linearization. Corollary 3.3. Let ${\cal I}$ be the ideal generated by the monomials $x^{R_{1}},\ldots,x^{R_{p}}$ generating the ring $\hat{\cal O}_{n}^{D}$ of formal invariants of $D$. We assume that the non-linear centralizer of $D$ is generated by the same monomials. If $D$ is diophantine on ${\cal I}$ then, in a good holomorphic coordinate system, we have $$V({\cal I})=\{z\in({C}^{n},0)\;|\;z^{R_{1}}=\cdots=z^{R_{p}}=0\},$$ and $$\rho_{i|V({\cal I})}(z)=B_{i}\bar{z}.$$ Corollary 3.4. Let us consider two totally real $n$-manifolds of $({C}^{n},0)$ intersecting non-transversally at the origin. Assume that the first $l$ eigenvalues of $DF(0)$ are equal to one. Let $\mu^{R_{1}}=1,\ldots,\mu^{R_{p}}=1$ be the other generators (i.e. $R_{i}\in{N}^{n}$ and $|R_{i}|>1$) of the resonance relations. 
Let $$V({\cal I})=\{z\in({C}^{n},0)\;|\;z_{1}=\cdots=z_{l}=z^{R_{1}}=\cdots=z^{R_{p}}=0\}.$$ If $DF(0)$ is diophantine on $V({\cal I})$, then in a good holomorphic coordinate system, $$M_{i}\cap V({\cal I})=\left\{z\in V({\cal I})\,|\;B_{i}\bar{z}=z\right\},\quad i=1,2.$$ 4 Real analytic manifolds with CR singularities Let us consider an $(n+p-1)$-dimensional real analytic submanifold $M$ of ${C}^{n}$ of the form $$\begin{cases}z_{1}=x_{1}+iy_{1}\\ \vdots\\ z_{p}=x_{p}+iy_{p}\\ y_{p+1}=F_{p+1}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ \vdots\\ y_{n-1}=F_{n-1}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ z_{n}=G(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\end{cases}$$ (5) where we have set $z^{\prime}=(z_{1},\ldots,z_{p})$, $z^{\prime\prime}=(z_{p+1},\ldots,z_{n-1})$. The real analytic real (resp. complex) valued functions $F_{i}$ (resp. $G$) are assumed to vanish at the origin together with their first derivatives. The tangent space at the origin contains the complex subspace defined by $z^{\prime}$. First of all, we shall show, under some assumptions, that there is a good holomorphic coordinate system in which the $F_{i}$'s are of order greater than or equal to $3$ and the $2$-jet of $G$ depends only on $z^{\prime}$ and its conjugate. When $p=1$, this was done by E. Bishop ($n=2$) [Bis65], and also by J. Moser and S. Webster ($n\geq 2$) [MW83]. 4.1 Preparation Let us consider the $2$-jet $G^{2}$ of $G$. It can be written as the sum of a quadratic polynomial $$Q(z^{\prime},\bar{z}^{\prime}):=\sum d_{i,l}z^{\prime}_{i}\bar{z}^{\prime}_{l}+\sum_{1\leq i,l\leq p}e_{i,l}z^{\prime}_{i}z^{\prime}_{l}+\sum_{1\leq i,l\leq p}f_{i,l}\bar{z}^{\prime}_{i}\bar{z}^{\prime}_{l}$$ and $$\Sigma:=\sum a_{\alpha,\beta}x^{\prime\prime}_{\alpha}x^{\prime\prime}_{\beta}+\sum b_{\alpha,i}x^{\prime\prime}_{\alpha}z^{\prime}_{i}+\sum c_{\alpha,i}x^{\prime\prime}_{\alpha}\bar{z}^{\prime}_{i}.$$ Definition 4.1. 
Let $$F:=\left(\begin{matrix}f_{1,1}&\ldots&f_{p,1}\\ \vdots&&\vdots\\ f_{1,p}&\ldots&f_{p,p}\end{matrix}\right)$$ be the matrix associated to $Q$. The symmetric matrix $S=\frac{1}{2}(F+^{t}F)$ will be called the Bishop matrix. First of all, by a linear change of the coordinates $z^{\prime}$, we can diagonalize the symmetric matrix $S$. Our first non-degeneracy condition is that the sesquilinear part $\tilde{H}:=\sum d_{i,l}z^{\prime}_{i}\bar{z}^{\prime}_{l}$ of $Q$ does not vanish identically. Let $$\|\tilde{H}\|:=\sup_{z^{\prime}\in{C}^{p}\setminus\{0\}}\frac{|\sum d_{i,l}z^{\prime}_{i}\bar{z}^{\prime}_{l}|}{\|z^{\prime}\|^{2}}\neq 0$$ denote its norm. Let us set $Z_{n}=z_{n}/\|\tilde{H}\|$. Then the sesquilinear part $H$ of the new $Q$ is of norm $1$. By setting $$Z_{n}:=z_{n}+\sum(f_{i,l}-e_{i,l})z^{\prime}_{i}z^{\prime}_{l},$$ we transform $Q$ into the following form: $$Q(z^{\prime},\bar{z}^{\prime})=H(z^{\prime},\bar{z}^{\prime})+\sum_{i=1}^{p}\gamma_{i}((z^{\prime}_{i})^{2}+(\bar{z}^{\prime}_{i})^{2}).$$ (6) Definition 4.2. The eigenvalues $\gamma_{1},\ldots,\gamma_{p}$ of the so normalized Bishop matrix $S$ will be called the generalized Bishop invariants. Let us show that, by a holomorphic change of coordinates, we can get rid of $\Sigma$. 
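As an illustration of Definitions 4.1 and 4.2 in the classical case $p=1$, $n=2$ studied by Bishop [Bis65] (our added example, using the normalization just described):

```latex
% p = 1, n = 2: after the normalization above, the 2-jet of the surface is
\[
z_2 \;=\; z_1\bar z_1+\gamma\left(z_1^{2}+\bar z_1^{2}\right),
\qquad \gamma\geq 0 .
\]
% Here H = z_1\bar z_1 has norm 1, the Bishop matrix is the 1x1 matrix
% S = (\gamma), and the single generalized Bishop invariant is \gamma.
```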
First of all, let us get rid of the third term of $\Sigma$ by a change of variables of the form $$\zeta^{\prime}_{i}\mapsto z^{\prime}_{i}:=\zeta^{\prime}_{i}+\sum_{\gamma=p+1}^{n-1}A_{i,\gamma}z^{\prime\prime}_{\gamma},\quad i=1,\ldots,p.$$ We have $$G(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})=\sum a_{\alpha,\beta}x^{\prime\prime}_{\alpha}x^{\prime\prime}_{\beta}+\sum b_{\alpha,i}x^{\prime\prime}_{\alpha}\left(\zeta^{\prime}_{i}+\sum_{\gamma=p+1}^{n-1}A_{i,\gamma}z^{\prime\prime}_{\gamma}\right)+\sum c_{\alpha,i}x^{\prime\prime}_{\alpha}\left(\bar{\zeta}^{\prime}_{i}+\sum_{\gamma=p+1}^{n-1}\bar{A}_{i,\gamma}\bar{z}^{\prime\prime}_{\gamma}\right)+\sum d_{i,l}\left(\zeta^{\prime}_{i}+\sum_{\gamma=p+1}^{n-1}A_{i,\gamma}z^{\prime\prime}_{\gamma}\right)\left(\bar{\zeta}^{\prime}_{l}+\sum_{\gamma=p+1}^{n-1}\bar{A}_{l,\gamma}\bar{z}^{\prime\prime}_{\gamma}\right)+\sum e_{i,l}\left(\zeta^{\prime}_{i}+\sum_{\gamma=p+1}^{n-1}A_{i,\gamma}z^{\prime\prime}_{\gamma}\right)\left(\zeta^{\prime}_{l}+\sum_{\gamma=p+1}^{n-1}A_{l,\gamma}z^{\prime\prime}_{\gamma}\right)+\sum f_{i,l}\left(\bar{\zeta}^{\prime}_{i}+\sum_{\gamma=p+1}^{n-1}\bar{A}_{i,\gamma}\bar{z}^{\prime\prime}_{\gamma}\right)\left(\bar{\zeta}^{\prime}_{l}+\sum_{\gamma^{\prime}=p+1}^{n-1}\bar{A}_{l,\gamma^{\prime}}\bar{z}^{\prime\prime}_{\gamma^{\prime}}\right)+\text{higher order terms.}$$ The coefficient of $x_{\alpha}^{\prime\prime}\bar{\zeta}_{i}$ is $$c_{\alpha,i}+\sum_{s}d_{s,i}A_{s,\alpha}+\sum_{s}(f_{i,s}+f_{s,i})\bar{A}_{s,\alpha}.$$ We assume that we can solve the set of equations $$-c_{\alpha,i}=\sum_{s}d_{s,i}A_{s,\alpha}+\sum_{s}(f_{i,s}+f_{s,i})\bar{A}_{s,\alpha},\qquad-\bar{c}_{\alpha,i}=\sum_{s}\bar{d}_{s,i}\bar{A}_{s,\alpha}+\sum_{s}(\bar{f}_{i,s}+\bar{f}_{s,i})A_{s,\alpha}.$$ For each $p+1\leq\alpha\leq n-1$, let us set $$A_{\alpha}:=\left(\begin{matrix}A_{1,\alpha}\\ \vdots\\ A_{p,\alpha}\end{matrix}\right),\;C_{\alpha}:=\left(\begin{matrix}c_{\alpha,1}\\ \vdots\\ c_{\alpha,p}\end{matrix}\right),\;D:=\left(\begin{matrix}d_{1,1}&\ldots&d_{p,1}\\ \vdots&&\vdots\\ d_{1,p}&\ldots&d_{p,p}\end{matrix}\right).$$ The previous equations can be written as $$\begin{cases}-C_{\alpha}=DA_{\alpha}+(F+^{t}F)\bar{A}_{\alpha}\\ -\bar{C}_{\alpha}=\bar{D}\bar{A}_{\alpha}+(\bar{F}+^{t}\bar{F})A_{\alpha}\end{cases}$$ (7) These systems can be solved whenever $$\det\left(4^{-1}S^{-1}D\bar{S}^{-1}\bar{D}-I_{p}\right)\neq 0,$$ (8) where $S:=\frac{1}{2}(F+^{t}F)$ is the Bishop matrix. Remark 4.1. When $p=1$, this condition reads $4\gamma^{2}\neq 1$ where $\gamma$ is the Bishop invariant [Bis65]. Now let us remove the first two terms of $\Sigma$ (once we have done the previous change of coordinates, the coefficients appearing in $\Sigma$ may have changed). Let us set $$Z_{n}:=z_{n}-\sum a_{\alpha,\beta}z^{\prime\prime}_{\alpha}z^{\prime\prime}_{\beta}-\sum b_{\alpha,i}z^{\prime\prime}_{\alpha}z^{\prime}_{i}.$$ Hence, we have $$Z_{n}=G(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})-\sum a_{\alpha,\beta}z^{\prime\prime}_{\alpha}z^{\prime\prime}_{\beta}-\sum b_{\alpha,i}z^{\prime\prime}_{\alpha}z^{\prime}_{i}=Q(z^{\prime},\bar{z}^{\prime})+\Sigma-\sum a_{\alpha,\beta}z^{\prime\prime}_{\alpha}z^{\prime\prime}_{\beta}-\sum b_{\alpha,i}z^{\prime\prime}_{\alpha}z^{\prime}_{i}+\text{higher order terms}.$$ But we have $$z^{\prime\prime}_{\alpha}=x^{\prime\prime}_{\alpha}+iF_{\alpha}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})$$ where $F_{\alpha}$ is of order greater than or equal to two. Hence, the $2$-jet of $Z_{n}$ is precisely $Q(z^{\prime},\bar{z}^{\prime})$. Let us consider the $F_{\alpha}$'s. 
As above, the $2$-jet of $F_{\alpha}$ can be written as the sum of $$Q_{\alpha}(z^{\prime},\bar{z}^{\prime}):=\sum d_{\alpha,i,l}z^{\prime}_{i}\bar{z}^{\prime}_{l}+\sum e_{\alpha,i,l}z^{\prime}_{i}z^{\prime}_{l}+\sum\bar{e}_{\alpha,i,l}\bar{z}^{\prime}_{i}\bar{z}^{\prime}_{l}$$ and $$\Sigma_{\alpha}:=\sum a_{\alpha,\gamma,\beta}x^{\prime\prime}_{\gamma}x^{\prime\prime}_{\beta}+2\text{Re}\sum b_{\alpha,\gamma,i}x^{\prime\prime}_{\gamma}z^{\prime}_{i},$$ where $a_{\alpha,\gamma,\beta}$ is a real number and $d_{\alpha,i,l}=\bar{d}_{\alpha,l,i}$. Let us show that, under one more assumption, we can get rid of the $z^{\prime}_{i}\bar{z}^{\prime}_{j}$ terms in the $Q_{\alpha}$'s, $\alpha=p+1,\ldots,n-1$. In fact, let us set $$z_{\alpha}\mapsto Z_{\alpha}:=z_{\alpha}+ib_{\alpha}z_{n},\quad\alpha=p+1,\ldots,n-1.$$ In the new coordinates, the $2$-jet of $Z_{\alpha}$ is $$iQ_{\alpha}(z^{\prime},\bar{z}^{\prime})+ib_{\alpha}Q(z^{\prime},\bar{z}^{\prime})+i\Sigma_{\alpha}.$$ Thus, in order that the coefficients of the $z^{\prime}_{i}\bar{z}^{\prime}_{j}$'s vanish, one must satisfy the second non-degeneracy condition: each $Q_{\alpha}$ is proportional to $Q$. (9) Let us show that we can get rid of the quadratic part of $F_{\alpha}$ by using the following change of coordinates: $$z_{\alpha}\mapsto Z_{\alpha}:=z_{\alpha}-2i\sum b_{\alpha,\gamma,i}z^{\prime\prime}_{\gamma}z^{\prime}_{i}-i\sum a_{\alpha,\gamma,\beta}z^{\prime\prime}_{\gamma}z^{\prime\prime}_{\beta}-2i\sum e_{\alpha,i,l}z^{\prime}_{i}z^{\prime}_{l}.$$ In fact, we have $$\text{Im}(Z_{\alpha})=\text{Im}\left(z_{\alpha}-2i\sum b_{\alpha,\gamma,i}z^{\prime\prime}_{\gamma}z^{\prime}_{i}-i\sum a_{\alpha,\gamma,\beta}z^{\prime\prime}_{\gamma}z^{\prime\prime}_{\beta}-2i\sum e_{\alpha,i,l}z^{\prime}_{i}z^{\prime}_{l}\right).$$ The $2$-jet of the right hand side is $$Q_{\alpha}(z^{\prime},\bar{z}^{\prime})+\Sigma_{\alpha}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})+\text{Im}\left(-2i\sum b_{\alpha,\gamma,i}x^{\prime\prime}_{\gamma}z^{\prime}_{i}-i\sum a_{\alpha,\gamma,\beta}x^{\prime\prime}_{\gamma}x^{\prime\prime}_{\beta}-2i\sum e_{\alpha,i,l}z^{\prime}_{i}z^{\prime}_{l}\right).$$ It is equal to $$\sum d_{\alpha,i,l}z^{\prime}_{i}\bar{z}^{\prime}_{l}$$ since $$\text{Im}\left(2i\sum e_{\alpha,i,l}z^{\prime}_{i}z^{\prime}_{l}\right)=2\text{Re}\sum e_{\alpha,i,l}z^{\prime}_{i}z^{\prime}_{l}.$$ We summarize these results in the following lemma. Lemma 4.1. Under assumptions $(8)$ and $(9)$, the manifold $(5)$ can be transformed, by a holomorphic change of variables, to $$\begin{cases}z_{1}=x_{1}+iy_{1}\\ \vdots\\ z_{p}=x_{p}+iy_{p}\\ y_{p+1}=f_{p+1}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ \vdots\\ y_{n-1}=f_{n-1}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ z_{n}=Q(z^{\prime},\bar{z}^{\prime})+g(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\end{cases}$$ (10) where the $f_{i}$'s and $g$ are germs of real analytic functions at the origin and of order greater than or equal to $3$ there. 
The quadratic polynomial $Q$ is of the form $$Q(z^{\prime},\bar{z}^{\prime})=\sum d_{i,l}z^{\prime}_{i}\bar{z}^{\prime}_{l}+\sum\gamma_{i}((z^{\prime}_{i})^{2}+(\bar{z}^{\prime}_{i})^{2}),$$ the norm of the sesquilinear part of $Q$ being $1$. In the next two sections, we shall adapt the construction of Moser and Webster to our context. 4.2 Complexification Let us complexify such a manifold $M$ by replacing $\bar{z}_{i}$ by $w_{i}$ in order to obtain a complex analytic $(n+p-1)$-manifold ${\cal M}$ of ${C}^{2n}$: $$\begin{cases}2x_{\alpha}=z_{\alpha}+w_{\alpha},\\ z_{\alpha}-w_{\alpha}=2iF_{\alpha}(z^{\prime},w^{\prime},x^{\prime\prime})=2i\bar{F}_{\alpha}(w^{\prime},z^{\prime},x^{\prime\prime}),\\ z_{n}=G(z^{\prime},w^{\prime},x^{\prime\prime}),\\ w_{n}=\bar{G}(w^{\prime},z^{\prime},x^{\prime\prime}),\\ \end{cases}$$ (11) where $\alpha$ ranges from $p+1$ to $n-1$. In this situation, we have $G(z^{\prime},w^{\prime},x^{\prime\prime})=Q(z^{\prime},w^{\prime})+g(z^{\prime},w^{\prime},x^{\prime\prime})$ where $g$ as well as the $F_{\alpha}$'s are of order greater than or equal to $3$. As usual, if $G(y)=\sum_{Q}g_{Q}y^{Q}$ is a formal power series in ${C}^{k}$, then $\bar{G}$ denotes the formal power series $\sum_{Q}\bar{g}_{Q}y^{Q}$. These equations imply that $$z_{\alpha}=x_{\alpha}+iF_{\alpha}(z^{\prime},w^{\prime},x^{\prime\prime});\quad w_{\alpha}=x_{\alpha}-iF_{\alpha}(z^{\prime},w^{\prime},x^{\prime\prime}).$$ The variables $(z^{\prime},w^{\prime},x^{\prime\prime})$ may be used as complex coordinates. Let us define the anti-holomorphic involution $\rho$ of ${C}^{2n}$ by $$\rho(z,w)=(\bar{w},\bar{z}).$$ A complex analytic manifold ${\cal M}$ of ${C}^{2n}$ comes from a manifold $M$ whenever it is preserved by $\rho$. In this case, $M={\cal M}\cap\text{Fix}(\rho)$. The restriction to ${\cal M}$ of this map is the anti-holomorphic involution $\rho(z^{\prime},w^{\prime},x^{\prime\prime})=(\bar{w}^{\prime},\bar{z}^{\prime},\bar{x}^{\prime\prime})$. 
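Continuing the $p=1$ illustration (ours): complexifying the Bishop quadric $z_{2}=z_{1}\bar z_{1}+\gamma(z_{1}^{2}+\bar z_{1}^{2})$ by the substitution $\bar z_{1}\mapsto w_{1}$ gives the complex surface used by Moser and Webster [MW83]:

```latex
\[
{\cal M}_\gamma:\quad
z_2=z_1 w_1+\gamma\,(z_1^{2}+w_1^{2}),\qquad
w_2=w_1 z_1+\gamma\,(w_1^{2}+z_1^{2}).
\]
% Since G here has real coefficients, \bar G = G, and the two equations agree
% under exchanging (z_1, w_1). In particular \rho(z,w) = (\bar w, \bar z)
% preserves {\cal M}_\gamma, and M_\gamma = {\cal M}_\gamma \cap \{w=\bar z\}.
```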
The two projections $\pi_{1}(z,w)=z$ and $\pi_{2}(z,w)=w$, when restricted to ${\cal M}$, have the form $$\pi_{1}(z,w)=(z^{\prime},x_{\alpha}+iF_{\alpha}(z^{\prime},w^{\prime},x^{\prime\prime}),G(z^{\prime},w^{\prime},x^{\prime\prime})),$$ $$\pi_{2}(z,w)=(w^{\prime},x_{\alpha}-iF_{\alpha}(z^{\prime},w^{\prime},x^{\prime\prime}),\bar{G}(w^{\prime},z^{\prime},x^{\prime\prime})).$$ Let us define the holomorphic involution $\tau_{1}(z,w)=(\tilde{z},\tilde{w})$ (resp. $\tau_{2}(z,w)=(\tilde{z},\tilde{w})$) on ${\cal M}$ by $w=\tilde{w}$ (resp. $z=\tilde{z}$). This leads to the following equations: $$\tilde{w}^{\prime}=w^{\prime},$$ $$\tilde{x}_{\alpha}+iF_{\alpha}(\tilde{z}^{\prime},w^{\prime},\tilde{x}^{\prime\prime})=x_{\alpha}+iF_{\alpha}(z^{\prime},w^{\prime},x^{\prime\prime}),\quad\alpha=p+1,\ldots,n-1,$$ $$\bar{G}(w^{\prime},\tilde{z}^{\prime},\tilde{x}^{\prime\prime})=\bar{G}(w^{\prime},z^{\prime},x^{\prime\prime}).$$ By the implicit function theorem, there exists a holomorphic function $\Gamma_{\alpha}$ such that $$\tilde{x}_{\alpha}=\Gamma_{\alpha}(z^{\prime},\tilde{z}^{\prime},w^{\prime},\tilde{x}^{\prime\prime})=x_{\alpha}\;\text{mod}\;{\cal M}^{2},\quad\alpha=p+1,\ldots,n-1.$$ Here, ${\cal M}$ denotes the maximal ideal of germs of holomorphic functions in ${C}^{n+2p-1}$ at $0$. Hence, we have $$\bar{G}(w^{\prime},\tilde{z}^{\prime},\Gamma^{\prime\prime}(z^{\prime},\tilde{z}^{\prime},w^{\prime},\tilde{x}^{\prime\prime}))=\bar{G}(w^{\prime},z^{\prime},x^{\prime\prime}).$$ (12) Assume that the Bishop matrix is invertible. Let us solve, in $\tilde{z}^{\prime}$, the following equation: $$Q(z^{\prime},w^{\prime})=Q(\tilde{z}^{\prime},w^{\prime}),$$ where $Q$ is the quadratic part of $G$.
By Taylor expansion, we have $$Q(\tilde{z}^{\prime},w^{\prime})-Q(z^{\prime},w^{\prime})=D_{z}Q(z^{\prime},w^{\prime})(\tilde{z}^{\prime}-z^{\prime})+\frac{1}{2}D^{2}_{z}Q(z^{\prime},w^{\prime})(\tilde{z}^{\prime}-z^{\prime})^{2}.$$ Since $\frac{\partial^{2}Q}{\partial z_{j}\partial z_{k}}=0$ if $j\neq k$, we obtain that, for any $i_{0}$, $$\frac{\partial Q}{\partial z_{i_{0}}}(z^{\prime},w^{\prime})+\frac{1}{2}\frac{\partial^{2}Q}{\partial z_{i_{0}}^{2}}(z^{\prime},w^{\prime})(\tilde{z}_{i_{0}}-z_{i_{0}})=0.$$ But, for any $1\leq i_{0}\leq p$, we have $\frac{1}{2}\frac{\partial^{2}Q}{\partial z_{i_{0}}^{2}}(z^{\prime},w^{\prime})=\gamma_{i_{0}}\neq 0$. Hence, we have $$\tilde{z}_{i_{0}}=-z_{i_{0}}-\frac{1}{\gamma_{i_{0}}}\sum_{l=1}^{p}d_{i_{0},l}w_{l}.$$ Hence, by the implicit function theorem, equation $(\ref{equation-involution})$ admits a holomorphic solution $\tau_{1}(z^{\prime},w^{\prime},x^{\prime\prime})$ whose linear part at the origin is precisely $T_{1}z^{\prime}$: $$\tilde{z}^{\prime}=H(z^{\prime},w^{\prime},x^{\prime\prime})=T_{1}z^{\prime}\;\text{mod}\;{\cal M}^{2},\quad\tilde{w}^{\prime}=w^{\prime},\quad\tilde{x}^{\prime\prime}=\Gamma(z^{\prime},w^{\prime},x^{\prime\prime}).$$ Here ${\cal M}$ denotes the maximal ideal of germs of holomorphic functions of $({C}^{n+p-1},0)$ and the linear part $T_{1}$ is $$T_{1}:=D\tau_{1}(0)=\left(\begin{matrix}-I_{p}&-S^{-1}D&0\\ 0&I_{p}&0\\ 0&0&I_{q}\end{matrix}\right),$$ where $D$ is the matrix $(d_{i,j})_{1\leq i,j\leq p}$ and $I_{q}$ stands for the identity matrix of dimension $q:=n-p-1$. The map $\pi_{2}$ is a two-fold covering with covering transformation $\tau_{1}$.
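As a sanity check (our block computation, not written out in the original), the matrix $T_{1}$ squares to the identity, as it must since it is the differential at $0$ of the involution $\tau_{1}$:

```latex
T_1^2=\begin{pmatrix}-I_p&-S^{-1}D&0\\0&I_p&0\\0&0&I_q\end{pmatrix}^2
     =\begin{pmatrix}I_p&S^{-1}D-S^{-1}D&0\\0&I_p&0\\0&0&I_q\end{pmatrix}
     =I_{n+p-1},
% the top-middle block being (-I_p)(-S^{-1}D)+(-S^{-1}D)I_p=0.
```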
Since $\tau_{2}=\rho\circ\tau_{1}\circ\rho$, $\tau_{2}$ restricted to ${\cal M}$ is defined by $$\tilde{z}^{\prime}=z^{\prime},\quad\tilde{w}^{\prime}=\bar{H}(w^{\prime},z^{\prime},x^{\prime\prime}),\quad\tilde{x}^{\prime\prime}=\bar{\Gamma}(w^{\prime},z^{\prime},x^{\prime\prime}).$$ The linear part at the origin of $\tau_{2}$ is $$T_{2}:=\left(\begin{matrix}I_{p}&0&0\\ -\bar{S}^{-1}\bar{D}&-I_{p}&0\\ 0&0&I_{q}\end{matrix}\right).$$ The map $\pi_{1}$ is a two-fold covering with covering transformation $\tau_{2}$. Let us consider the germ of holomorphic diffeomorphism $g$ of $({C}^{n+p-1},0)$ defined to be $g:=\tau_{1}\circ\tau_{2}$. It fixes the origin. The linear part of the diffeomorphism $g$ at the origin is $$\Phi:=Dg(0)=T_{1}T_{2}=\left(\begin{matrix}-I_{p}+S^{-1}D\overline{S^{-1}D}&S^{-1}D&0\\ -\overline{S^{-1}D}&-I_{p}&0\\ 0&0&I_{q}\end{matrix}\right).$$ 4.3 Quadrics and linear involutions In this section we shall study the relation between the linear parts of the involutions, the linear anti-holomorphic involution and the quadric. Under our assumptions, the set of fixed points of $T_{1}$ and $T_{2}$ is a $q$-dimensional vector space ($q=n-p-1$) and these are the only common eigenvectors. In fact, let us try to solve $T_{1}v=av$ and $T_{2}v=bv$. Let us write $v=(v_{1},v_{2},v_{3})$, where $v_{1},v_{2}$ belong to ${C}^{p}$ and $v_{3}$ belongs to ${C}^{q}$. This leads to $$(*)\left\{\begin{array}[]{l}-v_{1}-S^{-1}Dv_{2}=av_{1}\\ v_{2}=av_{2}\\ v_{3}=av_{3}\end{array}\right.\quad(**)\left\{\begin{array}[]{l}v_{1}=bv_{1}\\ -\bar{S}^{-1}\bar{D}v_{1}-v_{2}=bv_{2}\\ v_{3}=bv_{3}\end{array}\right.$$ Assume $v_{3}=0$. If $v_{2}=0$, then $a=-1$ (otherwise $v_{1}=0$ too). The second equation of $(**)$ gives $\bar{S}^{-1}\bar{D}v_{1}=0$; that is, $v_{1}=0$. This is not possible. Thus $v_{2}\neq 0$ and $a=1$. The first equation of $(*)$ gives $-S^{-1}Dv_{2}=2v_{1}$.
Moreover, we have $b=1$, since otherwise we would have $v_{1}=0$ and $v_{2}=0$ by the second equation of $(**)$. Therefore, using the same equation, we obtain $$\bar{S}^{-1}\bar{D}S^{-1}Dv_{2}=4v_{2}.$$ According to condition $(\ref{cond1})$, we have $v_{2}=0$ and then $v_{1}=0$. Now, if $v_{3}\neq 0$, then $a=b=1$, and we can apply the previous reasoning to obtain $v_{1}=v_{2}=0$. Let $V_{i}$ be the $(-1)$-eigenspace of $T_{i}$, $i=1,2$. Let $E$ be their common $(+1)$-eigenspace. We assume that $V_{1},V_{2}$ and $E$ span ${C}^{n+p-1}$. Let $F=V_{1}+V_{2}$. We have ${C}^{n+p-1}=F\oplus E$. Let us show that $F$ is invariant under both $T_{1}$ and $T_{2}$. In fact, let $v_{2}\in V_{2}$. A priori, we have $T_{1}v_{2}=\tilde{v}_{1}+\tilde{v}_{2}+e$, where $\tilde{v}_{1}$ (resp. $\tilde{v}_{2}$, $e$) belongs to $V_{1}$ (resp. $V_{2}$, $E$). Since $T_{1}^{2}=Id$ and $T_{1}\tilde{v}_{1}=-\tilde{v}_{1}$, we have $v_{2}=-\tilde{v}_{1}+T_{1}\tilde{v}_{2}+e$. But $T_{1}\tilde{v}_{2}=\tilde{v}^{\prime}_{1}+\tilde{v}^{\prime}_{2}+e^{\prime}$. Hence, we have $v_{2}=-\tilde{v}_{1}+\tilde{v}^{\prime}_{1}+\tilde{v}^{\prime}_{2}+e^{\prime}+e$. Therefore, $-\tilde{v}_{1}+\tilde{v}^{\prime}_{1}=0$, $\tilde{v}^{\prime}_{2}-v_{2}=0$ and $e^{\prime}+e=0$. It follows that $$T_{1}(v_{2}-\tilde{v}_{2}-e)=-(v_{2}-\tilde{v}_{2}-e).$$ This means that $v_{2}-\tilde{v}_{2}-e$ belongs to $V_{1}$, which leads to $e=0$; that is, $F$ is left invariant by $T_{1}$. The same argument applies to $T_{2}$. Let $T^{\prime}_{1}$ (resp. $T^{\prime}_{2}$, $\Phi^{\prime}$) be the restriction of $T_{1}$ (resp. $T_{2}$, $\Phi$) to $F$. Let $\mu$ be an eigenvalue of $\Phi^{\prime}$ corresponding to an eigenvector $f$. Then, $T^{\prime}_{2}f=\mu T^{\prime}_{1}f=\mu\Phi^{\prime}T^{\prime}_{2}f$. Hence, $\mu^{-1}$ is another eigenvalue of $\Phi^{\prime}$, associated to $T^{\prime}_{2}f$.
The vectors $f$ and $T^{\prime}_{2}f$ are independent, since otherwise we would have $cf=T^{\prime}_{2}f=\mu T^{\prime}_{1}f$, and $T^{\prime}_{1}$ and $T^{\prime}_{2}$ would have a common eigenvector. If $\mu=\mu^{-1}$, then the restriction of $\Phi^{\prime}$ to the span $V_{f}$ of $f$ and $T^{\prime}_{2}f$ is $\pm Id$. This would also imply a common eigenvector of $T^{\prime}_{1}$ and $T^{\prime}_{2}$. As in [MW83][p.269], we can choose $\{f,T^{\prime}_{2}f\}$ as a basis of $V_{f}$. In this basis, the matrices of $T^{\prime}_{1}$, $T^{\prime}_{2}$ and $\Phi^{\prime}$ are $$\Phi^{\prime}_{|V_{f}}=\left(\begin{matrix}\mu&0\\ 0&\mu^{-1}\end{matrix}\right)\quad T^{\prime}_{i|V_{f}}=\left(\begin{matrix}0&\lambda_{i}\\ \lambda_{i}^{-1}&0\end{matrix}\right),$$ where $\lambda_{1}=\mu$ and $\lambda_{2}=1$. Let us show that $E$ is invariant under the linear anti-holomorphic involution $\rho$. Let $e$ belong to $E$. By definition, it is left invariant by both $T_{1}$ and $T_{2}$. Since $T_{1}\rho=\rho T_{2}$, we obtain $T_{1}\rho(e)=\rho(T_{2}e)=\rho(e)$ and $T_{2}\rho(e)=\rho(e)$. Hence, $\rho(E)=E$. Let us show that $F$ is invariant under the linear anti-holomorphic involution $\rho$. Let $N$ be the totally real fixed point set of $\rho$ on $E$. This means that $E=N+iN$ and that we may choose coordinates $\zeta$ on $E$ so that $\rho:\zeta\mapsto\bar{\zeta}$. Let us show that $\rho(F)$ is also invariant by both $T_{1}$ and $T_{2}$. In fact, we have $T_{1}\rho(F)=\rho T_{2}(F)=\rho(F)$ and $T_{2}\rho(F)=\rho(F)$ as well. Hence, $E\oplus\rho(F)={C}^{n+p-1}$ is a decomposition preserved by both $T_{1}$ and $T_{2}$. The space $\rho(F)$ has to contain the $(-1)$-eigenspaces of both $T_{1}$ and $T_{2}$, hence $F$; that is, $\rho(F)=F$. Let $f$ be a $\mu$-eigenvector of $\Phi$. As above, $T_{2}f$ is a $\mu^{-1}$-eigenvector of $\Phi$. Since $\Phi\rho\Phi=T_{1}\rho T_{2}=\rho$, we have $$\rho(f)=\bar{\mu}\Phi(\rho(f)).$$ Hence, $\rho(f)$ is a ${\bar{\mu}}^{-1}$-eigenvector of $\Phi$.
Moreover, $T_{2}\rho(f)=\rho(T_{1}f)=\bar{\mu}^{-1}\rho(T_{2}f)$ is a ${\bar{\mu}}$-eigenvector of $\Phi$. This means that $\rho(V_{f})=V_{\rho(f)}$. Let us assume that ${\bar{\mu}}^{-1}$ is different from both ${\mu}^{-1}$ and $\mu$. Then, $\rho(T_{2}\rho(f))=T_{1}f$ is a ${\mu}^{-1}$-eigenvector of $\Phi$. So, $T_{2}\rho(T_{2}\rho(f))=T_{2}T_{1}f=\Phi^{-1}(f)=\mu^{-1}f$ is a ${\mu}$-eigenvector of $\Phi$. In this case, the matrices of $\rho$ and $\Phi$ restricted to $V_{f}\oplus V_{\rho(f)}$ are $$\rho_{|V_{f}\oplus V_{\rho(f)}}=\left(\begin{matrix}0&0&1&0\\ 0&0&0&\mu^{-1}\\ 1&0&0&0\\ 0&\bar{\mu}&0&0\end{matrix}\right)\quad\Phi_{|V_{f}\oplus V_{\rho(f)}}=\left(\begin{matrix}\mu&0&0&0\\ 0&\mu^{-1}&0&0\\ 0&0&\bar{\mu}^{-1}&0\\ 0&0&0&\bar{\mu}\end{matrix}\right).$$ We recall that $V_{f}$ denotes the span of $f$ and $T_{2}f$. We can carry out an analysis similar to that of the Moser-Webster article [MW83][p.269]: let $\mu_{i}$ be an eigenvalue of $\Phi$ of multiplicity $n(i)$ such that $\mu_{i}=\bar{\mu}_{i}^{-1}$ or $\mu_{i}=\bar{\mu}_{i}$. Let $\{f_{1},\ldots,f_{n(i)}\}$ be an associated basis. Let $E_{i}$ be the span of the basis $\{f_{1},\ldots,f_{n(i)},T_{2}f_{1},\ldots,T_{2}f_{n(i)}\}$. We have • if $\mu_{i}=\bar{\mu}_{i}^{-1}$, then $\rho(f_{k})=\sum_{j}a_{j,k}f_{j}$, so that $$\rho(T_{2}f_{k})=T_{1}\rho(f_{k})=T_{1}\left(\sum_{j}a_{j,k}f_{j}\right)=\sum_{j}a_{j,k}T_{1}f_{j}=\bar{\mu}_{i}\sum_{j}a_{j,k}T_{2}(f_{j}).$$ • if $\mu_{i}=\bar{\mu}_{i}$, then $\rho(f_{k})=\sum_{j}a_{j,k}T_{2}f_{j}$, so that $$\rho(T_{2}f_{k})=T_{1}\rho(f_{k})=T_{1}\left(\sum_{j}a_{j,k}T_{2}f_{j}\right)=\mu_{i}\sum_{j}a_{j,k}f_{j}.$$ We have used the property that, if $\Phi(f)=\mu f$, then $T_{2}f=\mu T_{1}f$. Hence we have proved the following Lemma 4.2. Let $\Phi$, $T_{1}$, $T_{2}$ and $\rho$ be as above.
Then there exists a decomposition $${C}^{n+p-1}=E_{1}\oplus\cdots\oplus E_{r}\oplus E_{r+1}\oplus\cdots\oplus E_{s}\oplus G$$ left invariant by $\Phi$, $T_{1}$, $T_{2}$ and $\rho$, such that • The $E_{i}$’s are complex vector spaces of dimension $2n(i)$ if $1\leq i\leq r$ and $4n(i)$ otherwise. $G$ is an $(n-p-1)$-dimensional vector space. • The restrictions of $\Phi$, $T_{1}$, $T_{2}$ to $G$ are the identity. • If $1\leq i\leq r$, then $\mu_{i}\in\{\bar{\mu}_{i},\bar{\mu}_{i}^{-1}\}$ and there exist coordinates $(\zeta_{i},\eta_{i})$ of $E_{i}$ ($\zeta_{i}=(\zeta_{i,1},\ldots,\zeta_{i,n(i)})$) such that $$T_{k|E_{i}}=\left(\begin{matrix}0&\lambda_{k,i}I_{n(i)}\\ \lambda_{k,i}^{-1}I_{n(i)}&0\end{matrix}\right),\;k=1,2\quad\Phi_{|E_{i}}=\left(\begin{matrix}\mu_{i}I_{n(i)}&0\\ 0&\mu_{i}^{-1}I_{n(i)}\end{matrix}\right),$$ where $\lambda_{1,i}=\mu_{i}$ and $\lambda_{2,i}=1$. • If $0<j\leq s-r$, then $\mu_{r+2j-1}\not\in\{\bar{\mu}_{r+2j-1},\bar{\mu}_{r+2j-1}^{-1}\}$ and there exist coordinates $\theta_{j}:=(\zeta_{r+2j-1},\eta_{r+2j-1},\zeta_{r+2j},\eta_{r+2j})$ of $E_{r+j}$ such that $$\Phi_{|E_{r+j}}=\left(\begin{matrix}\mu_{r+2j-1}I_{n(r+j)}&0&0&0\\ 0&\mu_{r+2j-1}^{-1}I_{n(r+j)}&0&0\\ 0&0&\bar{\mu}_{r+2j-1}^{-1}I_{n(r+j)}&0\\ 0&0&0&\bar{\mu}_{r+2j-1}I_{n(r+j)}\end{matrix}\right)$$ and $$T_{k|E_{r+j}}=\left(\begin{matrix}0&\lambda_{k,r+2j-1}I_{n(r+j)}&0&0\\ \lambda_{k,r+2j-1}^{-1}I_{n(r+j)}&0&0&0\\ 0&0&0&\bar{\lambda}_{k,r+2j-1}^{-1}I_{n(r+j)}\\ 0&0&\bar{\lambda}_{k,r+2j-1}I_{n(r+j)}&0\end{matrix}\right),$$ where $k=1,2$, $\lambda_{1,r+2j-1}=\mu_{r+2j-1}$, $\mu_{r+2j}=\bar{\mu}_{r+2j-1}^{-1}$ and $\lambda_{2,r+2j-1}=1$. • If ${\bar{\mu}_{i}}={\mu_{i}}^{-1}$ (we say hyperbolic), then $\rho_{|E_{i}}(\zeta_{i},\eta_{i})=\left(\begin{matrix}A_{i}&0\\ 0&\bar{\mu}_{i}A_{i}\end{matrix}\right)\left(\begin{matrix}\bar{\zeta}_{i}\\ \bar{\eta}_{i}\end{matrix}\right)$ for some $n(i)$-square matrix $A_{i}$ such that $A_{i}\bar{A}_{i}=I_{n(i)}$.
• If ${\bar{\mu}_{i}}={\mu_{i}}$ (we say elliptic), then $\rho_{|E_{i}}(\zeta_{i},\eta_{i})=\left(\begin{matrix}0&\mu_{i}A_{i}\\ A_{i}&0\end{matrix}\right)\left(\begin{matrix}\bar{\zeta}_{i}\\ \bar{\eta}_{i}\end{matrix}\right)$ for some $n(i)$-square matrix $A_{i}$ such that $\mu_{i}A_{i}\bar{A}_{i}=I_{n(i)}$. • If $\mu_{r+2j-1}\not\in\{\bar{\mu}_{r+2j-1},\bar{\mu}_{r+2j-1}^{-1}\}$, that is $0<j\leq s-r$ (we say complex), then $$\rho_{|E_{r+2j-1}}(\theta_{j})=\left(\begin{matrix}0&0&I_{n(j+r)}&0\\ 0&0&0&\mu_{r+2j-1}^{-1}I_{n(j+r)}\\ I_{n(j+r)}&0&0&0\\ 0&\bar{\mu}_{r+2j-1}I_{n(j+r)}&0&0\end{matrix}\right)\left(\begin{matrix}\bar{\zeta}_{r+2j-1}\\ \bar{\eta}_{r+2j-1}\\ \bar{\zeta}_{r+2j}\\ \bar{\eta}_{r+2j}\end{matrix}\right)$$ • $\rho_{|G}(\upsilon)=\bar{\upsilon}$. Definition 4.3. A coordinate or its associated eigenvalue will be called hyperbolic (resp. elliptic, complex) if $\mu_{i}=\bar{\mu}_{i}^{-1}$ (resp. $\mu_{i}=\bar{\mu}_{i}$, $\mu_{i}\not\in\{\bar{\mu}_{i},\bar{\mu}_{i}^{-1}\}$). Remark 4.2. When $p=1$, the complex case does not appear. In fact, in this case, the eigenspace associated to an eigenvalue $\mu\neq 1$ is a $2$-dimensional vector space. Hence, we must have $\mu\in\{\bar{\mu},\bar{\mu}^{-1}\}$. 4.4 Submanifolds with CR-singularities Let us consider a real analytic $(n+p-1)$-submanifold $M$ of ${C}^{n}$ passing through the origin of the form $$\begin{cases}z_{1}=x_{1}+iy_{1}\\ \vdots\\ z_{p}=x_{p}+iy_{p}\\ y_{p+1}=F_{p+1}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ \vdots\\ y_{n-1}=F_{n-1}(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\\ z_{n}=G(z^{\prime},\bar{z}^{\prime},x^{\prime\prime}):=Q(z^{\prime},\bar{z}^{\prime})+g(z^{\prime},\bar{z}^{\prime},x^{\prime\prime})\end{cases}$$ (13) where the $F_{i}$’s and $g$ are real analytic in a neighbourhood of the origin and of order greater than or equal to $3$ at $0$. We assume that the quadratic polynomial $Q$ is normalized in the sense developed in the previous section.
Hence, we have $$Q=\sum_{i,j}g_{i,j}z^{\prime}_{i}\bar{z}^{\prime}_{j}+\sum_{i=1}^{p}\gamma_{i}((z^{\prime}_{i})^{2}+(\bar{z}^{\prime}_{i})^{2}),$$ where the sesquilinear part is of norm $1$ and the $\gamma_{i}$’s are the generalized Bishop invariants. The CR-structure is singular at the origin. When $p=1$, so that $M$ is an $n$-dimensional submanifold, we have $Q=z_{1}\bar{z}_{1}+\gamma(z_{1}^{2}+\bar{z}_{1}^{2})$. Let us complexify the equations as in $(\ref{variete-complex})$: $${\cal M}:\begin{cases}2x_{\alpha}=z_{\alpha}+w_{\alpha},\\ z_{\alpha}-w_{\alpha}=2iF_{\alpha}(z^{\prime},w^{\prime},x^{\prime\prime})=2i\bar{F}_{\alpha}(w^{\prime},z^{\prime},x^{\prime\prime}),\\ z_{n}=G(z^{\prime},w^{\prime},x^{\prime\prime}),\\ w_{n}=\bar{G}(w^{\prime},z^{\prime},x^{\prime\prime}),\\ \end{cases}$$ where $\alpha$ ranges from $p+1$ to $n-1$. Then there exists a pair $(\tau_{1},\tau_{2})$ of holomorphic involutions as defined above. It defines a germ of holomorphic diffeomorphism $\Phi:=\tau_{1}\circ\tau_{2}$ of $({C}^{n+p-1},0)$. We assume that it is not tangent to the identity at the origin. Definition 4.4. Let ${\cal I}$ be a monomial ideal of ${\cal O}_{n+p-1}$. We shall say that ${\cal I}$ is compatible with $T_{1}$ and $T_{2}$ (resp. with the anti-linear involution $\rho$) if the maps $T_{j}^{*}:\hat{\cal O}_{n+p-1}\rightarrow\hat{\cal O}_{n+p-1}$ (resp. $\rho^{*}:\hat{\cal O}_{n+p-1}\rightarrow\overline{\hat{\cal O}_{n+p-1}}$) defined by $T_{j}^{*}(f)=f\circ T_{j}$ (resp. $\rho^{*}(f)=f\circ\rho$) preserve the splitting $\hat{\cal O}_{n+p-1}=\hat{\cal I}\oplus\hat{CI}$ (resp. map $\hat{\cal I}$ to $\overline{\hat{\cal I}}$ and $\hat{CI}$ to $\overline{\hat{CI}}$). The main result of this section is the following. Theorem 4.1.
Let $M$ be a third order analytic perturbation of the quadric $\{y_{p+1}=\ldots=y_{n-1}=0,z_{n}=Q(z^{\prime},\bar{z}^{\prime})\}$, $z^{\prime}=(z_{1},\ldots,z_{p})$, as above. Let $\tau_{1}$, $\tau_{2}$ be the holomorphic involutions of $({C}^{n+p-1},0)$ associated to the complexified submanifold ${\cal M}$ of ${C}^{2n}$. Let ${\cal I}$ be a monomial ideal of ${\cal O}_{n+p-1}$ compatible with $D\tau_{1}(0)$, $D\tau_{2}(0)$ and $\rho$. Assume that the involutions $\tau_{1}$ and $\tau_{2}$ are formally linearizable on the ideal ${\cal I}$. Assume furthermore that $D\Phi(0)$ is a diagonal matrix and is diophantine (resp. on ${\cal I}$ whenever ${\cal I}$ is properly embedded). Then the $\tau_{j}$’s are simultaneously and holomorphically linearizable on ${\cal I}$. Moreover, the linearizing diffeomorphism can be chosen so that it commutes with the linear anti-holomorphic involution $\rho$. Proof. We can apply lemma 4.2 to the triple $(D\Phi(0),D\tau_{1}(0),D\tau_{2}(0))$. We reorder the vector spaces so that $${C}^{n+p-1}=E_{1}\oplus\cdots\oplus E_{r}\oplus E_{r+1}\oplus\cdots\oplus E_{s}\oplus G,$$ where $E_{1},\ldots,E_{h}$ are hyperbolic, $E_{h+1},\ldots,E_{r}$ are elliptic and $E_{r+1},\ldots,E_{s}$ are complex. This means that $\mu_{i}=\bar{\mu}_{i}^{-1}$ for $1\leq i\leq h$, $\mu_{i}=\bar{\mu}_{i}$ for $h+1\leq i\leq r$ and $\mu_{i}\not\in\{\bar{\mu}_{i},\bar{\mu}_{i}^{-1}\}$ for $r+1\leq i\leq s$. Let us consider the ring $\hat{\cal O}_{n+p-1}^{D\Phi(0)}$ of formal invariants of $D\Phi(0)$. Since $\mu_{i}\mu_{i}^{-1}=1$, the monomials $\zeta_{i,u}\eta_{i,v}$, $1\leq u,v\leq n(i)$, are invariants for $1\leq i\leq r$, as are the $\zeta_{i,k,u}\eta_{i,k,v}$, $k=1,2$, $1\leq u,v\leq n(i)$, for $r+1\leq i\leq s$, and the $\upsilon_{j}$’s with $j=1,\ldots,q$ (their associated eigenvalues are $1$).
More generally, let $(p,q)\in{N}^{r+2s}\times{N}^{r+2s}$ be such that $$\Pi_{i=1}^{r}\mu_{i}^{p_{i}-q_{i}}\Pi_{j=r+1}^{s}\mu_{j}^{p_{j}-q_{j}}\bar{\mu}_{j}^{-p_{j+1}+q_{j+1}}=1.$$ Then all the monomials $$\Pi_{i=1}^{r}\left(\Pi_{k=1}^{n(i)}\zeta_{i,k}^{p_{i,k}}\eta_{i,k}^{q_{i,k}}\right)\Pi_{j=r+1}^{s}\left(\Pi_{k=1}^{n(j)}\zeta_{j,k}^{p_{j,k}}\eta_{j,k}^{q_{j,k}}\zeta_{j+1,k}^{p_{j+1,k}}\eta_{j+1,k}^{q_{j+1,k}}\right)$$ (14) for which $\sum_{k=1}^{n(i)}p_{i,k}=p_{i}$ and $\sum_{k=1}^{n(i)}q_{i,k}=q_{i}$ ($1\leq i\leq r+2s$) are invariants (the $p_{i,k}$’s and $q_{i,k}$’s are non-negative integers). Such a monomial will be written as $\zeta^{P}\eta^{Q}$, where $P=(\{p_{i,k}\}_{1\leq k\leq n(i)})_{1\leq i\leq r+2s}$ and $Q=(\{q_{i,k}\}_{1\leq k\leq n(i)})_{1\leq i\leq r+2s}$. Let ${\cal R}$ denote the set of such $(P,Q)$ which generate, together with the $\upsilon_{j}$’s, the ring of invariants (in fact, it is a module of finite type, so just the generators can be selected). Let $ResIdeal$ denote the ideal of ${\cal O}_{n+p-1}$ generated by the monomials of the set ${\cal R}$. Let us write that the $\tau_{i}$’s are simultaneously formally linearizable on ${\cal I}$.
To be more specific, let us write, for $j=1,2$, $$\tau_{j}(\zeta^{\prime},\eta^{\prime},\upsilon^{\prime})=\begin{cases}\zeta^{\prime\prime}_{i}=\lambda_{j,i}\eta^{\prime}_{i}+f_{j,i}(\zeta^{\prime},\eta^{\prime},\upsilon^{\prime})\quad i=1,\ldots,r+2s\\ \eta^{\prime\prime}_{i}=\lambda_{j,i}^{-1}\zeta^{\prime}_{i}+g_{j,i}(\zeta^{\prime},\eta^{\prime},\upsilon^{\prime})\quad i=1,\ldots,r+2s\\ \upsilon^{\prime\prime}=\upsilon^{\prime}+h_{j}(\zeta^{\prime},\eta^{\prime},\upsilon^{\prime})\end{cases}$$ and $$\hat{\Psi}(\zeta,\eta,\upsilon)=\begin{cases}\zeta^{\prime}_{i}=\zeta_{i}+U_{i}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ \eta^{\prime}_{i}=\eta_{i}+V_{i}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ \upsilon^{\prime}=\upsilon+W(\zeta,\eta,\upsilon).\end{cases}$$ Here we have used the following convention: if $j\geq 1$, then $\mu_{r+2j}=\bar{\mu}_{r+2j-1}^{-1}$ and $\theta_{j}:=(\zeta_{r+2j-1},\eta_{r+2j-1},\zeta_{r+2j},\eta_{r+2j})$ are coordinates on $E_{r+j}$. Let us write that $\hat{\Psi}$ conjugates $\tau_{j}$ to $\tilde{\tau}_{j}$ with $$\tilde{\tau}_{j}(\zeta,\eta,\upsilon)=\begin{cases}\zeta^{\prime}_{i}=\lambda_{j,i}\eta_{i}+\tilde{f}_{j,i}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ \eta^{\prime}_{i}=\lambda_{j,i}^{-1}\zeta_{i}+\tilde{g}_{j,i}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ \upsilon^{\prime}=\upsilon+\tilde{h}_{j}(\zeta,\eta,\upsilon),\end{cases}$$ where the $\tilde{f}_{j,i}$’s, the $\tilde{g}_{j,i}$’s and the components of $\tilde{h}_{j}$ belong to the ideal ${\cal I}$.
We have $\hat{\Psi}\circ\tilde{\tau}_{j}=\tau_{j}\circ\hat{\Psi}$; that is, $$(*)\begin{cases}\lambda_{j,i}V_{i}-U_{i}\circ\tilde{\tau}_{j}=\tilde{f}_{j,i}-f_{j,i}\circ\hat{\Psi}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ \lambda_{j,i}^{-1}U_{i}-V_{i}\circ\tilde{\tau}_{j}=\tilde{g}_{j,i}-g_{j,i}\circ\hat{\Psi}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ W-W\circ\tilde{\tau}_{j}=\tilde{h}_{j}-h_{j}\circ\hat{\Psi}.\end{cases}$$ Let us find an equation involving only the unknown $U_{i}$ (resp. $V_{i}$, $W$). We recall that $\mu_{i}:=\lambda_{1,i}\lambda_{2,i}^{-1}$. Since we have $\lambda_{1,i}V_{i}-U_{i}\circ\tilde{\tau}_{1}=\tilde{f}_{1,i}-f_{1,i}\circ\hat{\Psi}$, we have $\lambda_{1,i}V_{i}\circ\tilde{\tau}_{2}-U_{i}\circ\tilde{\Phi}=\tilde{f}_{1,i}\circ\tilde{\tau}_{2}-f_{1,i}\circ\hat{\Psi}\circ\tilde{\tau}_{2}$. Here, $\tilde{\Phi}$ denotes $\tilde{\tau}_{1}\circ\tilde{\tau}_{2}$. According to the second equation of $(*)$ for $j=2$, we have $V_{i}\circ\tilde{\tau}_{2}=\lambda_{2,i}^{-1}U_{i}-\tilde{g}_{2,i}+g_{2,i}\circ\hat{\Psi}$.
Therefore, we obtain $$\mu_{i}U_{i}-U_{i}\circ\tilde{\Phi}=\left(\tilde{f}_{1,i}-f_{1,i}\circ\hat{\Psi}\right)\circ\tilde{\tau}_{2}+\lambda_{1,i}\left(\tilde{g}_{2,i}-g_{2,i}\circ\hat{\Psi}\right).$$ Since the $\tau_{j}$’s are involutions, we have the following relations: $$\lambda_{j,i}g_{j,i}+f_{j,i}\circ\tau_{j}=0,$$ (15) $$\lambda_{j,i}^{-1}f_{j,i}+g_{j,i}\circ\tau_{j}=0,$$ (16) $$h_{j}\circ\tau_{j}+h_{j}=0.$$ (17) We have the following relations among the $f$’s and the $g$’s: $$g_{1,i}\circ\tau_{2}-\mu_{i}^{-1}g_{2,i}\circ\tau_{2}=-\lambda_{1,i}^{-1}\left(f_{1,i}\circ\tau_{1}-f_{2,i}\circ\tau_{2}\right)\circ\tau_{2}.$$ (18) Using the fact that $f_{1,i}\circ\hat{\Psi}\circ\tilde{\tau}_{2}=f_{1,i}\circ\tau_{2}\circ\hat{\Psi}$, we find the following relations: $$\mu_{i}U_{i}-U_{i}\circ\tilde{\Phi}=\left(\tilde{f}_{1,i}-\mu_{i}\tilde{f}_{2,i}\right)\circ\tilde{\tau}_{2}-\left(f_{1,i}-\mu_{i}f_{2,i}\right)\circ\tau_{2}\circ\hat{\Psi}=:\alpha_{i}.$$ (19) Similarly, we obtain $$\mu_{i}^{-1}V_{i}-V_{i}\circ\tilde{\Phi}=\left(\tilde{g}_{1,i}-\mu_{i}^{-1}\tilde{g}_{2,i}\right)\circ\tilde{\tau}_{2}-\left(g_{1,i}-\mu_{i}^{-1}g_{2,i}\right)\circ\tau_{2}\circ\hat{\Psi}=:\beta_{i},$$ (20) as well as $$W-W\circ\tilde{\Phi}=\left(\tilde{h}_{1}\circ\tilde{\tau}_{2}+\tilde{h}_{2}\right)-\left(h_{1}\circ\tau_{2}+h_{2}\right)\circ\hat{\Psi}=:\gamma.$$ (21) The germ of diffeomorphism $\Phi$ is formally linearizable on ${\cal I}$. If its linear part $D\Phi(0)$ is diophantine (resp. on ${\cal I}$ if ${\cal I}$ is properly embedded), then, according to theorem 2.1, $\Phi$ is actually holomorphically linearizable on ${\cal I}$, by a unique normalizing transformation whose projection on the vector space spanned by $\hat{\cal I}$ and the formal centralizer of $D\Phi(0)x$ is zero.
Let $\Psi^{\prime}$ be precisely this germ of holomorphic diffeomorphism of $({C}^{n+p-1},0)$: $$\Psi^{\prime}(\zeta,\eta,\upsilon)=\begin{cases}\zeta^{\prime}_{i}=\zeta_{i}+U^{\prime}_{i}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ \eta^{\prime}_{i}=\eta_{i}+V^{\prime}_{i}(\zeta,\eta,\upsilon)\quad i=1,\ldots,r+2s\\ \upsilon^{\prime}=\upsilon+W^{\prime}(\zeta,\eta,\upsilon).\end{cases}$$ Let us show that it commutes with $\rho$. First of all, we have $\rho\Phi\rho=\Phi^{-1}$. Then, according to the second part of theorem 2.1, it is sufficient to show that $\rho{\cal C}_{D\Phi(0)x}\rho={\cal C}_{D\Phi(0)x}$. Let $R\in{\cal C}_{D\Phi(0)x}$. We have $D\Phi(0)R(x)=RD\Phi(0)(x)$. We recall that $D\Phi(0)x=T_{1}T_{2}x$. Therefore, we have $$T_{1}T_{2}\rho R\rho=T_{1}\rho T_{1}R\rho=\rho T_{2}T_{1}R\rho=\rho RT_{2}T_{1}\rho=\rho R\rho T_{1}T_{2}.$$ Let us modify $\Psi^{\prime}$ in such a way that it linearizes simultaneously $\tau_{1},\tau_{2}$ on ${\cal I}$. Let $pr(f)$ (resp. $pr_{\cal I}(f)$) denote the projection of the formal power series $f$ on the vector space $\hat{CI}$ spanned by the $x^{Q}$’s which do not belong to $\hat{\cal I}$ (resp. on $\hat{\cal I}$). Let us project the equations $(*)$ onto this space.
We find $$(**)\begin{cases}\lambda_{j,i}pr(V_{i})-pr(U_{i}\circ T_{j})=-pr(f_{j,i}\circ\hat{\Psi})\quad i=1,\ldots,r+2s\\ \lambda_{j,i}^{-1}pr(U_{i})-pr(V_{i}\circ T_{j})=-pr(g_{j,i}\circ\hat{\Psi})\quad i=1,\ldots,r+2s\\ pr(W)-pr(W\circ T_{j})=-pr(h_{j}\circ\hat{\Psi}).\end{cases}$$ On the other hand, projecting the equations $(\ref{u})$, $(\ref{v})$ and $(\ref{w})$ on $CI$ we obtain $$\mu_{i}pr(U_{i})-pr(U_{i})\circ D=-pr\left(\left(f_{1,i}-\mu_{i}f_{2,i}\right)\circ\tau_{2}\circ\hat{\Psi}\right),$$ $$\mu_{i}^{-1}pr(V_{i})-pr(V_{i})\circ D=-pr\left(\left(g_{1,i}-\mu_{i}^{-1}g_{2,i}\right)\circ\tau_{2}\circ\hat{\Psi}\right),$$ $$pr(W)-pr(W\circ D)=-pr\left(\left(h_{1}\circ\tau_{2}+h_{2}\right)\circ\hat{\Psi}\right).$$ Let ${\cal V}_{\pm i}$ (resp. ${\cal V}_{0}$), $1\leq i\leq r$, be the closed subspace of ${C}[[\zeta,\eta,\upsilon]]$ generated by monomials $\zeta^{Q_{1}}\eta^{Q_{2}}\upsilon^{Q_{3}}$ for which $\mu^{Q_{1}-Q_{2}}=\mu_{i}^{\pm 1}$ (resp. $\mu^{Q_{1}-Q_{2}}=1$). Let $P_{i}$ be the associated projection.
Applying these various projections to the previous equations leads to the following compatibility equations: $$P_{i}\left(pr\left(\left(f_{1,i}-\mu_{i}f_{2,i}\right)\circ\tau_{2}\circ\hat{\Psi}\right)\right)=0,$$ $$P_{-i}\left(pr\left(\left(g_{1,i}-\mu_{i}^{-1}g_{2,i}\right)\circ\tau_{2}\circ\hat{\Psi}\right)\right)=0,$$ (22) $$P_{0}\left(pr\left(\left(h_{1}\circ\tau_{2}+h_{2}\right)\circ\hat{\Psi}\right)\right)=0.$$ According to the properties of the eigenvalues of $D\Phi(0)$ and its invariants, we have the following: • If $Q:=(Q_{1},Q_{2},Q_{3})\in{N}^{p+p+q}$ is such that $\mu^{Q_{1}-Q_{2}}=1$, then $\zeta^{Q_{1}}\eta^{Q_{2}}\upsilon^{Q_{3}}\circ T_{j}$ is a monomial $\zeta^{Q^{\prime}_{1}}\eta^{Q^{\prime}_{2}}\upsilon^{Q^{\prime}_{3}}$ such that $\mu^{Q^{\prime}_{1}-Q^{\prime}_{2}}=1$. • If $Q:=(Q_{1},Q_{2},Q_{3})\in{N}^{2p+q}$ and $1\leq i\leq p$ is such that $\mu^{Q_{1}-Q_{2}}=\mu_{i}^{\pm 1}$, then $\zeta^{Q_{1}}\eta^{Q_{2}}\upsilon^{Q_{3}}\circ T_{j}$ is a monomial $\zeta^{Q^{\prime}_{1}}\eta^{Q^{\prime}_{2}}\upsilon^{Q^{\prime}_{3}}$ such that $\mu^{Q^{\prime}_{1}-Q^{\prime}_{2}}=\mu_{i}^{\mp 1}$. Since ${\cal I}$ is compatible with the $T_{j}$’s, we have $pr(f\circ T_{j})=pr(f)\circ T_{j}$ for any formal power series $f$. Moreover, we have $$P_{\pm i}(f\circ T_{j})=P_{\mp i}(f)\circ T_{j},\quad P_{0}(f\circ T_{j})=P_{0}(f).$$ We have some freedom in the choice of the normalizing transformation of the $\tau_{j}$’s. Let us show that there is a unique solution $U_{i}$, $V_{i}$ and $W$ such that $$pr_{\cal I}U_{i}=pr_{\cal I}V_{i}=pr_{\cal I}W=0,$$ and $$P_{-i}(pr(V_{i}))=0;\quad P_{0}(pr(W))=0.$$ (23) Remark 4.3. If the formal centralizer $\hat{\cal C}_{D\Phi(0)x}$ is contained in $\hat{\cal I}$, then the last condition is always satisfied. In fact, let us apply $P_{-i}$ (resp.
$P_{i}$, $P_{0}$) to the first (resp. second, last) equation of $(**)$ and let us show that the system obtained is solvable. We have $$P_{i}(pr(U_{i}))\circ T_{j}=P_{-i}(pr(f_{j,i}\circ\hat{\Psi})),$$ $$\lambda_{j,i}^{-1}P_{i}(pr(U_{i}))=-P_{i}(pr(g_{j,i}\circ\hat{\Psi})),$$ $$P_{0}(pr(h_{j}\circ\hat{\Psi}))=0.$$ The last equation is obtained from equation $(\ref{rel2})$. In fact, we have $$0=h_{j}\circ\tau_{j}\circ\hat{\Psi}+h_{j}\circ\hat{\Psi}=(h_{j}\circ\hat{\Psi})\circ\tilde{\tau}_{j}+(h_{j}\circ\hat{\Psi}).$$ Since $\tilde{\tau}_{j}$ is linear on ${\cal I}$, we have $$pr((h_{j}\circ\hat{\Psi})\circ\tilde{\tau}_{j})=pr((h_{j}\circ\hat{\Psi})\circ T_{j})=pr(h_{j}\circ\hat{\Psi})\circ T_{j}.$$ Therefore, we have $P_{0}(pr(h_{j}\circ\hat{\Psi}))=0$. The compatibility condition of the first equation is $$P_{-i}(pr(f_{j,i}\circ\hat{\Psi}))\circ T_{j}+\lambda_{j,i}P_{i}(pr(g_{j,i}\circ\hat{\Psi}))=0.$$ Let us show that it can be obtained from equation $(\ref{rel1})$. In fact, let us first compose on the right by $\hat{\Psi}$ and then project onto $CI$. We have $$\lambda_{j,i}pr(g_{j,i}\circ\hat{\Psi})+pr(f_{j,i}\circ\hat{\Psi}\circ\tilde{\tau}_{j})=0.$$ As above, this leads to $$\lambda_{j,i}pr(g_{j,i}\circ\hat{\Psi})+pr(f_{j,i}\circ\hat{\Psi}\circ T_{j})=0,$$ and then to $$\lambda_{j,i}P_{i}\left(pr(g_{j,i}\circ\hat{\Psi})\right)+P_{-i}\left(pr(f_{j,i}\circ\hat{\Psi})\right)\circ T_{j}=0.$$ Let us consider the following equations $$u_{i}=-\lambda_{j,i}P_{i}(pr(g_{j,i}(\zeta+U^{\prime}+u,\eta+V^{\prime},\upsilon+W^{\prime}))),\quad i=1,\ldots,r+2s,$$ (24) where the $u_{i}$’s are the unknowns and whose components belong to the range of $P_{i}\circ pr$.
Here, the maps $U^{\prime}$, $V^{\prime}$ and $W^{\prime}$ are the holomorphic maps defined by $\Psi^{\prime}$, the normalized transformation which linearizes $\Phi$ on ${\cal I}$. Since the $g_{j,i}$’s are non-linear holomorphic functions, we can apply the implicit function theorem to obtain holomorphic $u_{i}$’s in a neighbourhood of the origin satisfying equation $(\ref{equ-u})$. The map $u$ is also a solution of the equations $u_{i}\circ T_{1}=P_{-i}(pr(f_{1,i}\circ(\Psi^{\prime}+u)))$. Then, the germ of holomorphic diffeomorphism at the origin of ${C}^{n+p-1}$ defined to be $$\Psi(\zeta,\eta,\upsilon)=\begin{cases}\zeta^{\prime}_{i}=\zeta_{i}+U^{\prime}_{i}(\zeta,\eta,\upsilon)+u_{i}(\zeta,\eta,\upsilon),\quad i=1,\ldots,r+2s\\ \eta^{\prime}_{i}=\eta_{i}+V^{\prime}_{i}(\zeta,\eta,\upsilon),\quad i=1,\ldots,r+2s\\ \upsilon^{\prime}=\upsilon+W^{\prime}(\zeta,\eta,\upsilon)\end{cases}$$ linearizes simultaneously and holomorphically the $\tau_{j}$’s on ${\cal I}$. It is the unique germ of diffeomorphism which linearizes the $\tau_{j}$’s and satisfies $(\ref{pnormal})$ and $(\ref{equ-u})$. It remains to show that $\Psi$ commutes with the anti-holomorphic involution $\rho$. In fact, we have $\hat{\Psi}\circ\tilde{\tau}_{2}=\tau_{2}\circ\hat{\Psi}$. By composing on the right and on the left by $\rho$, we obtain $$\left(\Psi^{\prime}+(\rho\circ u\circ\rho)\right)\circ(\rho\circ\tilde{\tau}_{2}\circ\rho)=\tau_{1}\circ\left(\Psi^{\prime}+(\rho\circ u\circ\rho)\right).$$ This is due to the fact that $\rho$ is an involution which commutes with $\Psi^{\prime}$. It is to be noticed that the map $\rho\circ\tilde{\tau}_{2}\circ\rho$ is equal to $T_{1}$ modulo the ideal ${\cal I}$.
Therefore, if we project outside ${\cal I}$, we obtain $$pr(\Psi^{\prime})\circ T_{1}+pr(\rho\circ u\circ\rho)\circ T_{1}=pr\left(\tau_{1}\circ\left(\Psi^{\prime}+(\rho\circ u\circ\rho)\right)\right).$$ According to the properties of $\Psi^{\prime}$, we obtain $$P_{i}(pr(\rho\circ u\circ\rho)_{i})\circ T_{1}=P_{-i}(pr\left(f_{1,i}\circ\left(\Psi^{\prime}+(\rho\circ u\circ\rho)\right)\right)).$$ Therefore, $\rho\circ u\circ\rho$ is a solution of the same equation as $u$. By uniqueness, we obtain $\rho\circ u\circ\rho=u$. Hence, $\Psi$ commutes with $\rho$. This ends the proof of the theorem. ∎ Let us express a direct geometric implication. Corollary 4.1. Under the assumptions of the theorem and in the new holomorphic coordinate system, the complexified submanifold ${\cal M}$ intersects each irreducible component of the zero locus $V(\cal I)$ which is invariant under $\rho$, $T_{1}$ and $T_{2}$ along a complex submanifold which is a complexified quadric of ${C}^{k}$, $k\leq n$ (if the $T_{j}$'s are still involutions). Proof. Under the assumptions of the theorem, the $\tau_{j}$'s are holomorphically linearizable on ${\cal I}$. Hence, in the new coordinates, their restriction to any irreducible component of $V(\cal I)$ is equal to the restriction of their linear part at the origin. Assume that $V_{i}$ is an irreducible component of $V({\cal I})$ which is invariant under $\rho$, $T_{1}$ and $T_{2}$. Since ${\cal I}$ is a monomial ideal, its zero locus is an intersection of unions of complex hyperplanes $\{\zeta_{i_{j}}=0\}$, $\{\eta_{i_{j}}=0\}$ and $\{\upsilon_{k_{l}}=0\}$. Hence $V_{i}$ is a linear complex manifold. Then, the submanifold $V_{i}\cap Fix(\rho)$ of $M$ is a quadric associated to the linear involutions $T_{j|V_{i}}$. ∎ 4.5 Holomorphic equivalence to quadrics As a direct consequence, we have the following result: Theorem 4.2. Assume that $M$ is formally equivalent to the quadric defined by its quadratic part.
Let $\Phi$ be the associated germ of holomorphic diffeomorphism. Assume that its linear part $D\Phi(0)$ is semi-simple and diophantine; then $M$ is biholomorphic to the quadric $$(Q)\begin{cases}z_{1}=x_{1}+iy_{1}\\ \vdots\\ z_{p}=x_{p}+iy_{p}\\ y_{p+1}=0\\ \vdots\\ y_{n-1}=0\\ z_{n}=Q(z^{\prime},\bar{z}^{\prime})\end{cases}$$ (25) When $p=1$, $n=2$, this result is due to X. Gong [Gon94] in the hyperbolic case. Proof. Let us apply the main theorem with the zero ideal ${\cal I}:=(0)$. In fact, it is compatible with the $D\tau_{j}(0)$'s and $\rho$. That $M$ is formally equivalent to the quadric $(Q)$ means precisely that the $\tau_{j}$'s are simultaneously formally linearizable. According to the assumption, the $\tau_{j}$'s are simultaneously and holomorphically linearizable. Since the linearizing diffeomorphism $\Psi$ commutes with the linear anti-holomorphic involution $\rho$, $\Psi$ is the complexification of a holomorphic diffeomorphism $\psi$ of $({C}^{n},0)$ which maps $M$ to the quadric $(Q)$. ∎ 4.6 Cutting varieties We want to apply the theorem to the ideal ${\cal I}:=ResIdeal$, the ideal generated by the set ${\cal R}$ of resonant monomials $(\ref{resonnances})$. Almost all pairs of involutions $(\tau_{1},\tau_{2})$ are formally linearizable along the resonant ideal ${\cal I}$. In fact, the normal form theory for the diffeomorphism $\Phi$ tells us that all but a finite number of the monomials appearing in the Taylor expansion of a normal form belong to the resonant ideal. Lemma 4.3. Assume that the non-linear centralizer ${\cal C}_{D\Phi(0)}$ is included in the resonant ideal $ResIdeal$ and that $D\Phi(0)$ has distinct eigenvalues. Then the $\tau_{j}$'s are formally and simultaneously linearizable on ${\cal I}=ResIdeal$. Moreover, ${\cal I}$ is compatible with the $D\tau_{j}(0)$'s and $\rho$. Proof.
Let us write $$\Phi=\begin{cases}\mu_{i}\zeta_{i}+\phi_{i},\quad i=1,\ldots,r+2s\\ \mu_{i}^{-1}\eta_{i}+\psi_{i},\quad i=1,\ldots,r+2s\\ \upsilon+\theta.\end{cases}$$ Since $\Phi=\tau_{1}\circ\tau_{2}$, we have $$\begin{cases}\lambda_{1,i}g_{2,i}+f_{1,i}\circ\tau_{2}=\phi_{i}\\ \lambda_{1,i}^{-1}f_{2,i}+g_{1,i}\circ\tau_{2}=\psi_{i}\\ h_{2}+h_{1}\circ\tau_{2}=\theta.\end{cases}$$ But according to relations $(\ref{rel1})$, $(\ref{rel3})$ and $(\ref{rel2})$, we have $$\begin{cases}g_{2,i}=-\lambda_{2,i}^{-1}f_{2,i}\circ\tau_{2}\\ g_{1,i}=-\lambda_{1,i}^{-1}f_{1,i}\circ\tau_{2}\\ h_{2}+h_{2}\circ\tau_{2}=0.\end{cases}$$ (26) Therefore, we have $$\begin{cases}-(\lambda_{1,i}\lambda_{2,i}^{-1})f_{2,i}\circ\tau_{2}+f_{1,i}\circ\tau_{2}=\phi_{i}\\ \lambda_{1,i}^{-1}f_{2,i}-\lambda_{1,i}^{-1}f_{1,i}\circ\Phi=\psi_{i}\\ -h_{2}\circ\tau_{1}+h_{1}\circ\tau_{2}=\theta.\end{cases}$$ We find that $$\begin{cases}\mu_{i}^{-1}f_{1,i}-f_{1,i}\circ\Phi=\mu_{i}\psi_{i}+\mu_{i}^{-1}\phi_{i}\circ\tau_{2}\\ f_{2,i}=f_{1,i}\circ\Phi+\mu_{i}\psi_{i}\end{cases}$$ (27) Assume that the $\phi_{i}$'s, $\psi_{i}$'s and $\theta$ belong to ${\cal I}$. Assume that the $k$-jet of $\tau_{1}-T_{1}$ and $\tau_{2}-T_{2}$ also belongs to the ideal ${\cal I}$. Then, by equalities $(\ref{rel4})$ and $(\ref{rel5})$, we have that $\mu_{i}^{-1}f_{1,i,k+1}-f_{1,i,k+1}\circ D\Phi(0)$ belongs to ${\cal I}$. Here, $f_{1,i,k+1}$ denotes the homogeneous polynomial of order $k+1$ in the Taylor expansion of $f_{1,i}$ at the origin. Since the centralizer of $D\Phi(0)$ is contained in ${\cal I}$, $f_{1,i,k+1}$ belongs to ${\cal I}$. It follows that $f_{2,i,k+1}$ also belongs to ${\cal I}$. So are the $g_{j,i,k+1}$'s and the $h_{j,k+1}$'s. Therefore, under these assumptions, the $\tau_{j}$'s are formally linearizable on the ideal ${\cal I}$ since $\Phi$ is. About the second point, the $D\tau_{j}(0)$'s leave each monomial $\zeta_{i}\eta_{i}$ invariant.
Moreover, if a monomial $\zeta^{Q}\eta^{R}$ is a first integral of $D\Phi(0)$, then so is $\eta^{Q}\zeta^{R}$. This is due to the fact that if $\mu^{Q}(\mu^{-1})^{R}=1$ then $(\mu^{-1})^{Q}(\mu)^{R}=1$. As a consequence, each first integral of $D\Phi(0)$ is sent to a first integral of $D\Phi(0)$ by the $D\tau_{j}(0)$'s. Furthermore, $\bar{\rho}^{*}$ maps a monomial first integral to another. This is due to the fact that the eigenvalues are distinct. ∎ The assumption of the lemma means that the resonances are generated by the generators of the ring of invariants of the linear part of $\Phi$ at the origin. Theorem 4.3. Assume that the non-linear centralizer ${\cal C}_{D\Phi(0)}$ is included in the resonant ideal $ResIdeal$. Moreover let us assume that $D\Phi(0)$ has distinct hyperbolic eigenvalues and is diophantine (on $ResIdeal$ if the latter is properly embedded). Then, there are good holomorphic coordinates $(v_{1},\ldots,v_{n})$ of ${C}^{n}$, in which the submanifold $M$ intersects the complex linear manifold $\{v_{p+1}=\ldots=v_{n}=0\}$ along a real analytic subset $V$ whose complexification is nothing but the zero locus of the resonant ideal $ResIdeal$: $V=V(ResIdeal)\cap Fix(\rho)$, where $Fix(\rho)$ denotes the fixed point set of $\rho$. Proof. First of all, we make a change of coordinates as in Lemma 4.2. Then, the quadratic functions which are invariant under both $T_{1}$ and $T_{2}$ are linear combinations of monomials of the form $\zeta_{i}\eta_{i}$ and $\upsilon_{j}\upsilon_{k}$. Therefore, in these coordinates, the functions defining the complexified quadric are functions $f,g$ of the $\zeta_{i}\eta_{i}$'s and $\upsilon_{j}\upsilon_{k}$'s. Since these monomials vanish on the zero locus of $ResIdeal$, so do $f$ and $g$. We apply Theorem 4.1: $V(ResIdeal)$ is invariant under the $\tau_{j}$'s and their restrictions to it are equal to $T_{j|V(ResIdeal)}$. Because of hyperbolicity, each irreducible component of $V(ResIdeal)$ is invariant under $\rho$.
Hence, the functions which are invariant under the $\tau_{j}$'s vanish on the zero locus of $ResIdeal$. ∎ When $p=1$, $n\geq 2$ and in the hyperbolic case, this result is due to Wilhelm Klingenberg Jr. [Kli85]. He found, for instance in the case $n=2$, that, in some suitable holomorphic coordinates, the complex analytic set $\{v_{2}=0\}$ intersects $M$ along $\{(\zeta_{1},\eta_{1})\in{R}^{2},\;\zeta_{1}\eta_{1}=0\}$. In fact, in this case, the ring of formal invariants of $\Phi$ is generated by only one element: $\zeta_{1}\eta_{1}$. Remark 4.4. When the spectrum of $D\Phi(0)$ contains elliptic eigenvalues and complex eigenvalues, then one has to use the set $$\tilde{V}=V({\cal I})\cap\left(\bigcap_{i\in{\cal E}}\{\zeta_{i}=\eta_{i}=0\}\right)\cap\left(\bigcap_{j\in{\cal C}}\{\zeta_{r+2j-1}=\eta_{r+2j-1}=\zeta_{r+2j}=\eta_{r+2j}=0\}\right)$$ which is invariant under $\rho$. Then, the conclusion of the previous theorem holds with $\tilde{V}\cap Fix(\rho)$. References [AG05] P. Ahern and X. Gong. A complete classification for pairs of real analytic curves in the complex plane with tangential intersection. J. Dyn. Control Syst., 11(1):1–71, 2005. [BEPR00] M. S. Baouendi, P. Ebenfelt, and L. Preiss Rothschild. Local geometric properties of real submanifolds in complex space. Bull. Amer. Math. Soc. (N.S.), 37(3):309–336, 2000. [Bis65] E. Bishop. Differentiable manifolds in complex Euclidean space. Duke Math. J., 32:1–21, 1965. [CG97] G. Cairns and E. Ghys. The local linearization problem for smooth $sl(n)$-actions. Enseignement Math., 43, 1997. [Cha86] M. Chaperon. Géométrie différentielle et singularités de systèmes dynamiques. Astérisque, 138-139, 1986. [Cof04a] A. Coffman. Analytic normal form for CR singular surfaces in ${C}^{3}$. Houston J. Math., 30(4):969–996 (electronic), 2004. [Cof04b] A. Coffman. Analytic stability of the CR cross-cap. Technical report, September 2004. Pages 1–35. [Gon94] X. Gong.
On the convergence of normalizations of real analytic surfaces near hyperbolic complex tangents. Comment. Math. Helv., 69(4):549–574, 1994. [GS68] V.V. Guillemin and S. Sternberg. Remarks on a paper of Hermann. Trans. Amer. Math. Soc., 130:110–116, 1968. [GY99] T. Gramchev and M. Yoshino. Rapidly convergent iteration method for simultaneous normal forms for commuting maps. Math. Z., 231:745–770, 1999. [Har84] Gary A. Harris. Function theory and geometry of real submanifolds of ${\bf C}^{n}$ near a CR singularity. In Complex analysis of several variables (Madison, Wis., 1982), volume 41 of Proc. Sympos. Pure Math., pages 95–115. Amer. Math. Soc., 1984. [HK95] X. Huang and S. G. Krantz. On a problem of Moser. Duke Math. J., 78(1):213–228, 1995. [Hua04] X. Huang. Local equivalence problems for real submanifolds in complex spaces. In Real methods in complex and CR geometry, volume 1848 of Lecture Notes in Math., pages 109–163. Springer, Berlin, 2004. [Kas13] E. Kasner. Conformal geometry. In Proc. 5. Intern. Math. Congr., pages 81–87, 1913. [Kli85] W. Klingenberg, Jr. Asymptotic curves on real analytic surfaces in ${\bf C}^{2}$. Math. Ann., 273(1):149–162, 1985. [Kus67] A.G. Kushnirenko. Linear-equivalent action of a semi-simple Lie group in the neighbourhood of a stationary point. Funct. Anal. Appl., 1:89–90, 1967. [Mos85] J. Moser. Analytic surfaces in ${\bf C}^{2}$ and their local hull of holomorphy. Ann. Acad. Sci. Fenn. Ser. A I Math., 10:397–410, 1985. [MW83] J. Moser and S. M. Webster. Normal forms for real surfaces in ${\bf C}^{2}$ near complex tangents and hyperbolic surface transformations. Acta Math., 150(3-4):255–296, 1983. [Nak98] I. Nakai. The classification of curvilinear angles in the complex plane and the groups of $\pm$ holomorphic diffeomorphisms. Ann. Fac. Sci. Toulouse Math. (6), 7(2):313–334, 1998. [Pfe15] G. A. Pfeiffer. On the conformal geometry of analytic arcs. Amer. J. Math., 37:395–430, 1915. [Pös86] J. Pöschel.
On invariant manifolds of complex analytic mappings near fixed points. Expo. Math., 4:97–109, 1986. [Rüs77] H. Rüssmann. On the convergence of power series transformations of analytic mappings near a fixed point into a normal form. Preprint I.H.E.S. M/77/178, pages 1–44, 1977. [Rüs02] H. Rüssmann. Stability of elliptic fixed points of analytic area-preserving mappings under the Bruno condition. Ergodic Theory Dynam. Systems, 22(5):1551–1573, 2002. [Sto94] L. Stolovitch. Sur un théorème de Dulac. Ann. Inst. Fourier, 44(5):1397–1433, 1994. [Sto00] L. Stolovitch. Singular complete integrability. Publ. Math. I.H.E.S., 91:133–210, 2000. [Tré03] J.-M. Trépreau. Discrimination analytique des difféomorphismes résonnants de $({C},0)$ et réflexion de Schwarz. Astérisque, 284:271–319, 2003. Autour de l'analyse microlocale. [Web03] S. M. Webster. Pair of intersecting real manifolds in complex space. Asian J. Math., 7(4):449–462, 2003.
The VLA Galactic Plane Survey

J. M. Stil¹ and A. R. Taylor¹; J. M. Dickey²,³ and D. W. Kavars³; P. G. Martin⁴,⁵, T. A. Rothwell⁵, and A. I. Boothroyd⁴; Felix J. Lockman⁶; N. M. McClure-Griffiths⁷

¹ Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
² School of Mathematics and Physics, Private Bag 37, University of Tasmania, Hobart, TAS 7001, Australia
³ Department of Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455, USA
⁴ Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 3H8, Canada
⁵ Department of Astronomy and Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 3H8, Canada
⁶ National Radio Astronomy Observatory, P.O. Box 2, Green Bank, West Virginia 24944, USA
⁷ Australia Telescope National Facility, CSIRO, P.O. Box 76, Epping, NSW 1710, Australia

Abstract The VLA Galactic Plane Survey (VGPS) is a survey of H i and 21-cm continuum emission in the Galactic plane between longitude $18\arcdeg$ and $67\arcdeg$ with latitude coverage from $|b|<1\fdg 3$ to $|b|<2\fdg 3$. The survey area was observed with the Very Large Array (VLA) in 990 pointings. Short-spacing information for the H i line emission was obtained by additional observations with the Green Bank Telescope (GBT).
H i spectral line images are presented with a resolution of $1\arcmin\times 1\arcmin\times 1.56\ \rm km\ s^{-1}$ (FWHM) and rms noise of $2\ \rm K$ per $0.824\ \rm km\ s^{-1}$ channel. Continuum images made from channels without H i line emission have $1\arcmin$ (FWHM) resolution. The VGPS images reveal structures of atomic hydrogen and 21-cm continuum as large as several degrees with unprecedented resolution in this part of the Galaxy. With the completion of the VGPS, it is now possible for the first time to assess the consistency between arcminute-resolution surveys of Galactic H i emission. VGPS images are compared with images from the Canadian Galactic Plane Survey (CGPS) and the Southern Galactic Plane Survey (SGPS). In general, the agreement between these surveys is impressive, considering the differences in instrumentation and image processing techniques used for each survey. The differences between VGPS and CGPS images are small, $\lesssim 6\ \rm K$ (rms) in channels where the mean H i brightness temperature in the field exceeds $80\ \rm K$. A similar degree of consistency is found between the VGPS and SGPS. The agreement we find between arcminute resolution surveys of the Galactic plane is a crucial step towards combining these surveys into a single uniform dataset which covers 90% of the Galactic disk: the International Galactic Plane Survey (IGPS). The VGPS data will be made available on the World Wide Web through the Canadian Astronomy Data Centre (CADC). ISM: atoms — Galaxy: disk — Surveys 1 Introduction The physical processes in the feedback cycle of matter between stars and the interstellar medium play an important role in the evolution of galaxies, and in the way products of stellar nucleosynthesis are dispersed. A quantitative description of these processes requires knowledge of the poorly known topology of different phases of the interstellar medium and the timescales and locations of transitions between these phases. 
Atomic hydrogen is the link between the gas heated or expelled by massive stars and cold molecular gas from which new stars form. H i is widely distributed and, within certain limits, easily observable across the Galaxy through the 21-cm line. As such, atomic hydrogen has been used to study the dynamics of the Galaxy and physical conditions in the diffuse interstellar medium. To study the interstellar medium in transition, from the cold phase to the warm phase or vice versa, or the dynamical effect of stars on the interstellar medium, requires observations which reveal parsec-scale structures. For objects outside the solar neighborhood, resolving parsec-scale structures requires arcminute-resolution images. The distribution of cold atomic gas may be revealed by absorption of continuum emission or absorption of the H i line emission itself (H i self absorption). Results from such observations depend strongly on the resolution of the data (Dickey & Lockman, 1990). To place these processes into a Galactic context, the high resolution image should also show structure on large scales. At 21 cm wavelength, images with arcminute resolution and a large spatial dynamic range can be obtained by an interferometer in combination with a large single dish telescope to fill in large scale structure not detected by the interferometer. Previously the Canadian Galactic Plane Survey (CGPS) (Taylor et al., 2003) and the Southern Galactic Plane Survey (McClure-Griffiths et al., 2001, 2005) have provided high-resolution H i images in the northern sky (mainly the second Galactic quadrant) and the southern sky (third and fourth Galactic quadrants). The CGPS will be extended to longitude $50\arcdeg$, while the SGPS has been extended to longitude $20\arcdeg$. A large part of the first Galactic quadrant is located near the celestial equator. 
This area of the sky cannot be observed at sufficient angular resolution by the interferometers used in the CGPS and SGPS, because these interferometers have exclusively or mostly east-west baselines. At the extremes of the CGPS and SGPS, the angular resolution of these surveys is degraded by a factor $\sim 3$ in declination. The Very Large Array (VLA) can observe the equatorial part of the Galactic plane with adequate resolution. In this paper we present the VLA Galactic Plane Survey (VGPS). The VGPS is an H i spectral line and 21-cm continuum survey of a large part of the first Galactic quadrant. This survey was done with the Very Large Array (VLA) and the 100 m Robert C. Byrd Green Bank Telescope (GBT) of the National Radio Astronomy Observatory (NRAO). Short spacing information for the VGPS continuum images was provided by a continuum survey with the 100 m Effelsberg telescope (Reich & Reich, 1986; Reich et al., 1990). The CGPS, SGPS, and VGPS will be combined into a single data set which provides arcminute-resolution images of Galactic H i for 90% of the Galactic plane as part of the International Galactic Plane Survey (IGPS). With the completion of the VGPS, different parts of the IGPS overlap for the first time, allowing a detailed comparison of the results from each survey. This paper describes the VGPS data, and compares the VGPS spectral line images with those of the CGPS and SGPS in the areas where the surveys overlap. 2 Observations and data reduction 2.1 VLA observations The main set of observations of this survey was done with the Very Large Array (VLA) of the NRAO. The technical specifications of the array are described in detail by Taylor, Ulvestad & Perley (2003). The VLA is an interferometer with 27 elements, each 25 m in diameter ($32^{\prime}$ FWHM primary beam size at 21 cm). Several VLA fields must be combined in a mosaic to image an appreciable part of the Galaxy.
The most compact configuration of VLA antennas, the D-configuration, has baselines between 35 m and 1.03 km, and is the most suitable for imaging of widespread Galactic H i emission. For short observations (snapshots), the largest angular scale that can be observed reasonably well is $\sim$450 arcseconds at 21 cm. The synthesized beam size is about 45 arcseconds (FWHM) at 21 cm. The total amount of observing time at the VLA allocated to this survey was 260 hours in the period July to September 2000. In addition to the VLA observations, a fast survey with the 100 m GBT of the NRAO was done to obtain the necessary short-spacing information for the H i spectral line images. Table 1 lists basic parameters of the VGPS. 2.1.1 Mosaic Strategy An interferometer survey using the mosaicking technique to combine many primary beam areas into a large map begins with the choice of the area to cover and the telescope time available. These two numbers set the overall sensitivity or noise level of the survey, but this sensitivity is not uniform over the area. The gain in a single VLA field varies over the $32^{\prime}$ (FWHM) primary beam of the VLA antennas. When multiple fields are combined in a mosaic, the spacing between pointing centers determines the corrugation or spatial variation of the sensitivity function. A centered hexagonal geometry is optimum for flattening the sensitivity function, for a given number of pointings, but the function can always be flattened further by decreasing the spacing between adjacent beam centers and so increasing the total number of pointings observed. The overhead costs in telescope drive time and the complexity of the data reduction are increased as the number of pointings increases, so there is a compromise required between flattening the sensitivity function and reducing the number of pointings. 
For this survey we have used the relatively wide spacing of $25\arcmin$ between points, which is not much less than the full-width to half-power point of the VLA beam at $\lambda$ 21-cm of $32\arcmin$. This results in the sensitivity function shown in Figure 1. The $25\arcmin$ spacing choice is driven by the 20 second minimum lag time between the end of data taking on one scan and the beginning of the next, which is exacerbated by the need to delete the first 10 second data average of each scan. These features of the current VLA data acquisition system make it a relatively slow telescope for mosaicking. There are other options available for data taking while driving the telescope (mode OF), but these proved to be impractical for this survey. The D configuration of the VLA is subject to shadowing, i.e. blockage of one antenna by another, at low elevations. For a mosaic survey, it is even more important than usual to avoid shadowing, because of its effect on the primary beam shape of the blocked antenna, which compromises the estimate of the telescope response to a model source that is needed for the maximum entropy deconvolution. This survey includes many fields at negative declinations, for which the available hour angle range is minimal. Avoiding shadowing becomes the first driver of the observing strategy. Figure 2 shows a rough guide for the hour angle and declination range which is safe from shadowing for the D configuration. All scans for the primary survey area ($|b|<1\arcdeg$, $18\arcdeg<l<65\arcdeg$) were taken without shadowing, as were most in the secondary area ($|b|>1\arcdeg$). Besides avoiding shadowing and minimizing telescope drive times, the most important consideration for scheduling was obtaining multiple scans on each field at widely spaced hour angles. This is the best way to minimize sidelobes due to large unsampled regions of the uv plane that compromise the dynamic range of the resulting maps. 
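The trade-off between pointing spacing and sensitivity corrugation described above can be explored numerically. The following sketch is our own illustration, assuming an idealized Gaussian primary beam and a perfect centered hexagonal grid (the real VLA beam and the survey's imaging weights differ), and estimates the peak-to-trough variation of the mosaic sensitivity for the $25\arcmin$ spacing:

```python
import numpy as np

FWHM = 32.0     # VLA primary beam FWHM at 21 cm (arcmin)
SPACING = 25.0  # VGPS pointing spacing (arcmin)
sigma = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def gain(dx, dy):
    """Gaussian approximation to the primary beam power pattern."""
    return np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))

# Centered hexagonal grid: rows separated by SPACING*sqrt(3)/2,
# alternate rows offset by half a spacing.
centers = [(col * SPACING + (row % 2) * SPACING / 2.0,
            row * SPACING * np.sqrt(3.0) / 2.0)
           for row in range(-4, 5) for col in range(-4, 5)]

# Mosaic sensitivity: noise decreases as the root of the summed
# squared gains of all overlapping pointings.
x, y = np.meshgrid(np.linspace(-12, 12, 121), np.linspace(-12, 12, 121))
weight = sum(gain(x - cx, y - cy)**2 for cx, cy in centers)
sens = np.sqrt(weight)

print("peak-to-trough sensitivity variation: %.0f%%"
      % (100.0 * (sens.max() - sens.min()) / sens.max()))
```

Decreasing `SPACING` in this sketch flattens the variation, at the cost of more pointings, which is the compromise discussed in the text.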
The scheduling process was driven by the need to spread the observations of each field over the available hour angle range at the declination of that field, and still fit everything into about 25 sessions, each typically covering $14^{\rm h}<{\rm LST}<24^{\rm h}$. The low longitude end, which is observable only from $\sim 16^{\rm h}$ to $\sim 20^{\rm h}$ LST, was hard to fit into approximately 25 sessions. The area north of $+10\arcdeg$ declination ($l>45\arcdeg$) is relatively easy to schedule. For simplification, each row of pointings at constant longitude (six beam areas, 20 minutes of integration time total) was scheduled in a block. The final observing sequence gave hour angle coverages as shown in Figure 3. Top priority for scheduling was given to the primary area of the survey, latitude $-1\arcdeg$ to $+1\arcdeg$, longitude $18\arcdeg$ to $65\arcdeg$. The fields at higher positive and negative latitudes were observed with second priority, so their uv coverage is less evenly distributed over the available hour angle range. Generally the strongest continuum sources that cause the worst dynamic range problems are located in the primary area, so this strategy is appropriate. However it should be kept in mind that the quality of the imaging in the area of the survey with $|b|>1\arcdeg$ is degraded relative to the lower latitudes. Most fields (93%) were observed at least three times at different hour angles, and 38% of the fields were observed four or more times. About 7% of the fields were observed only 2 times. The theoretical sensitivity of the spectral line mosaics is 8.4 mJy (1 $\sigma$) or 1.4 K per $0.824\ \rm km\ s^{-1}$ channel for the beam size of $60\arcsec$. The rms noise amplitude in the final VGPS images is $1.8\ \rm K$ per channel on average. The actual spatial variation of the noise in the final VGPS images deviates somewhat from the regular pattern shown in Figure 1 because of differences in the integration time per field. 
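The sensitivity figures quoted above (8.4 mJy, or 1.4 K per channel in a $60\arcsec$ beam) can be cross-checked with the standard Rayleigh-Jeans conversion between flux density and brightness temperature; a minimal sketch (function name and constants are ours, not from the survey pipeline):

```python
import numpy as np

C = 2.998e8      # speed of light (m/s)
K_B = 1.381e-23  # Boltzmann constant (J/K)

def flux_to_tb(s_mjy, theta_arcsec, nu_ghz=1.4204):
    """Rayleigh-Jeans brightness temperature (K) of a Gaussian beam
    of FWHM theta_arcsec filled with flux density s_mjy."""
    s = s_mjy * 1e-3 * 1e-26                         # mJy -> W m^-2 Hz^-1
    theta = np.deg2rad(theta_arcsec / 3600.0)        # FWHM in radians
    omega = np.pi * theta**2 / (4.0 * np.log(2.0))   # Gaussian beam solid angle
    nu = nu_ghz * 1e9
    return C**2 * s / (2.0 * K_B * nu**2 * omega)

# 8.4 mJy rms in a 60" beam at the H I frequency:
print("%.1f K" % flux_to_tb(8.4, 60.0))  # ~1.4 K, matching the quoted value
```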
Such differences exist because of variation in the number of visits to a field and because of a longer dwell time on the first field of a block of six. 2.1.2 Spectrometer This survey pushes the limits of the VLA correlator in that we need high resolution in velocity ($0.824\ \rm km\ s^{-1}$ = 3.90 kHz channel spacing) and broad bandwidth ($\sim 300\ \rm km\ s^{-1}$ = 1.4 MHz total). This was impossible to obtain with the existing spectrometer for two polarizations at once. To sacrifice one of the two circular polarizations would be equivalent to giving up half the integration time of the survey, so we chose a strategy that keeps both polarizations but with the coarser velocity channel spacing of $1.28\ \rm km\ s^{-1}$. We then stagger the placement of the channels between the two polarizations by half a channel spacing, $0.64\ \rm km\ s^{-1}$, so that the sampling on the spectral axis may be increased when all data are combined (Figure 4). The VGPS spectral line data are sampled on the same $0.824\ \rm km\ s^{-1}$ spectral channels as the CGPS for maximum consistency between the two datasets. The spectral resolution of the data ($1.21\times 1.28\ \rm km\ s^{-1}=1.56\ \rm km\ s^{-1}$) is determined by the size of the time lag window in the spectrometer. It is not changed by resampling to narrower spectral channels. The centers of the two polarizations are offset by $+32.304\ \rm km\ s^{-1}$ and $-31.460\ \rm km\ s^{-1}$ from the nominal center velocity, which is set at $v_{c}(l)=+80-(1.6\times l)\ \rm km\ s^{-1}$ with $l$ the longitude in degrees. Combining these gives spectra with velocity width $341\ \rm km\ s^{-1}$ after dropping 20 channels on either side of the band due to the baseband filter shape. The H i line emission is unpolarized except for tiny amounts due to the Zeeman effect, which are far below our sensitivity limit. 
But to avoid spurious spectral features arising from H i absorption of linearly polarized continuum, which is common in the synchrotron emission at low latitudes, we alternate the frequency settings between the two polarizations every 100 seconds. So a single observation of a survey field consists of two short integrations (100s each) with complementary spectrometer settings. This gives enough bandwidth to cover the range of velocities in the first Galactic quadrant, +150 to $-80\ \rm km\ s^{-1}$ at the lower longitudes, with $0.824\ \rm km\ s^{-1}$ channels throughout. The local oscillator settings for the survey were $-3.2$, $3590$ with bandwidth code 5 (1.5625 MHz, of which the inner $\sim$ 85% is usable) and correlator mode 2AD. No on-line Hanning smoothing was performed, and the single dish bandpass shape was not used to normalize the spectra, as is sometimes done as part of the correlation step. 2.2 VLA Calibration Calibration of the VLA data was carried out using standard procedures within AIPS. The primary calibrators 3C286 and 3C48 were used for flux and bandpass calibration. After calibration, the visibility data were imported into MIRIAD for further processing. Editing out glitches in the large volume of visibility data for the 990 VLA fields was done with an automated flagging routine. The visibility data for the VGPS fields were searched for high-amplitude points relative to the median visibility amplitude for a particular baseline in a particular channel and for a particular spectrometer/polarization combination. The thresholds applied in this procedure were chosen after careful inspection of the data. Amplitudes more than 20 Jy above the median amplitude in the duration of the snapshot were flagged. Also, scans with an overall median amplitude above 50 Jy were flagged to eliminate saturated antennas. Special care was taken not to label legitimate signal on the shortest baselines as bad data. 
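The amplitude-based flagging rules just described can be sketched as follows. This is an illustrative re-implementation of the thresholds stated in the text (20 Jy above the per-baseline median, 50 Jy overall median), with a hypothetical array layout; it is not the actual MIRIAD-based routine used for the survey:

```python
import numpy as np

GLITCH_EXCESS = 20.0    # Jy above the per-baseline median
SCAN_MEDIAN_MAX = 50.0  # Jy; above this the antenna is assumed saturated

def flag_snapshot(amps):
    """Return a boolean flag mask for one snapshot.

    `amps` has shape (ntime, nbaseline, nchannel): visibility amplitudes
    in Jy for one spectrometer/polarization combination.
    """
    # Median over time for each baseline and channel: the snapshot is
    # short enough that a constant amplitude can be assumed.
    med = np.median(amps, axis=0)            # (nbaseline, nchannel)
    glitch = amps > med + GLITCH_EXCESS      # high-amplitude points

    # A glitch in any channel flags that time/baseline in all channels,
    # keeping the uv sampling identical across the band.
    flags = np.repeat(glitch.any(axis=2, keepdims=True),
                      amps.shape[2], axis=2)

    # Scans whose overall median amplitude is too high are dropped entirely.
    if np.median(amps) > SCAN_MEDIAN_MAX:
        flags[:] = True
    return flags
```

In the survey itself the mask was then subject to visual inspection and a possible human veto before being applied.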
The snapshots are sufficiently short that a constant visibility amplitude can be assumed for each baseline in a single channel in this search. The flagging procedure allowed streamlined visual inspection of identified bad data and, if necessary, a human veto before the actual flagging. No false rejections were found because the rejection criteria were sufficiently conservative. If visibilities for a particular combination of time, baseline and polarization were rejected in one channel, all channels were rejected so as to keep the uv sampling identical for all channels. Lower amplitude glitches often had higher amplitude counterparts in other channels. The policy to flag all channels was found to be effective in eliminating low-amplitude glitches as well. After the automated flagging procedure, only a few incidental manual flagging operations were required to make spectral line and continuum images free from noticeable effects of bad data. Preliminary continuum mosaics were constructed from visibility data averaged over channels outside the velocity range of Galactic H i line emission. These mosaics were analyzed with an automated source extraction algorithm to compare the fluxes of compact continuum sources with fluxes in the NVSS survey (Condon et al., 1998). Sources were labeled as suspected variables and removed from consideration if the ratio of the absolute value of the difference between the NVSS and VGPS fluxes and the mean of these fluxes was larger than 10 times the formal error. This comparison between the NVSS and VGPS showed that fluxes of compact sources in the VGPS were on average 30% less than fluxes listed in the NVSS. The origin of this discrepancy is not understood. It is believed to be the result of the higher system temperature in the VGPS observations, which is in part the result of bright H i emission in the Galactic plane. 
The VLA has an automatic gain control (AGC) system that scales the signal with the system temperature, but visibility amplitudes are corrected for this scaling. First we discuss various factors that affect the system temperature in the VGPS. Later we derive a correction to the flux scale of the VGPS to make it consistent with the NVSS. 2.2.1 Contributions to the system temperature Compared with the NVSS, there are two important enhanced contributions to the system temperature. One contribution is from bright Galactic H i and continuum emission, which can double the system temperature averaged over the inner 75% of the frequency band. The system temperature changes across the spectral band because the brightness of the H i line changes with velocity. The other contribution is from spillover to the ground when a field is observed at low elevation. Emission from the atmosphere also depends on elevation, adding 2 to 4 K to the system temperature. The effect of spillover is an order of magnitude larger than this. The average system temperature of the VLA antennas at the zenith is $T_{\rm sys}\approx 35\ \rm K$. The system temperature increases rapidly at low elevation, to approximately $70\ \rm K$ at elevation $30\arcdeg$. VGPS observations were made over a wide range of hour angles to obtain adequate sampling in the uv plane. As a result, fields were regularly observed far from the meridian at low elevation. In contrast, when the NVSS was made, its fields were observed close to the meridian in order to minimize ground noise. This strategy is more suitable for a continuum survey which targets compact sources. The system temperature for each antenna of the VLA is recorded and stored with the visibility data. However, for the present purpose it is more convenient to adopt a different measure of $T_{\rm sys}$ derived directly from the visibility data.
The scalar-averaged amplitude of the continuum-subtracted visibilities (abbreviated here as ampscalar) gives a spectrum that is proportional to the rms visibility amplitude in each sample; it is noise dominated (proportional to $T_{\rm sys}$). It takes into account the editing of data rejected by the automated flagging routine, and it is not necessarily averaged in frequency as is the system temperature recorded with the data. After continuum subtraction, only the shortest baselines contain some correlated signal because of the Galactic H i line. When averaging the ampscalar data over all antennas, this remaining signal has a negligible effect. This was verified by comparing the result with ampscalar values averaged over baselines longer than $1\ \rm k\lambda$. The ampscalar values are proportional to the recorded system temperature averaged over the array. We write the total system temperature as the sum of the receiver temperature $T_{\rm rec}\approx 35\ \rm K$, an elevation-dependent term $T_{\rm earth}(h)$ which includes atmospheric emission but is usually dominated by spillover to the ground, and the brightness temperature of cosmic radio emission $T_{b}(v)$ which depends on velocity because of the bright Galactic H i line, $$T_{\rm sys}(v,h)=T_{b}(v)+T_{\rm earth}(h)+T_{\rm rec}$$ (1) The spectral line data of the VGPS allow separation of the contribution of Galactic emission to $T_{\rm sys}$ from other contributions. This in turn allows us to derive a new functional form for the elevation dependence of $T_{\rm earth}(h)$. The brightness temperature in each velocity channel, $T_{b}(v)$, averaged over the VLA primary beam, was determined by smoothing the GBT maps and the Effelsberg maps to the resolution of the VLA primary beam. Figure 5 shows the relation between the sky brightness temperature (line + continuum) averaged over the primary beam of the VLA and the scalar-averaged amplitude per channel.
Three visits to the same field on three different days are shown. The main difference between the snapshots in Figure 5 is the elevation of the field at the time of observation. Some fields were observed at nearly the same elevation on different days. Such observations have nearly indistinguishable values of ampscalar. The relation between ampscalar and brightness temperature was fitted with a linear relation to allow extrapolation to $T_{b}=0\ \rm K$. The contribution of Galactic emission to the system temperature is eliminated by this extrapolation. We refer to this extrapolation as the scalar-averaged amplitude at $T_{b}=0$ or ampscalar at $T_{b}=0$, which is proportional to $T_{\rm earth}(h)+T_{\rm rec}$ according to Equation (1). We find that the slope of the relation in Figure 5 increases as the ampscalar at $T_{b}=0$ increases. Figure 6 shows the relation between the scalar-averaged visibility amplitude and elevation of the field at the time of observation for all snapshots taken on 2000 September 15. A poor correlation is found between the raw band-averaged ampscalar and elevation. After correction for the contribution by Galactic emission, a very tight relation is found between the ampscalar at $T_{b}=0$ and elevation. The scatter in this relation is consistent with the estimated errors in the extrapolation to $T_{b}=0$. This extrapolation is less accurate towards the brightest continuum sources because the relation as shown in Figure 5 is not well defined. The points which do not fit on the relation represent snapshots with very bright continuum. Similar results were obtained for each day of VGPS observations. Figure 6 illustrates the relative importance of factors which raise the system temperature. At high elevations, Galactic emission roughly doubles the system temperature in the Galactic plane. This increase is mostly due to the bright H i line, but the continuum also contributes 5 to 20 K to the system temperature, depending on longitude. 
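A minimal sketch of this per-snapshot extrapolation, assuming the fit is an ordinary least-squares straight line (the exact fitting details of the pipeline are not specified in the text):

```python
import numpy as np

def ampscalar_at_tb0(tb, ampscalar):
    """Straight-line fit of ampscalar versus primary-beam-averaged
    brightness temperature T_b, extrapolated to T_b = 0 K to remove
    the Galactic contribution to the system temperature."""
    slope, intercept = np.polyfit(tb, ampscalar, 1)
    return intercept  # fitted ampscalar at T_b = 0
```

The intercept is then proportional to $T_{\rm earth}(h)+T_{\rm rec}$ according to Equation (1).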
The brightest continuum sources (W49 and W51) actually contribute more to the system temperature than the H i line when averaged over the spectral band. The tight correlation between ampscalar at $T_{b}=0$ and elevation resembles the increase in $T_{\rm sys}$ with elevation from spillover to the ground measured by Taylor, Ulvestad & Perley (2003), who applied a second-order polynomial fit to the data. It is difficult to interpret the results of a polynomial fit when comparing different days of observation, because some days covered more fields at low elevation than others. Inspection of all the data for each day separately showed that the elevation dependence of ampscalar at $T_{b}=0$ is also described well with two free parameters by the form $$A=a\cos^{4}h+b$$ (2) Equation (2) was fitted to each day and polarization separately. Three fields centered on ($l$,$b$) = ($43.3$,$-0.1$), ($49.0$,$-0.5$), and ($49.4$,$-0.3$) were excluded from these fits. These fields are most affected by W49 and W51. Results of the fits are listed in Table 2. Equation (2) provides an accurate fit of the elevation-dependent noise contribution for each day. Note that the scatter around the fits in Figure 6 is much smaller than the difference between the L and R polarizations. The mean difference between the two polarizations is $\langle b_{L}-b_{R}\rangle=0.13\pm 0.05$. The difference in ampscalar between the polarizations is equivalent to a system temperature that is approximately 12% higher in L than in R. The rms residuals of the fits ($\sigma_{L}$ and $\sigma_{R}$ in Table 2) are 0.06 in the mean. Inspection of the fits indicated that larger values of $\sigma_{L}$ and $\sigma_{R}$ indicate variation in the system temperature during the day. Such variation may be related to solar activity, in particular in September, when the angular distance of the Sun from the survey region was smaller.
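Equation (2) is linear in the parameters $a$ and $b$, so fitting it reduces to a straight-line fit in $x=\cos^{4}h$. A sketch (function names are ours; the helper that predicts the zenith value simply subtracts the elevation-dependent excess $a\cos^{4}h$):

```python
import numpy as np

def fit_elevation_model(h_deg, amp):
    """Least-squares fit of A = a*cos^4(h) + b (Equation 2) to the
    ampscalar-at-Tb=0 values as a function of elevation h in degrees.
    Linear in (a, b), so a polyfit in x = cos^4(h) suffices."""
    x = np.cos(np.radians(h_deg)) ** 4
    a, b = np.polyfit(x, amp, 1)
    return a, b

def predicted_zenith_amp(amp, h_deg, a):
    """Remove the elevation-dependent excess a*cos^4(h), predicting the
    noise amplitude the snapshot would have shown at the zenith
    (illustrative helper)."""
    return amp - a * np.cos(np.radians(h_deg)) ** 4
```

At the zenith ($h=90\arcdeg$) the model reduces to $A=b$, so the corrected value is directly comparable between fields observed at different elevations.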
Inspection of the fits showed that variations in $a_{L}$ and $a_{R}$ in Table 2 appear to be related mainly to intra-day variation. However, the range of values of $b_{L}$ and $b_{R}$ represents real variations in the receiver temperature $T_{\rm rec}$ during the period of observations. The fits in Table 2 allow us to make an elevation correction for the noise amplitude for each snapshot to obtain a prediction of the noise level if the field had been observed at the zenith. Figure 7 shows the distribution per VGPS field of the smallest value of the band-averaged ampscalar of all snapshots contributing to a field. This map typically shows the ampscalar for the observation of each field at the highest elevation. Most of the structure in the distribution of the band-averaged ampscalar corresponds to the VGPS observing sequences of six fields in a row. A few bright continuum sources near the Galactic plane can be identified as well. The observing pattern visible in the distribution of the band-averaged ampscalar shows that almost everywhere in the survey area elevation-dependent contributions to $T_{\rm sys}$ are important. Elevation-dependent effects and sky emission contribute roughly equal amounts to the total system temperature (Figure 6). The minimum ampscalar per field tends to be higher at low longitudes, in part because these fields transit the meridian at a lower elevation. Equation (2) allows a correction to be made for the elevation of the field at the time of observation. The parameters $a$ and $b$ were determined for each day and for each polarization separately by fitting Equation (2) to the available data as illustrated in Figure 6. The elevation-corrected band-averaged ampscalar was calculated by subtracting the excess in the band-averaged ampscalar value resulting from the elevation of the field. The map of the elevation-corrected ampscalar in Figure 7 shows a higher noise level in the Galactic plane and toward bright continuum sources.
Some residuals of the elevation correction remain in this map. These are likely the result of intra-day variations that are not traced by the daily average of the elevation correction applied here. The distribution of the elevation-corrected ampscalar shows a strong resemblance to the VGPS continuum image. This resemblance is not inconsistent with the fact that the H i line usually contributes more to the level of $T_{\rm sys}$ than the continuum. The visualization in Figure 7 tends to emphasize the variations between adjacent fields. Field-to-field variations in the velocity-integrated H i brightness are fairly small compared with field-to-field variations in the continuum level. 2.2.2 Correction of the VGPS flux scale The remainder of this section describes a method to adjust the flux scale of the VGPS to the flux scale of the NVSS. The Canadian Galactic Plane Survey also corrects its flux scale to the NVSS by comparing the fluxes of continuum sources in each CGPS field as part of the field registration process described in Taylor et al. (2003). A direct comparison of source fluxes per VGPS field is not possible because, on average, only four compact continuum sources are available per VGPS field. Bright extended continuum emission from H ii regions and supernova remnants also inhibits a comparison with NVSS sources. A subset of the VGPS fields contains one or more isolated compact continuum sources that can be compared with corresponding entries in the NVSS catalog. We obtain an empirical relation between the NVSS to VGPS flux ratio and the system temperature, including all contributions, as measured by the band-averaged ampscalar. This relation is used to predict a specific correction factor for each snapshot. To this end, each continuum snapshot was imaged and cleaned individually. Cleaning VLA snapshot images is difficult because of the high level of sidelobes of the synthesized beam. This is a particular concern for faint sources. 
Therefore, only sources with a flux density larger than 100 mJy were considered. Sources identified in the snapshot images were matched with sources in the NVSS catalog. A source was accepted if its position was within $15\arcsec$ of the NVSS position, and if the deconvolved size of the source in the NVSS catalog was less than $60\arcsec$. These requirements are aimed at avoiding misidentification while retaining a sufficient number of sources to determine a relationship between the band-averaged ampscalar and the flux ratio NVSS/VGPS. The snapshot images were sorted into narrow ranges of the band-averaged ampscalar. For each narrow range, a list of identified sources was compiled. Each of these source lists contains continuum sources throughout the VGPS survey area detected in snapshots with nearly the same band-averaged ampscalar. Figure 8 shows the ratio of NVSS flux to VGPS flux as a function of the band-averaged ampscalar. A tight relation is found between the band-averaged ampscalar and the flux ratio NVSS/VGPS. This relation defines the correction factor for the flux scale of each snapshot based on its band-averaged ampscalar. The scatter in the relation shown in Figure 8 is dominated by source number statistics. The effect of the sample size was investigated by regenerating the relation for fifty randomly selected subsamples, each half the size of the total number of snapshots. The error bars in Figure 8 show the rms variation for each bin over the fifty subsamples. Values of the band-averaged ampscalar are always larger than 1.4 in the VGPS. A second-order polynomial was fitted to the data to provide a prescription for the correction factor, as a first-order polynomial fit was deemed insufficient. A small number of fields that include very strong continuum emission have a band-averaged ampscalar outside the range where the relation is defined. We assume the relation to be constant for band-averaged ampscalar values larger than 3.8.
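A hedged sketch of how such a prescription could be evaluated per snapshot; the polynomial coefficients below are placeholders for illustration, not the fitted values from the survey:

```python
import numpy as np

def flux_correction_factor(band_avg_ampscalar, coeffs, clamp=3.8):
    """Evaluate the second-order polynomial prescription for the
    NVSS/VGPS flux ratio as a function of band-averaged ampscalar;
    the relation is held constant above ampscalar = 3.8, where it is
    not defined. coeffs are placeholder values, not the published fit."""
    x = min(band_avg_ampscalar, clamp)
    return float(np.polyval(coeffs, x))
```

Each snapshot's gain would then be multiplied by this single factor before mosaicking.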
All VGPS continuum mosaics were made again with this correction to the flux scale. These new mosaics were searched for compact continuum sources and the fluxes of these sources were compared with fluxes listed in the NVSS catalog. This comparison included many faint sources that were not included in the derivation of the flux correction. The flux scale of the VGPS mosaics was found to be consistent with the NVSS flux scale within 5%. The success of applying the well-defined relation in Figure 8 adds confidence to the initial assumption that the difference in the flux scale between VGPS and NVSS continuum images was related to the higher system temperature in the VGPS. However, the negative slope in this relation implies that the largest correction to the flux scale is necessary for VGPS snapshots with the lowest system temperature. This result remains unexplained. However, we note that increased contributions to the system temperature by bright continuum emission and by spillover to the ground at smaller Galactic longitudes must also have occurred to some extent in the NVSS observations. The response of the VLA to increased system temperature has been tested by following a flux calibrator during a time span of several hours as it approaches the horizon. These experiments indicate that the raw correlator output is adequately corrected for the scaling of the signal by the AGC (R. A. Perley, private communication). Calibrators observed for the VGPS are not well suited to repeat this experiment because they cover a very limited range in elevation. In particular, the primary flux calibrators were consistently observed at high elevation. With limited significance, we confirmed that the raw correlator output for the secondary calibrators did not change with elevation. Our analysis shows that a calibrator must be followed to very low elevations to probe the system temperature regime of VGPS observations. 
The behavior of ampscalar at $T_{b}=0$ in Figure 6 is representative of a calibrator, because the $T_{b}(v)$ term in Equation (1) is very small for a calibrator. A comparison of the two panels of Figure 6 shows that the system temperature in VGPS target fields almost always exceeds the system temperature of a calibrator at an elevation of only $25\arcdeg$. However, observations of a calibrator as it sets to elevations as low as $10\arcdeg$ (Taylor, Ulvestad & Perley, 2003) cover the system temperature range of most VGPS observations. The relation in Figure 8 compares flux measurements of fairly bright, relatively isolated, compact continuum sources. These flux measurements should not be affected by modest differences in the uv coverage between the VGPS and the NVSS. Since ampscalar values do not change if the minimum baseline is taken as long as $1\ \rm k\lambda$, the ampscalar values are also not affected by differences in uv coverage. Therefore, we believe that the behavior in Figure 8 is not related to differences in uv coverage between the VGPS and the NVSS. In summary, the VGPS calibration follows these steps:
1. Standard gain and phase calibration was done in AIPS for all snapshots. The gain and phase corrections were applied to the data before importing the visibilities into MIRIAD for imaging.
2. Visibilities were channel-averaged (continuum images) or continuum-subtracted (spectral line images) as appropriate.
3. The gain for each snapshot was adjusted by a single factor derived from its band-averaged ampscalar and the relation shown in Figure 8. A small number of fields was selected for self-calibration after construction of the mosaics (Section 2.4.1).
2.3 GBT observations Short-spacing information for the H i line emission was obtained using the Green Bank Telescope (GBT) of the National Radio Astronomy Observatory (NRAO).
The GBT is a 100 m dish with an off-axis feed arm for an unblocked aperture, which reduces sidelobe levels, radio frequency interference, spectral standing waves, and the effects of stray radiation. The extent of the GBT survey varies with longitude, covering $|b|\leq 1\fdg 3$ for longitudes $18\arcdeg\leq l\leq 45\arcdeg$ and $|b|\leq 2\fdg 3$ for longitudes $45\arcdeg\leq l\leq 67\arcdeg$. Observations of these regions were made on 2002 November 21$-$25, 2003 March 6$-$9, 2003 May 26$-$27, and 2003 August 29$-$30. The observing strategy was to make small, $\Delta l=2\arcdeg-5\arcdeg$, H i maps ‘on the fly.’ Using this technique, the telescope was driven at a rate of $3\arcdeg$ per minute with a sample written every second. After driving through the full latitude range, the telescope was stepped $3\arcmin$ in longitude. This process was continued until the particular map was completed. The data were taken in frequency-switching mode using the GBT spectral processor with a total bandwidth of $5\ \rm MHz$ across 1024 channels. The resulting channel spacing is $1.03\ \rm km\ s^{-1}$ and the spectral resolution is $1.25\ \rm km\ s^{-1}$ (FWHM). Data were taken by in-band frequency switching, yielding a total velocity coverage of $530\ \rm km\ s^{-1}$ centered at $+50\ \rm km\ s^{-1}$ LSR. IAU standard regions S6 and S8 (Williams, 1973) were observed and used for absolute brightness temperature calibration. The angular resolution of the GBT data is $\sim 9\arcmin$. The final GBT H i spectra have an rms noise of $\sim 0.3\ \rm K$ in emission-free channels. Imaging and data calibration were done with the AIPS++ data reduction package. A first-order polynomial was fitted to the off-line channels to remove residual instrumental baseline structure. Polynomial fits of higher order were also attempted, but these were found to be no better than a first-order polynomial fit. Gridding of the data was done with the task IMAGEMS, using a BOX gridding function.
The latitude and longitude size of each cell was set to $3\arcmin$, equal to the longitude spacing of the observations. Each of the small H i maps had a narrow overlap in longitude with the adjacent maps. After applying the absolute brightness temperature calibration, the overlap regions were compared for consistency. A majority of the overlap regions showed small discrepancies of order 1$-$3 K, but a few regions were found to have brightness temperature differences of up to 6$-$8 K. The overlap regions were used to scale the cubes observed on different days to a common calibration, which is consistent to within $3\%$. As a second check of the observed brightness temperature consistency, a small portion of the Galactic plane ($65\arcdeg\leq l\leq 67\arcdeg$; $-1.3\arcdeg\leq b\leq+1.3\arcdeg$) was observed twice, once during the 2002 November 21$-$25 observing session and again during the 2003 August 29$-$30 session. Only small differences of a few Kelvin were found between these maps. 2.4 Imaging The definition of VGPS mosaic images is a direct extension of the set of mosaic images of the CGPS. VGPS images are sampled in position and velocity on the same grid as CGPS images. Mosaics of $1024\times 1024$ pixels ($5\fdg 12\times 5\fdg 12$) are centered at intervals of $4\arcdeg$ in longitude. This provides significant overlap between mosaics for coverage of objects near the mosaic boundary. The velocity axis of VGPS spectral line cubes is sampled with the same channel definitions as the spectral line cubes of the CGPS, but the velocity range covered by the VGPS is different. There are three VGPS data products as described below: continuum images which include short-spacing data, continuum-subtracted H i spectral line images which include short-spacing data, and continuum-included spectral line images which do not include short-spacing data. All VGPS data products will be made available on the World Wide Web through the Canadian Astronomy Data Centre (CADC).
2.4.1 Continuum VGPS line or continuum images are mosaics of several VLA pointings. As it is impractical to process the entire survey area at the same time, only fields with their central longitudes within $4\arcdeg$ of the center of the mosaic were included in the construction of each mosaic. Each VLA field was imaged to the 10% level of the primary beam, and field images were combined with the VLA primary beam as weighting function. The primary beam model for the VLA is the same as used in the AIPS task LTESS. Figure 9 shows the sampling in the uv plane and the synthesized beam for a representative field near the center of the survey, composed of snapshots taken at three different hour angles. A Gaussian weighting function in the uv plane was applied to obtain a $60\arcsec$ (FWHM) synthesized beam throughout the survey area. The strongest sidelobes of the synthesized beam are spokes running radially outward from the main lobe in several position angles. The pattern of the spokes depends on the hour angles at which the field was observed. The pattern is different for each field, but fields that were observed within the same sequence of six fields (Section 2.1), have similar but slightly rotated patterns. The amplitude of the spokes in the dirty beam is typically 10% of the main lobe, but peaks up to 17% are usually present. The VGPS images must be deconvolved to remove artifacts resulting from these sidelobes. Emission in the Galactic plane usually fills the field of view. In this case, deconvolution is preferred after mosaicking the fields. Continuum images were constructed from visibility data averaged over channels without discernible line emission, but avoiding noisy channels near the edges of the frequency band. Deconvolution of the VGPS images was done with the MIRIAD program MOSMEM which uses the maximum entropy method described by Cornwell & Evans (1985) and Sault et al. (1996). 
Experiments showed that no significant improvement was made after about 20 iterations of the algorithm, even if no formal convergence could be reached. A maximum of 50 iterations was adopted in the deconvolution of the continuum mosaics. The criteria that define the convergence of the deconvolution, the entropy function and the $\chi^{2}$ criterion, include a summation over the entire image. Therefore, the result of the deconvolution depends somewhat on the area that was imaged in the construction of the mosaic. This causes small differences between neighboring mosaics in the area where they overlap, with rms amplitude at or below the level of the noise. Standard calibration alone results in images with a maximum dynamic range of $\sim$100, which is not sufficient to image bright continuum sources in the survey area without discernible image artifacts. The continuum mosaics were inspected for residual sidelobes around bright sources after the deconvolution step. Self-calibration was attempted for selected fields with strong artifacts, beginning with the field in which the source is located closest to the field center. A small mosaic, typically including all fields within $\sim 1\arcdeg$ of the source, was made and deconvolved. If a particular visit to the field could be identified as the prime origin of the artifacts, the visibility data from this visit were excluded from the initial imaging step. The deconvolution of this small mosaic produces a sky model of “clean components”, which is used for phase self-calibration on the central field only, after multiplication with the primary beam model of the VLA. The small mosaic is then made with the improved calibration solution for the central field. If any data were left out of the first imaging step, those data were included after the first round of self-calibration. If necessary, adjacent fields were subsequently self-calibrated.
The central field is usually self-calibrated a second time after self-calibration of the surrounding fields. After self-calibration, the final VLA continuum mosaic was constructed and regridded to Galactic coordinates. The results of the self-calibration are usually satisfactory. A dynamic range of $\sim$200 was obtained in the continuum images after self calibration on sources that mostly affect a single field. However, self-calibration does not provide a good solution if the source is too far from the field center. The brightest continuum sources in the VGPS, in particular W49 and W51, generate artifacts even at large distances from the field center. These artifacts remain in the continuum images, mostly affecting fields surrounding these sources. The VLA is not sensitive to emission on angular scales larger than $\sim 30^{\prime}$ because structures on these scales are resolved even by the shortest projected baselines. The missing continuum short-spacing information was provided by the continuum survey of Reich & Reich (1986) and Reich et al. (1990) with the 100 m Effelsberg telescope. The deconvolved VLA mosaic was combined with the Effelsberg image with the MIRIAD program IMMERGE. The interferometer and single-dish images are both Fourier transformed and added in the uv plane with baseline-dependent weights such that the sum represents the visibilities for all angular scales weighted by a single Gaussian weighting function defined by the $1^{\prime}$ synthesized beam. More details of this procedure were given by McClure-Griffiths et al. (2005). The single-dish data may be multiplied with a factor close to 1 to compensate for a potential difference in the flux calibration between the two surveys. This factor can be determined by comparing the Fourier transforms of the single-dish and interferometer images in a common annulus in the uv plane, taking into account the difference in resolution. 
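The Fourier-plane combination just described can be illustrated with a deliberately simplified 2-D sketch (a toy version of MIRIAD's IMMERGE: no flux-scale factor, unit pixel sizes, and the single-dish image is assumed to be already convolved with its Gaussian beam):

```python
import numpy as np

def feather(int_im, sd_im, sd_fwhm_pix):
    """Combine interferometer and single-dish images in the Fourier
    plane. The FT of the single-dish Gaussian beam serves as the
    low-spatial-frequency weight, so the sum covers all angular
    scales (simplified sketch, not the IMMERGE implementation)."""
    ky = np.fft.fftfreq(int_im.shape[0])[:, None]  # cycles per pixel
    kx = np.fft.fftfreq(int_im.shape[1])[None, :]
    k2 = kx ** 2 + ky ** 2
    sigma = sd_fwhm_pix / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    w = np.exp(-2.0 * np.pi ** 2 * sigma ** 2 * k2)  # FT of the SD beam
    combined = np.fft.fft2(sd_im) + np.fft.fft2(int_im) * (1.0 - w)
    return np.real(np.fft.ifft2(combined))
```

At the origin of the uv plane the weight is unity, so the total flux of the combined image comes entirely from the single-dish data, as intended.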
This flux scale factor was found to be equal to 1 within the uncertainties of a few percent, so it was set to 1 for all VGPS continuum mosaics. 2.4.2 H i spectral line Continuum emission was subtracted in the uv plane by subtracting the average of visibilities in the channels used to create the continuum images. The continuum-subtracted visibility data were resampled from their original $1.28\ \rm km\ s^{-1}$ channels to $0.824\ \rm km\ s^{-1}$ channels before imaging with the velocity resampling method included in MIRIAD. The resampled visibility data in an output channel are the weighted mean of visibilities of all input channels that overlap with the output channel. The weights are proportional to the amount of overlap between each input velocity channel with the output velocity channel. VGPS velocity channels were defined to match with velocity channels of the CGPS. The resulting velocity sampling was tested by comparing VGPS continuum absorption profiles with corresponding profiles from other sources, in particular the CGPS. These tests showed that the strategy of staggering velocity channels (Figure 4) and subsequent resampling of the data in MIRIAD was successful in recovering a fully-sampled H i line profile over the required wide velocity range. The resampled visibility data were imaged and mosaicked in the same way as the continuum images. Continuum sources in absorption appear as a negative imprint in the H i image in the form of the continuum source convolved with the dirty beam. A continuum source brighter than $T_{\rm b}\approx 20\ \rm K$ displays visible sidelobes in channels where the optical depth of the H i line is significantly more than 1. The VGPS area contains many brighter continuum sources which display significant sidelobes in absorption, even if the optical depth of the H i line is not that high. 
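The overlap-weighted channel resampling described above can be sketched as follows (the function name and argument layout are ours; real visibilities are complex, but the weighting logic is the same):

```python
import numpy as np

def resample_channels(spec, in_edges, out_edges):
    """Resample a spectrum defined on input velocity-channel edges onto
    output channels: each output value is the mean of the input
    channels weighted by their velocity overlap with that output
    channel, as in the MIRIAD resampling scheme described in the text."""
    out = np.empty(len(out_edges) - 1)
    for j in range(len(out_edges) - 1):
        lo, hi = out_edges[j], out_edges[j + 1]
        overlap = np.minimum(hi, in_edges[1:]) - np.maximum(lo, in_edges[:-1])
        w = np.clip(overlap, 0.0, None)  # zero weight for non-overlapping channels
        out[j] = np.sum(w * spec) / np.sum(w)
    return out
```

For example, an output channel straddling two equally overlapping input channels receives the simple mean of the two.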
The maximum entropy deconvolution would consider the sidelobes associated with absorbed continuum sources as structure in the H i emission and try to deconvolve these structures. Before deconvolution of the H i line emission, the sidelobes of bright continuum sources are removed using the clean (Högbom, 1974) algorithm with the MIRIAD program MOSSDI. A temporary mosaic with (projected) baselines longer than $0.3\ \rm k\lambda$ was made for this purpose. The exclusion of short baselines in this mosaic eliminates most H i emission and allows proper cleaning of (negative) absorbed continuum sources without interference from negative sidelobes associated with H i emission. The resulting clean component model for absorbed sources was subtracted from the H i image and restored with a Gaussian beam before deconvolution of the line emission. This procedure is effective in removing sidelobes associated with absorbed continuum sources, in particular the radial spokes that occur in the synthesized beam of VLA snapshots (Figure 9). The H i line emission was then deconvolved with the same maximum entropy algorithm as used for the continuum images. The deconvolved model of “clean components” of the H i emission was restored with a $1\arcmin$ (FWHM) circular Gaussian beam. The synthesized beam in the VGPS H i images is therefore independent of position and identical to the synthesized beam in the continuum images. The clean H i images were regridded to Galactic coordinates and combined with the H i zero spacing data set obtained with the GBT. The process of combining the single-dish survey with the interferometer survey allows rescaling of the single-dish data to match the flux scale of the interferometer. The accuracy with which this scale factor can be determined from the H i spectral line data was estimated at 5% to 10%. Within the uncertainties, no scaling of the GBT flux scale was required. 
However, after a detailed comparison of H i images from the VGPS and the CGPS (see Section 3.1), it was decided to rescale the GBT data by a factor of 0.973 to obtain a single consistent calibration for the CGPS and the VGPS. This correction factor is well within the uncertainty of the absolute flux calibration. A consistent flux scale for these surveys facilitates the comparison of properties of H i emission in the inner and the outer Galaxy. 2.4.3 Continuum-included H i cubes H i cubes were also constructed with both line and continuum emission included, for use in 21-cm absorption line studies. Instead of subtracting the continuum (as was done for the continuum-subtracted line cubes described in the previous section), the visibility data were imaged directly with the same velocity channels as the spectral line cubes. The weighting function in the uv plane was superuniform weighting, which optimizes the angular resolution of the images. Maximum angular resolution is more important for continuum absorption experiments than brightness sensitivity. A “dirty” continuum-included mosaic cube was created and deconvolved using the maximum entropy method; the deconvolved model of “clean components” was restored with a $50^{\prime\prime}$ circular Gaussian beam (slightly smaller than the $60^{\prime\prime}$ Gaussian beam used to restore the line cubes and continuum maps). The deconvolved cubes were then regridded to Galactic coordinates. Note that no zero-spacing data were added to these continuum-included H i cubes, since no zero-spacing data set was available that included both line and continuum emission. 3 Comparing the VGPS with CGPS and SGPS The three individual Galactic plane surveys will be combined into a uniform high-fidelity multi-wavelength dataset which covers the Galactic plane from $l=-107\arcdeg$ to $l=+190\arcdeg$ at least within the latitude range $|b|<1\arcdeg$ with arcminute angular resolution.
This combined dataset is called the International Galactic Plane Survey (IGPS). It will cover 90% of the area of the Galactic disk. With the completion of the VGPS, different IGPS components overlap in narrow portions of the Galactic plane. This allows for the first time a comparison of these large mosaicking surveys of the H i sky. The success of an integrated IGPS dataset depends on the extent to which the different surveys can produce consistent images when observing the same part of the sky. This is a significant challenge given the different telescopes and imaging techniques used, in particular for the H i line where extended bright emission is present throughout the field of view. Comparing the VGPS with the CGPS is of particular interest because the image deconvolution step does not exist in the CGPS. 3.1 VGPS and CGPS The CGPS and the VGPS are each a combination of two separate surveys. Low spatial frequencies are sampled by a single-dish survey, and high spatial frequencies are sampled by an interferometer survey. The single-dish survey dominates the signal at spatial frequencies smaller than those sampled by the shortest projected baselines of the interferometer. Most of the power in Galactic H i emission occurs on these large angular scales. The low-resolution survey for the CGPS was done with the DRAO 26 meter telescope. The low-resolution survey for the VGPS was done with the 100 m GBT. These telescopes have different resolution and stray radiation characteristics. Stray radiation correction was done for the low-resolution DRAO survey (Higgs & Tapping, 2000), but not for the GBT survey, as the sidelobes of the GBT are much lower. The shortest (unprojected) baseline of the DRAO synthesis telescope is $60\lambda$, compared to $170\lambda$ for the VLA. The largest angular scale sampled by the interferometer is therefore approximately 3 times larger in the CGPS. On average, the VLA D-array also samples smaller angular scales than the DRAO ST. 
The longest baselines of the VLA D-array are $\sim 50\%$ longer than the longest baseline of the DRAO ST. At the declination of the area of overlap ($\delta\lesssim 30\arcdeg$) the array configurations and difference in latitude between DRAO and the VLA further increase the difference between the interferometers in projected baselines. The most significant difference between the VGPS and the CGPS is the density of the sampling of the aperture plane by the interferometer. A full synthesis with the DRAO ST samples the aperture plane completely between an inner and an outer boundary (Landecker et al., 2000). This complete sampling allows direct imaging of the visibility data without deconvolution. The sampling of the aperture plane by VGPS snapshots is less complete. The necessary deconvolution of the VGPS images is a priori the most likely source of differences between CGPS and VGPS H i images. In order to compare the CGPS and VGPS spectral line data, a special VGPS spectral line cube was made with a resolution that matches the $122\arcsec\times 58\arcsec$ synthesized beam of the DRAO ST in the overlap area. The major axis of the DRAO synthesized beam changes by 17% across the area where the VGPS and CGPS overlap. Assuming an average beam shape for the comparison leads to a maximum mismatch of the beam size of less than 9% in the declination direction in the extreme corners of the area where VGPS and CGPS overlap. The spectral resolution of the CGPS is $1.32\ \rm km\ s^{-1}$ (FWHM), marginally smaller than the $1.56\ \rm km\ s^{-1}$ (FWHM) resolution of the VGPS. The difference is less than half a frequency channel. This small difference in velocity resolution is not expected to produce a noticeable difference because of the modest dynamic range of the spectral line data. Figure 10 compares H i line profiles from the CGPS and the VGPS. 
The CGPS and VGPS H i emission profiles agree very well, but comparing H i emission profiles is not a sensitive test of the consistency for small-scale structure, because the signal is dominated by the single-dish surveys. Line profiles affected by continuum absorption are an exception, because the depth of the absorption profile is very sensitive to resolution. Figure 10 therefore focuses on continuum absorption profiles. Judging the significance of the differences between the two datasets requires knowledge of the noise in the images at the resolution of $2^{\prime}\times 1^{\prime}$. The rms noise in channels free of line emission in the overlap region, avoiding noisy edges of the CGPS mosaic, is 1.8 K for the CGPS, and 1.2 K for the VGPS. These noise amplitudes are consistent with the theoretical noise levels in DRAO spectral line images at the declination of the overlap region (Landecker et al., 2000), and for VGPS data convolved to the $2^{\prime}\times 1^{\prime}$ resolution of the DRAO images. The theoretical off-line noise in the difference profile is therefore 2.2 K (rms). In channels where the H i brightness is high ($\sim$100 K), the noise is approximately twice the off-line noise (see Section 2.2). Differences between the H i profiles should therefore be compared to an expected rms noise amplitude of 4.4 K. The measured rms amplitude of the difference image is $2.2\ \rm K$ per velocity channel in the absence of line emission (as expected), and $5.8\ \rm K$ in channels where the H i emission is brightest. The rms amplitude of the difference image over the overlap area is $\sim 30\%$ higher than the theoretical noise amplitude of the difference image. Locally, the differences in Figure 10 are usually less than 10 K, but occasional differences as large as $\pm 20\ \rm K$ (4 to 5$\sigma$) exist. Figure 11 shows the difference VGPS$-$CGPS in a single velocity channel at $+9\ \rm km\ s^{-1}$.
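The 2.2 K figure is simply the two independent off-line noise levels added in quadrature; a minimal sketch of the arithmetic (the function name is ours):

```python
import math

def add_in_quadrature(*sigmas):
    """Combine independent noise contributions: sigma_tot = sqrt(sum of sigma_i^2)."""
    return math.sqrt(sum(s * s for s in sigmas))

# Off-line rms noise in the overlap region (values from the text, in K)
sigma_cgps, sigma_vgps = 1.8, 1.2
sigma_diff = add_in_quadrature(sigma_cgps, sigma_vgps)
print(f"off-line noise in the difference profile: {sigma_diff:.1f} K")

# In channels with bright H i (~100 K) the noise is roughly doubled (Section 2.2),
# close to the ~4.4 K expectation quoted in the text (which doubles the rounded 2.2 K).
print(f"bright-channel expectation: {2.0 * sigma_diff:.1f} K")
```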
Inspection of the difference images shows that residuals on small angular scales usually coincide with emission features in the original images. Some ripple-like artifacts in the highest rows of VGPS fields ($b=\pm 2\arcdeg$) are the result of imperfect convergence of the maximum entropy deconvolution (e.g., Figure 11 near $l=66\fdg 25$, $b=-2\arcdeg$). Differences associated with mainly small-scale H i emission are also believed to originate from the deconvolution. In rare cases these differences appear as a measurable velocity difference between features seen in the CGPS and in the VGPS. However, an analysis of narrow continuum absorption features (Figure 10) showed that in general the velocities in the surveys agree very well. The most conspicuous structure in the difference image is a wave pattern with a wavelength of order $1\fdg 5$ which does not resemble any structure in the H i emission. The pattern is seen with the same phase and orientation in channels where the average brightness of the H i line is high, but its amplitude varies with the brightness of the H i line. The pattern may be isolated by filtering in the Fourier domain. Subtracting the wave pattern from the difference image decreases the rms residuals to $5.2\ \rm K$. This is more than the expected value of 4.4 K. If the remaining difference is entirely the result of the image deconvolution, then the inconsistencies because of the image deconvolution are less than 3 K (rms). The crests of the wave pattern are aligned with the field centers of the CGPS, or possibly with the direction of right ascension, and the wavelength corresponds within the errors to the separation of the CGPS fields. The wavelength of the pattern is more than three times the field separation of the VGPS. This excludes the VLA data as a possible origin.
Also, the zero-spacing datasets of the CGPS and VGPS were taken with primary beams of $30^{\prime}$ and $9^{\prime}$, respectively, excluding a possible problem with the primary beam of these telescopes as the origin. Both zero-spacing datasets were observed scanning across the Galactic plane at constant longitude. It is not likely that this observing pattern would produce a wave pattern with the orientation seen in Figure 11. This leaves the DRAO ST as the most likely origin of the wave pattern. The sign of the fluctuations in Figure 11 suggests that the CGPS intensity is systematically higher at larger distances from the field center in the declination direction. This could indicate that the primary beam of the DRAO ST is slightly elliptical, with a larger size in the declination direction than predicted by the axially symmetric model adopted in the mosaicking (Taylor et al., 2003). This hypothesis was confirmed by a reanalysis of point-source fluxes measured for the CGPS field registration (Taylor et al., 2003), provided by S. J. Gibson. In the field registration process, fluxes of compact continuum sources are measured in individual CGPS fields and compared with fluxes listed in the NVSS. This process has accumulated $\sim 17,000$ flux measurements over the years of observations for the CGPS. This dataset was analyzed for systematic changes in the ratio of CGPS flux to NVSS flux as a function of position angle in the field. It was found that the flux ratio CGPS/NVSS was on average a few percent higher along the declination axis. This systematic effect can be removed by applying a slightly elliptical primary beam model for the DRAO ST, with axial ratio $0.975\pm 0.005$. This value is within the uncertainties of a previous determination of the primary beam shape of individual DRAO antennas (Dougherty, 1995), but the comparison with the VGPS shows that it has a significant effect in wide-field imaging of the Galactic H i line.
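To see how a percent-level axial ratio translates into a few-percent flux effect away from the field center, here is a hypothetical sketch using a simple Gaussian primary-beam model. The actual DRAO beam model used in the mosaicking is more detailed; the ~2° FWHM, the function name, and the evaluation points are illustrative assumptions only:

```python
import math

def primary_beam_gain(d_ra_deg, d_dec_deg, fwhm_deg, axial_ratio=0.975):
    """Toy Gaussian primary-beam model, elongated in declination by 1/axial_ratio.

    Illustrative only: shows how a small ellipticity changes the gain
    at a given offset from the field center.
    """
    sigma = fwhm_deg / (2.0 * math.sqrt(2.0 * math.log(2.0)))
    sigma_ra, sigma_dec = sigma, sigma / axial_ratio  # slightly wider in declination
    return math.exp(-0.5 * ((d_ra_deg / sigma_ra) ** 2 + (d_dec_deg / sigma_dec) ** 2))

# Compare the gain 1 deg from the field center along each axis,
# for an assumed ~2 deg FWHM beam:
g_ra = primary_beam_gain(1.0, 0.0, fwhm_deg=2.0)
g_dec = primary_beam_gain(0.0, 1.0, fwhm_deg=2.0)
print(f"gain ratio dec/ra at 1 deg offset: {g_dec / g_ra:.3f}")
```

At the half-power point this toy model gives a gain a few percent higher along the declination axis, the same order as the CGPS/NVSS flux-ratio excess described above.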
This comparison shows that despite the critical deconvolution step in the VGPS image processing, the VGPS images are in general very consistent with CGPS images, which did not require image deconvolution. We estimated that inconsistencies in the convergence of the VGPS image deconvolution result in differences less than $\sim$3 K (rms). We have found evidence for a small ellipticity in the primary beam of the DRAO ST, which creates a measurable effect in wide-field images of bright Galactic H i emission. It should be noted that the overlap area between the CGPS and the VGPS is in a part of the sky which is most favorable for the VGPS.

3.2 VGPS and SGPS

The Southern Galactic Plane Survey (SGPS) covers a large portion of the Galactic plane in the southern hemisphere between $l=-107\arcdeg$ and $l=+20\arcdeg$ and $|b|<1\arcdeg$. The interferometer used is the ATNF Australia Telescope Compact Array (ATCA). Zero-spacing data are provided by the multi-beam $64\ \rm m$ Parkes telescope. The ATCA is mainly an east-west array, but it has a north-south arm which was used for the northernmost part of the SGPS (phase II), including the overlap region with the VGPS, to obtain more complete sampling in the uv plane. The ATCA elements are similar in size to the VLA antennas, so individual SGPS fields are similar in size to VGPS fields. Mosaicking speed is improved at the expense of continuity of antenna tracks in the uv plane, in particular at longer baselines. The density of sampling of the uv plane by the ATCA decreases faster with baseline length than for the VLA. The longer baselines of the ATCA are included in SGPS datasets used to study continuum absorption, but not in the H i spectral line cubes. This limits the resolution of the SGPS H i spectral line images to $\sim 2^{\prime}$ (FWHM). The SGPS and VGPS use the same software for mosaicking and image deconvolution, but there are some differences.
The SGPS data acquisition and image processing was described by McClure-Griffiths et al. (2001) and McClure-Griffiths et al. (2005). The area of overlap between the VGPS and SGPS is smaller than the overlap between VGPS and CGPS. Also, SGPS images have $3\arcmin\times 2\arcmin$ resolution in the overlap area, compared to $2\arcmin\times 1\arcmin$ for the CGPS. The shape of the synthesized beam of the SGPS changes rapidly across the overlap area. This is relevant for comparing absorption profiles, because continuum sources and their associated absorption are not cleaned separately in the SGPS. Absorbed continuum sources are convolved with the local dirty beam which may have a different shape than the global restoring beam adopted for the line emission. The comparison between SGPS and VGPS images presented here is therefore less detailed. However, this comparison is useful because it is done in the part of the IGPS survey area which is most difficult to observe from the point of view of both the VGPS and the SGPS. It is therefore a comparison of images made under the least favorable conditions. Figure 12 compares SGPS and VGPS line profiles with the same spatial resolution at two locations. The upper panel shows a representative H i emission line profile. The correspondence between the VGPS and SGPS profiles is generally as good as between the CGPS and VGPS. The bottom panel in Figure 12 shows a continuum absorption profile. Here, the quality of the correspondence between the profiles depends critically on our ability to match the beam of the SGPS at the location of the source. For comparison, the full-resolution VGPS profile is also shown. It is found that the VGPS and SGPS H i images (not shown) are consistent to a similar degree as the consistency between the CGPS and VGPS. The comparison between VGPS and SGPS is somewhat limited because of the smaller area of overlap and the varying beam of the SGPS data. 
4 Panoramic images of the Galactic plane

The high-resolution images of the VGPS open the opportunity for a variety of research on the interstellar medium in the first Galactic quadrant. The images also reveal objects which would not be noticed in previous surveys with lower resolution. Stil et al. (2004) reported the discovery of a small H i shell in the inner Galaxy. The upper limit to any thermal emission from this shell, derived from the VGPS continuum image, imposed strong constraints on the interpretation of this shell as a stellar wind bubble. Lockman & Stil (2003) and Stil et al. (2006) reported the discovery of small H i clouds with large forbidden velocities in the VGPS data. These clouds, with masses of tens of solar masses and radii of a few parsecs, may be the disk analogs of similar clouds seen at large distances from the Galactic plane (Lockman, 2002). Stil et al. (2006) demonstrate the importance of arcminute-resolution images in the discovery of these clouds. The VGPS data are presented in Figure 13 as panoramic images of the Galactic plane in 21-cm continuum and H i line emission. The continuum images display a strong increase in complexity of structure as Galactic longitude decreases, reflecting the higher rate of star formation in the inner Galaxy. At lower longitudes, the Galactic plane is traced by a diffuse layer of emission approximately $1\arcdeg$ thick centered on $b=0\arcdeg$. This layer may be the thin disk identified at 408 MHz by Beuermann et al. (1985). A large number of shells, filaments, and center-filled diffuse emission regions are seen concentrated toward the Galactic plane, with occasional objects located a degree or more from $b=0\arcdeg$. Many compact continuum sources are also present. At longitudes $l\gtrsim 55\arcdeg$, the Galactic plane is much less discernible in the continuum images.
Some small diffuse emission regions and numerous point sources are seen on top of an extended otherwise featureless background of continuum emission. The structures in the VGPS continuum images, with angular scales ranging from several degrees down to the resolution limit of the VGPS, represent processes that operate on a wide range in physical size, time scale, and energy. Many unresolved sources are distant radio galaxies, but some compact Galactic H ii regions and supernova remnants may also appear unresolved in the VGPS. The brightest continuum sources in the VGPS are W49 ($l=43\arcdeg$) and W51 ($l=49\arcdeg$). The VGPS continuum images contain artifacts within a distance $\sim 1\arcdeg$ around these sources. The dynamic range is limited because self-calibration for a source far from the field center is not possible (see Section 2.4.1). This affects mainly fields surrounding the bright source. Sidelobes of a bright continuum source outside the survey area are present at $l\approx 35\fdg 5$, $b\approx-1\fdg 3$. The bottom panels in Figure 13 show H i emission at $+3.5\ \rm km\ s^{-1}$. H i at this velocity arises predominantly near the solar circle (Galactocentric radius 8.5 kpc), and thus both locally and from gas 6.6 to 16 kpc distant, depending on longitude. The VGPS covers only a small fraction of the scale height of local gas, but at the far side the latitude range $b=\pm 1\fdg 3$ spans more than 700 pc (almost twice the FWHM thickness of the H i layer) perpendicular to the Galactic plane at $l=20\arcdeg$. Any tendency of H i emission to be concentrated toward the Galactic plane is therefore most likely the result of emission on the far side. At longitudes $l\lesssim 30\arcdeg$, H i emission in Figure 13 is brighter around $b=0\arcdeg$ than at $b=\pm 1\fdg 3$. Here we see distant H i at the far side on the solar circle through local H i. 
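The distances quoted above follow from standard kinematic-distance geometry: gas on the solar circle has a near distance of essentially zero (local gas) and a far distance of $2R_{0}\cos l$. A sketch of the arithmetic (the function name is ours):

```python
import math

R0 = 8.5  # kpc, Galactocentric radius of the Sun, as adopted in the text

def far_solar_circle_distance(l_deg):
    """Distance to gas on the solar circle on the far side: d = 2 * R0 * cos(l)."""
    return 2.0 * R0 * math.cos(math.radians(l_deg))

for l in (67.0, 20.0):  # the longitude limits of the VGPS
    d = far_solar_circle_distance(l)
    # Linear extent of the VGPS latitude coverage |b| < 1.3 deg at that distance
    z = d * 1000.0 * 2.0 * math.radians(1.3)  # pc
    print(f"l = {l:4.1f} deg: far-side distance {d:4.1f} kpc, "
          f"latitude range spans {z:3.0f} pc")
```

At $l=20\arcdeg$ this reproduces the far-side distance of ~16 kpc and the >700 pc vertical extent quoted in the text; at $l=67\arcdeg$ it gives the ~6.6 kpc lower bound.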
This distinction disappears for $l\gtrsim 30\arcdeg$ because the FWHM thickness of the H i layer on the far side becomes comparable to or larger than the latitude range covered by the VGPS. The local H i is widely distributed in latitude, but it is far from featureless. Most of the structure in the local H i is difficult to separate from structure in emission from the far side, but there are exceptions. A band of decreased H i brightness temperature runs approximately perpendicular to the Galactic plane between $l=20\arcdeg$ and $l=22\arcdeg$. This dark band is part of a large cloud of cold H i, discovered by Heeschen (1955) and mapped by Riegel & Crutcher (1972), seen in “self-absorption” against brighter background H i emission. H i self-absorption features may represent a transition or a boundary between molecular and atomic hydrogen. The high-resolution VGPS images reveal delicate substructure in this large cloud of cold hydrogen. Many H i self-absorption features are seen in VGPS images at positive velocities on a wide range of angular scales. Measurements of the opacity of continuum absorption provide an estimate of the spin temperature of H i and therefore of the thermal structure of the gas. Continuum absorption profiles can also help resolve the near-far distance ambiguity of kinematic distances for sources of continuum emission in the inner Galaxy. Many resolved and unresolved continuum sources are seen in absorption in the H i images. These appear as a negative imprint of the continuum image onto the H i images in Figure 13. However, this imprint is not a perfect copy of the continuum image. A good example is the area between $l=18\arcdeg$ and $l=20\arcdeg$, where some continuum sources are seen in absorption, whereas other sources of similar brightness are not. This clearly illustrates the potential of the VGPS data to study the location and the properties of cold H i clouds in the Galaxy.
Any structural resemblance between H i emission and continuum emission would be unusual. Continuum emission originates mainly from ionized gas or relativistic electrons. The 21-cm line of H i traces neutral atomic gas. Apart from the negative imprint of the continuum image because of continuum absorption, there is almost no similarity between the H i and continuum emission in Figure 13. A peculiar exception is a thin filament of continuum emission at $l=38\fdg 35$, $0\arcdeg\leq b\leq 0\fdg 7$. An H i filament with the same thickness and center line is observed in the VGPS. It is visible in Figure 13, but the velocity of the H i filament is actually near $0\ \rm km\ s^{-1}$, where it is much more conspicuous against the other emission.

5 Conclusions

H i spectral line and 21-cm continuum images of the Galactic plane between $l=18\arcdeg$ and $l=67\arcdeg$ from the VLA Galactic Plane Survey (VGPS) are presented. The VGPS data will be made available on the World Wide Web through the Canadian Astronomy Data Centre (CADC). The calibration of the VGPS images was made consistent with the calibration of the NVSS (Condon et al., 1998). Initially, fluxes of compact sources in VGPS images were found to be systematically below the fluxes listed in the NVSS by up to 30%. A correction to the flux scale of the VGPS was developed, based on the noise amplitude in the visibility data. This procedure can be applied to any VGPS field, including those without sufficient point sources to compare the flux directly with the NVSS, and those fields which are filled with complex extended emission. After this correction, the fluxes of compact continuum sources in the VGPS were found to be consistent with NVSS fluxes to within 5%.
VGPS H i images were compared with those from the CGPS and SGPS in regions of overlap and show good agreement between the three surveys, although the rms amplitude of the difference image (VGPS$-$CGPS) is approximately 30% higher than the fluctuations expected from the theoretical noise levels alone. Small systematic differences between the VGPS and CGPS originate from imperfections of the image deconvolution applied in the VGPS image processing, and from a wave pattern which resembles the effect of ellipticity in the primary beam of the DRAO ST.

Acknowledgements

The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. The VGPS is supported by a grant to A.R.T. from the Natural Sciences and Engineering Council of Canada. This research was supported in part by the National Science Foundation through grants AST 97-32695 and AST 03-07603 to the University of Minnesota. JMS thanks Richard Gooch for adding code to the KARMA software which reads miriad uv datasets used in our automated flagging program, and Steven J. Gibson for providing the CGPS field registration data. The authors thank the anonymous referee for useful comments on the manuscript.

References

Beuermann, K., Kanbach, G., & Berkhuijsen, E. M. 1985, A&A, 153, 17
Condon, J. J., Cotton, W. D., Greisen, E. W., Yin, Q. F., Perley, R. A., Taylor, G. B., & Broderick, J. J. 1998, AJ, 115, 1693
Cornwell, T. J., & Evans, K. F. 1985, A&A, 143, 77
Dickey, J. M., & Lockman, F. J. 1990, ARA&A, 28, 215
Dougherty, S. M. 1995, DRAO technical report, October 1995
Heeschen, D. S. 1955, ApJ, 121, 569
Higgs, L. A., & Tapping, K. F. 2000, AJ, 120, 2471
Högbom, J. A. 1974, A&AS, 15, 417
Landecker, T. L., Dewdney, P. E., Burgess, T. A., Gray, A. D., Higgs, L. A., Hoffmann, A. P., Hovey, G. J., Karpa, D. R., Lacey, J. D., Prowse, N., Purton, C. R., Roger, R. S., Willis, A. G., Wyslouzil, W., Routledge, D., & Vaneldik, J. F. 2000, A&AS, 145, 509
Lockman, F. J. 2002, ApJ, 580, L47
Lockman, F. J., & Stil, J. M. 2004, in ASP Conf. Ser. 317, Milky Way Surveys: The Structure and Evolution of Our Galaxy, ed. D. Clemens, R. Y. Shah, & T. Brainerd (San Francisco: ASP), 20
McClure-Griffiths, N. M., Green, A. J., Dickey, J. M., Gaensler, B. M., Haynes, R. F., & Wieringa, M. H. 2001, ApJ, 551, 394
McClure-Griffiths, N. M., Dickey, J. M., Gaensler, B. M., Green, A. J., Haverkorn, M., & Strasser, S. 2005, ApJS, 158, 178
Reich, P., & Reich, W. 1986, A&AS, 63, 205
Reich, W., Reich, P., & Fürst, E. 1990, A&AS, 83, 539
Riegel, K. W., & Crutcher, R. M. 1972, A&A, 18, 55
Sault, R. J., Staveley-Smith, L., & Brouw, W. N. 1996, A&AS, 120, 375
Stil, J. M., Taylor, A. R., Martin, P. G., Rothwell, T. A., Dickey, J. M., & McClure-Griffiths, N. M. 2004, ApJ, 608, 297
Stil, J. M., Lockman, F. J., Taylor, A. R., Dickey, J. M., Kavars, D. W., Martin, P. G., Rothwell, T. A., Boothroyd, A. I., & McClure-Griffiths, N. M. 2006, ApJ, 637, 366
Taylor, A. R., Stil, J. M., Dickey, J. M., McClure-Griffiths, N. M., Martin, P. G., Rothwell, T., & Lockman, F. J. 2002, in ASP Conf. Ser. 276, Seeing Through The Dust: The Detection Of H i And The Exploration Of The ISM In Galaxies, ed. A. R. Taylor, T. L. Landecker, & A. G. Willis (San Francisco: ASP), 68
Taylor, A. R., Gibson, S. J., Peracaula, M., Martin, P. G., Landecker, T. L., Brunt, C. M., Dewdney, P. E., Dougherty, S. M., Gray, A. D., Higgs, L. A., Kerton, C. R., Knee, L. B. G., Kothes, R., Purton, C. R., Uyaniker, B., Wallace, B. J., Willis, A. G., & Durand, D. 2003, AJ, 125, 3145
Taylor, G. B., Ulvestad, J. S., & Perley, R. A. 2003, The Very Large Array Observational Status Summary, on-line document, 26 May 2003, http://www.vla.nrao.edu/astro/guides/vlas/current/vlas.html
Williams, D. R. W. 1973, A&AS, 8, 505
IRB-TH-2/03

Reduction method for dimensionally regulated one-loop N-point Feynman integrals

G. Duplančić (e-mail: [email protected]) and B. Nižić (e-mail: [email protected])

Theoretical Physics Division, Rudjer Bošković Institute, P.O. Box 180, HR-10002 Zagreb, Croatia

Abstract

We present a systematic method for reducing an arbitrary one-loop $N$-point massless Feynman integral with generic 4-dimensional momenta to a set comprised of eight fundamental scalar integrals: six box integrals in $D=6$, a triangle integral in $D=4$, and a general two-point integral in $D$ space-time dimensions. All the divergences present in the original integral are contained in the general two-point integral and the associated coefficients. The problem of the vanishing of the kinematic determinants has been solved in an elegant and transparent manner. Being derived with no restrictions on the external momenta, the method is completely general and applicable for arbitrary kinematics. In particular, it applies to integrals in which the set of external momenta contains subsets comprised of two or more collinear momenta, which are unavoidable when calculating one-loop contributions to the hard-scattering amplitude for exclusive hadronic processes at large momentum transfer in PQCD. The iterative structure makes it easy to implement the formalism in an algebraic computer program.

1 Introduction

Scattering processes have played a crucial role in establishing the fundamental interactions of nature. They represent the most important source of information on short-distance physics. With increasing energy, multiparticle events are becoming more and more dominant. Thus, in testing various aspects of QCD, high-energy scattering processes, both exclusive and inclusive, in which the total number of particles (partons) in the initial and final states is $N\geq 5$, have recently become increasingly important.
Owing to the well-known fact that the LO predictions in perturbative QCD (PQCD) do not have much predictive power, the inclusion of higher-order corrections is essential for many reasons. In general, higher-order corrections have a stabilizing effect, reducing the dependence of the LO predictions on the renormalization and factorization scales and the renormalization scheme. Therefore, to achieve a complete confrontation between theoretical predictions and experimental data, it is very important to know the size of radiative corrections to the LO predictions. Obtaining radiative corrections requires the evaluation of one-loop integrals arising from the Feynman diagram approach. With the increasing complexity of the process under consideration, the calculation of radiative corrections becomes more and more tedious. Therefore, it is extremely useful to have an algorithmic procedure for these calculations, which can be implemented in a computer program and leads to results which can be easily and safely evaluated numerically. The case of Feynman integrals with massless internal lines is of special interest, because one often deals either with truly massless particles (gluons) or with particles whose masses can be neglected in high-energy processes (quarks). Owing to the fact that these integrals contain IR divergences (both soft and collinear), they need to be evaluated in an arbitrary number of space-time dimensions. As is well known, three main difficulties arise in calculating Feynman diagrams: the tensor decomposition of integrals, the reduction of scalar integrals to several basic scalar integrals, and the evaluation of the set of basic scalar integrals. Considerable progress has recently been made in developing efficient approaches for calculating one-loop Feynman integrals with a large number ($N\geq 5$) of external lines [1, 2, 3, 4, 5, 6, 7, 8, 9, 10].
Various approaches have been proposed for reducing the dimensionally regulated ($N\geq 5$)-point tensor integrals to a linear combination of $N$- and lower-point scalar integrals multiplied by tensor structures made from the metric tensor $g^{\mu\nu}$ and the external momenta [1, 2, 5, 7, 10]. It has also been shown that the general $(N>5)$-point scalar one-loop integral can recursively be represented as a linear combination of $(N-1)$-point integrals, provided the external momenta are kept in four dimensions [3, 4, 5, 6, 7, 8]. Consequently, all scalar integrals occurring in the computation of an arbitrary one-loop $(N\geq 5)$-point integral can be reduced to a sum over a set of basic scalar box $(N=4)$ integrals with rational coefficients depending on the external momenta and the dimensionality of space-time. Despite the considerable progress, the developed methods still cannot be applied to all cases of practical interest. The problem is related to the vanishing of various relevant kinematic determinants. As far as the calculation of one-loop $(N>5)$-point massless integrals is concerned, the most complete and systematic method is presented in [7]. It does not, however, apply to all cases of practical interest. Namely, being obtained for non-exceptional external momenta, it cannot, for example, be applied to integrals in which the set of external momenta contains subsets comprised of two or three collinear on-shell momenta. Integrals of this type arise when performing the leading-twist NLO analysis of hadronic exclusive processes at large momentum transfer in PQCD. With no restrictions regarding the external kinematics, in this paper we formulate an efficient, systematic and completely general method for reducing an arbitrary one-loop $N$-point massless integral to a set of basic integrals. Although the method is presented for the massless case, the generalization to the massive case is straightforward.
The main difference between the massive and massless cases manifests itself in the basic set of integrals, which in the former case is far more complex. Among the one-loop Feynman integrals there exist both massive and massless integrals for which the existing reduction methods break down. The massless integrals belonging to this category are of more practical interest at the moment, so in this paper we concentrate on the massless case. The paper is organized as follows. Section 2 is devoted to introducing notation and to some preliminary considerations. In Sec. 3, for the sake of completeness, we briefly review a tensor decomposition method for $N$-point tensor integrals which was originally obtained in Ref. [2]. In Sec. 4 we present a procedure for reducing one-loop $N$-point massless scalar integrals with generic 4-dimensional external momenta to a fundamental set of integrals. Since the method is closely related to the one given in [5, 6], similarities and differences between the two are pointed out. Being derived with no restrictions on the external momenta, the method is completely general and applicable for arbitrary kinematics. Section 5 contains considerations regarding the fundamental set of integrals, which is comprised of eight integrals. Section 6 is devoted to some concluding remarks. In Appendix A we give explicit expressions for the relevant basic massless box integrals in $D=6$ space-time dimensions. These integrals constitute a subset of the fundamental set of scalar integrals. As an illustration of the tensor decomposition and scalar reduction methods, in Appendix B we evaluate a one-loop 6-point Feynman diagram with massless internal lines, contributing to the NLO hard-scattering amplitude for the $\gamma\,\gamma\rightarrow\pi^{+}\,\pi^{-}$ exclusive reaction at large momentum transfer in PQCD.
2 Definitions and general properties

In order to obtain one-loop radiative corrections to physical processes in a massless gauge theory, integrals of the following type are required:
$$I^{N}_{\mu_{1}\cdots\mu_{P}}(D;\{p_{i}\})\equiv(\mu^{2})^{2-D/2}\int\frac{{\rm d}^{D}l}{(2\pi)^{D}}\,\frac{l_{\mu_{1}}\cdots l_{\mu_{P}}}{A_{1}A_{2}\cdots A_{N}}\,.$$ (1)
This is a rank $P$ tensor one-loop $N$-point Feynman integral with massless internal lines in $D$-dimensional space-time, where $p_{i}$ $(i=1,2,\ldots,N)$ are the external momenta, $l$ is the loop momentum, and $\mu$ is the usual dimensional regularization scale. The Feynman diagram with $N$ external lines, which corresponds to the above integral, is shown in Fig. 1. For the momentum assignments as shown, i.e. with all external momenta taken to be incoming, the massless propagators have the form
$$A_{i}\equiv(l+r_{i})^{2}+{\rm i}\epsilon\,,\qquad i=1,\cdots,N\,,$$ (2)
where the momenta $r_{i}$ are given by $r_{i}=p_{i}+r_{i-1}$ for $i$ from 1 to $N$, and $r_{0}=r_{N}$. The quantity ${\rm i}\epsilon$ ($\epsilon>0$) represents an infinitesimal imaginary part; it ensures causality and, after the integration, determines the correct sign of the imaginary part of the logarithms and dilogarithms. It is customary to choose the loop momentum in such a way that one of the momenta $r_{i}$ vanishes. However, for general considerations, such a choice is not convenient, since by making it one loses the useful symmetry of the integral with respect to the indices $1,\cdots,N$. The corresponding scalar integral is
$$I^{N}_{0}(D;\{p_{i}\})\equiv(\mu^{2})^{2-D/2}\int\frac{{\rm d}^{D}l}{(2\pi)^{D}}\,\frac{1}{A_{1}A_{2}\cdots A_{N}}\,.$$ (3)
If $P+D-2N\geq 0$, the integral (1) is UV divergent. In addition to the UV divergence, the integral can contain IR divergences. There are two types of IR divergence: collinear and soft.
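The cyclic momentum routing defined by Eq. (2) is easy to check numerically. The following hypothetical sketch (all names and the toy momenta are ours, purely illustrative) builds the momenta $r_{i}$ from incoming external momenta and verifies that $r_{N}=r_{0}$ when momentum is conserved:

```python
import numpy as np

def minkowski_sq(q):
    """q^2 with metric signature (+,-,-,-)."""
    return q[0] ** 2 - np.dot(q[1:], q[1:])

def region_momenta(p_list, r0):
    """Build r_i = p_i + r_{i-1} from incoming external momenta p_i
    and an arbitrary choice of r_0 (the loop-momentum routing)."""
    r = [np.asarray(r0, dtype=float)]
    for p in p_list:
        r.append(r[-1] + np.asarray(p, dtype=float))
    return r[1:]  # r_1 ... r_N

# Toy 4-point example: massless incoming momenta that sum to zero
p = [np.array([1.0, 0.0, 0.0, 1.0]), np.array([1.0, 0.0, 0.0, -1.0]),
     np.array([-1.0, 0.0, 1.0, 0.0]), np.array([-1.0, 0.0, -1.0, 0.0])]
r = region_momenta(p, r0=np.zeros(4))

# Momentum conservation (sum of p_i = 0) makes the routing cyclic: r_N = r_0
assert np.allclose(r[-1], np.zeros(4))

l = np.array([0.3, 0.1, -0.2, 0.4])  # arbitrary real loop momentum
denominators = [minkowski_sq(l + ri) for ri in r]  # the A_i in the limit eps -> 0
print(denominators)
```

The freedom in choosing $r_{0}$ corresponds to the shift of the loop momentum mentioned above; the set of denominators is unchanged up to that common shift.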
A Feynman diagram with massless particles contains a soft singularity if it contains an internal gluon line attached to two external quark lines which are on mass$-$shell. On the other hand, a diagram contains a collinear singularity if it contains an internal gluon line attached to an external quark line which is on mass$-$shell. Therefore, a diagram containing a soft singularity at the same time contains two collinear singularities, i.e. soft and collinear singularities overlap. When evaluating Feynman diagrams, one ought to regularize all divergences. Making use of the dimensional regularization method, one can simultaneously regularize UV and IR divergences, which makes dimensional regularization optimal for massless field theories. The tensor integral (1) is manifestly invariant under permutations of the propagators $A_{i}$, and is symmetric with respect to the Lorentz indices $\mu_{i}$. Lorentz covariance allows the tensor integral (1) to be decomposed as a linear combination of tensor structures built from the momenta $r_{i}$ and the metric tensor $g_{\mu\nu}$. 3 Decomposition of tensor integrals Various approaches have been proposed for reducing the dimensionally regulated $N-$point tensor integrals to a linear combination of $N-$ and lower$-$point scalar integrals multiplied by tensor structures made from the metric tensor $g^{\mu\nu}$ and external momenta. In this section we briefly review the derivation of the tensor reduction formula originally obtained in Ref. [2]. 
For the purpose of the following discussion, let us consider the tensor integral $$I^{N}_{\mu_{1}\cdots\mu_{P}}(D;\{{\nu}_{i}\})\equiv(\mu^{2})^{2-D/2}\int\frac{{\rm d}^{D}l}{(2\pi)^{D}}\frac{l_{\mu_{1}}\cdots l_{\mu_{P}}}{A_{1}^{{\nu}_{1}}A_{2}^{{\nu}_{2}}\cdots A_{N}^{{\nu}_{N}}}~{},$$ (4) and the corresponding scalar integral $$I_{0}^{N}(D;\{{\nu}_{i}\})\equiv(\mu^{2})^{2-D/2}\int\frac{{\rm d}^{D}l}{(2\pi)^{D}}\frac{1}{A_{1}^{{\nu}_{1}}A_{2}^{{\nu}_{2}}\cdots A_{N}^{{\nu}_{N}}}~{}\cdot$$ (5) The above integrals represent generalizations of the integrals (1) and (3), in that they contain arbitrary powers ${\nu}_{i}\in\bf N$ of the propagators in the integrand, where $\{\nu_{i}\}$ is the shorthand notation for ($\nu_{1},\cdots,\nu_{N}$). Also, for notational simplicity, the external momenta are omitted from the argument of the integral. The Feynman parameter representation of the tensor integral $I^{N}_{\mu_{1}\cdots\mu_{P}}(D;\{\nu_{i}\})$ given in (4), which is valid for arbitrary values of $N$, $P$, $r_{i}$ and $\nu_{i}(>0)$, and for those values of $D$ for which the remaining integral is finite and the $\Gamma-$function does not diverge, is $$\displaystyle I^{N}_{\mu_{1}\cdots\mu_{P}}(D;\{\nu_{i}\})=\frac{{\rm i}}{(4\pi)^{2}}(4\pi\mu^{2})^{2-D/2}\!\!\sum_{k,j_{1},\cdots,j_{N}\geq 0\atop 2k+{\scriptstyle\Sigma}j_{i}=P}\!\!\left\{[g]^{k}[r_{1}]^{j_{1}}\cdots[r_{N}]^{j_{N}}\right\}_{\mu_{1}\cdots\mu_{P}}$$ (6) $$\displaystyle\times\,\frac{\Gamma\left(\sum\nolimits_{i}\nu_{i}-D/2-k\right)}{2^{k}\,\left[\prod\nolimits_{i}\Gamma(\nu_{i})\right]}(-1)^{\Sigma_{i}\nu_{i}+P-k}\int_{0}^{1}\!\left(\prod\nolimits_{i}{\rm d}y_{i}y_{i}^{\nu_{i}+j_{i}-1}\right)$$ $$\displaystyle\times\delta\left(\sum\nolimits_{i=1}^{N}y_{i}-1\right)\left[\,-\sum_{i,j=1\atop i<j}^{N}y_{i}y_{j}\left(r_{i}-r_{j}\right)^{2}-{\rm i}\epsilon\,\right]^{k+D/2-\Sigma_{i}\nu_{i}},$$ where $\{[g]^{k}[r_{1}]^{j_{1}}\cdots[r_{N}]^{j_{N}}\}_{\mu_{1}\cdots\mu_{P}}$ represents a 
symmetric (with respect to $\mu_{1}\cdots\mu_{P}$) combination of tensors, each term of which is composed of $k$ metric tensors and $j_{i}$ external momenta $r_{i}$. Thus, for example, $$\left\{gr_{1}\right\}_{\mu_{1}\mu_{2}\mu_{3}}=g_{\mu_{1}\mu_{2}}r_{1\mu_{3}}+g_{\mu_{1}\mu_{3}}r_{1\mu_{2}}+g_{\mu_{2}\mu_{3}}r_{1\mu_{1}}.$$ The integral representation of the corresponding scalar integral (5) is of the form $$\displaystyle I^{N}_{0}(D;\{\nu_{i}\})=\frac{{\rm i}}{(4\pi)^{2}}(4\pi\mu^{2})^{2-D/2}\frac{\Gamma\left(\sum\nolimits_{i=1}^{N}\nu_{i}-D/2\right)}{\prod\nolimits_{i=1}^{N}\Gamma(\nu_{i})}(-1)^{\Sigma_{i=1}^{N}\nu_{i}}$$ (7) $$\displaystyle\times\,\int_{0}^{1}\!\left(\prod_{i=1}^{N}{\rm d}y_{i}y_{i}^{\nu_{i}-1}\right)\delta\left(\sum_{i=1}^{N}y_{i}-1\right)\left[\,-\sum_{i,j=1\atop i<j}^{N}y_{i}y_{j}\left(r_{i}-r_{j}\right)^{2}-{\rm i}\epsilon\,\right]^{D/2-\Sigma_{i=1}^{N}\nu_{i}}.$$ Now, on the basis of (7), (6) can be written in the form $$\displaystyle I^{N}_{\mu_{1}\cdots\mu_{P}}(D;\{\nu_{i}\})=\sum_{k,j_{1},\cdots,j_{N}\geq 0\atop 2k+{\scriptstyle\Sigma}j_{i}=P}\left\{[g]^{k}[r_{1}]^{j_{1}}\cdots[r_{N}]^{j_{N}}\right\}_{\mu_{1}\cdots\mu_{P}}$$ (8) $$\displaystyle{}\times\frac{(4\pi\mu^{2})^{P-k}}{(-2)^{k}}\left[\prod_{i=1}^{N}\frac{\Gamma(\nu_{i}+j_{i})}{\Gamma(\nu_{i})}\right]I^{N}_{0}(D+2(P-k);\{\nu_{i}+j_{i}\}).$$ This is the desired decomposition of the dimensionally regulated $N-$point rank $P$ tensor integral; it was originally obtained in Ref. [2]. Based on (8), any dimensionally regulated $N-$point tensor integral can be expressed as a linear combination of $N-$point scalar integrals multiplied by tensor structures made from the metric tensor $g^{\mu\nu}$ and external momenta. Therefore, with the decomposition (8), the problem of calculating the tensor integrals has been reduced to the calculation of the general scalar integrals. 
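The combinatorial content of the symbol $\{[g]^{k}[r_{1}]^{j_{1}}\cdots[r_{N}]^{j_{N}}\}_{\mu_{1}\cdots\mu_{P}}$ is simply the set of distinct assignments of the Lorentz indices to the metric tensors and momentum slots. A small sketch (the function name is ours) that reproduces the $\{gr_{1}\}_{\mu_{1}\mu_{2}\mu_{3}}$ example given above:

```python
from itertools import combinations

def g_r_terms(indices):
    """Enumerate the terms of {g r}_{mu1 mu2 mu3}: choose an unordered
    index pair for the metric tensor g; the leftover index goes to the
    momentum r. Returns one (g_pair, r_index) tuple per term of the
    symmetric combination."""
    terms = []
    for pair in combinations(indices, 2):
        rest = [i for i in indices if i not in pair][0]
        terms.append((pair, rest))
    return terms

terms = g_r_terms(("mu1", "mu2", "mu3"))
# Three terms, matching
# g_{mu1 mu2} r_{mu3} + g_{mu1 mu3} r_{mu2} + g_{mu2 mu3} r_{mu1}.
assert len(terms) == 3
```

For a general term with $k$ metric tensors and momentum multiplicities $j_{1},\cdots,j_{N}$ the same idea applies with nested pair choices; the number of resulting terms is what makes the decomposition (8) grow so quickly for large rank $P$.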
It should be pointed out that among the tensor reduction methods presented in the literature one can find methods, e.g. [7], which for $N\geq 5$ completely avoid the terms proportional to the metric tensor $g^{\mu\nu}$. Compared with the method expressed by (8), these reduction procedures lead to a decomposition containing a smaller number of terms. The methods of this type are based on the assumption that for $N\geq 5$ one can find four linearly independent 4-vectors forming a basis of the 4-dimensional Minkowski space, in terms of which the metric tensor can then be expressed. This assumption is usually not realized when analyzing exclusive processes at large momentum transfer (hard-scattering picture) in PQCD. Thus, for example, in order to obtain the next-to-leading order corrections to the hard-scattering amplitude for proton-Compton scattering, one has to evaluate one-loop $N=8$ diagrams. The set of external momenta contains two subsets comprised of three collinear momenta (representing the proton). The kinematics of the process is thus limited to a 3-dimensional subspace. If this is the case, the best way of performing the tensor decomposition is the one based on formula (8), regardless of the fact that for large $N$ the number of terms obtained can be very large. As is well known, the direct evaluation of the general scalar integral (5) (i.e. (7)) represents a non-trivial problem. However, with the help of recursion relations, the problem can be significantly simplified, in the sense that the calculation of the original scalar integral can be reduced to the calculation of a certain number of simpler fundamental (basic) integrals. 4 Recursion relations for scalar integrals Recursion relations for scalar integrals have been known for some time [3, 4, 5, 6, 7]. However, as it turns out, the existing set of relations that can be found in the literature is not sufficient to perform the reduction procedure completely, i.e. 
for all one-loop integrals appearing in practice. The problem is related to the vanishing of various kinematic determinants; it is manifest for the cases corresponding to $N>6$, and it is especially acute when evaluating one-loop Feynman integrals appearing in the NLO analysis of large momentum transfer exclusive processes in PQCD. As is well known, these processes are generally described in terms of Feynman diagrams containing a large number of external massless lines. Thus, for example, for nucleon Compton scattering $N=8$. A large number of external lines implies a large number of diagrams to be considered, as well as a very large number of terms generated when performing the tensor decomposition using (8). In view of the above, the use of computers is unavoidable when treating Feynman integrals (diagrams) with a large number of external lines. This requires that the scalar reduction procedure be generally applicable. It is therefore absolutely clear that any ambiguity or uncertainty present in the scalar recursion relations constitutes a serious problem. The method presented below makes it possible to perform the reduction completely, regardless of the kinematics of the process considered and the complexity of the structure of the contributing diagrams. For completeness and clarity of presentation, and with the aim of comparison with already existing results, we now briefly present a few main steps of the derivation of the recursion relations. It should be pointed out that the derivation essentially represents a variation of the derivation originally given in [5]. Recursion relations for scalar integrals are obtained with the help of the integration-by-parts method [5, 13, 14]. 
Owing to translational invariance, the dimensionally regulated integrals satisfy the following identity: $$0\equiv\int\frac{{\rm d}^{D}l}{(2\pi)^{D}}\frac{\partial}{\partial l^{\mu}}\left(\frac{z_{0}l^{\mu}+\sum\nolimits_{i=1}^{N}z_{i}r_{i}^{\mu}}{A_{1}^{\nu_{1}}\cdots A_{N}^{\nu_{N}}}\right)~{},$$ (9) where $z_{i}~{}(i=0,\cdots,N)$ are arbitrary constants, while $A_{i}$ are the propagators given by (2). The identity (9) is a variation of the identity used in [5], where it was assumed that $r_{N}=0$. Performing the differentiation, expressing the scalar products in the numerator in terms of the propagators $A_{i}$, choosing $z_{0}=\sum_{i=1}^{N}z_{i}$ (which we assume in the following), and taking into account the scalar integral (5), the identity (9) leads to the relation $$\displaystyle\sum_{j=1}^{N}\left(\sum_{i=1}^{N}\left[(r_{j}-r_{i})^{2}+2{\rm i}\epsilon\right]z_{i}\right)\nu_{j}I_{0}^{N}(D;\{\nu_{k}+\delta_{kj}\})$$ (10) $$\displaystyle=$$ $$\displaystyle\!\!\!\sum_{i,j=1}^{N}z_{i}\nu_{j}I_{0}^{N}(D;\{\nu_{k}+\delta_{kj}-\delta_{ki}\})-(D-\sum_{j=1}^{N}\nu_{j})z_{0}I_{0}^{N}(D;\{\nu_{k}\}),$$ where $\delta_{ij}$ is the Kronecker delta symbol. In arriving at (10), it has been understood that $$I_{0}^{N}(D;\nu_{1},\cdots,\nu_{l-1},0,\nu_{l+1},\cdots,\nu_{N})\equiv I_{0}^{N-1}(D;\nu_{1},\cdots,\nu_{l-1},\nu_{l+1},\cdots,\nu_{N}).$$ (11) The relation (10) represents the starting point for the derivation of the recursion relations for scalar integrals. We obtain the fundamental set of recursion relations by choosing the arbitrary constants $z_{i}$ so as to satisfy the following system of linear equations: $$\sum\nolimits_{i=1}^{N}(r_{i}-r_{j})^{2}z_{i}=C,\quad j=1,\ldots,N,$$ (12) where $C$ is an arbitrary constant. 
Introducing the notation $r_{ij}=(r_{i}-r_{j})^{2}$, the system (12) may be written in matrix notation as $$\left(\begin{array}[]{cccc}0&r_{12}&\cdots&r_{1N}\\ r_{12}&0&\cdots&r_{2N}\\ \vdots&\vdots&\ddots&\vdots\\ r_{1N}&r_{2N}&\cdots&0\end{array}\right)\left(\begin{array}[]{c}z_{1}\\ z_{2}\\ \vdots\\ z_{N}\end{array}\right)=\left(\begin{array}[]{c}C\\ C\\ \vdots\\ C\end{array}\right).$$ (13) It should be pointed out that the expression of the type (10) and the system of the type (13) for the case of massive propagators ($A_{i}=(l+r_{i})^{2}-m_{i}^{2}+{\rm i}\epsilon$), considered in Ref. [5], can simply be obtained from the relations given above by making the change $r_{ij}\rightarrow r_{ij}-m_{i}^{2}-m_{j}^{2}$. Consequently, considerations performed for the massive case [5, 6] apply to the massless case, and vice versa. It should be mentioned that, in the existing literature, the constant $C$ used to be chosen as a real number different from zero. However, it is precisely this fact that, in the end, leads to the breakdown of the existing scalar reduction methods. Namely, for some kinematics (e.g. collinear on-shell external lines) the system (13) has no solution for $C\neq 0$. However, if the possibility $C=0$ is allowed, the system (13) has a solution regardless of kinematics. This makes it possible to obtain additional reduction relations and to formulate methods applicable to an arbitrary number of external lines and to arbitrary kinematics. 
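In practice, solving the system (13) amounts to building the symmetric matrix with entries $r_{ij}=(r_{i}-r_{j})^{2}$ and attempting to solve against the constant right-hand side; a singular matrix signals that the choice $C\neq 0$ is unavailable and the $C=0$ branch must be taken. A numerical sketch (helper names are ours; the momenta $r_{i}$ below are hypothetical, chosen only to make the matrix nonsingular):

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus Minkowski metric

def build_R(r):
    """Matrix of system (13): entries r_ij = (r_i - r_j)^2, with the
    square taken with the Minkowski metric."""
    r = np.asarray(r, dtype=float)
    d = r[:, None, :] - r[None, :, :]            # d[i, j] = r_i - r_j
    return np.einsum('ijk,kl,ijl->ij', d, ETA, d)

def solve_z(R, C=1.0):
    """Solve R z = C * (1, ..., 1); return None if R is singular,
    signalling the fallback to the C = 0 case discussed in the text."""
    try:
        return np.linalg.solve(R, np.full(R.shape[0], C))
    except np.linalg.LinAlgError:
        return None

# Hypothetical N = 3 kinematic point with a nonsingular matrix.
r = [(0.0, 0.0, 0.0, 0.0), (1.0, 0.0, 0.0, 0.0), (0.0, 2.0, 0.0, 0.0)]
R = build_R(r)
z = solve_z(R)                      # solution exists: det != 0 here
assert np.allclose(R @ z, 1.0)
```

Note that `np.linalg.solve` only raises for exactly singular matrices; in a production reduction code one would instead inspect the rank or determinant with an explicit tolerance, as the nearly singular kinematic configurations are precisely the troublesome ones.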
If (12) is taken into account, and after using the relation [2] $$-\sum\nolimits_{j=1}^{N}\nu_{j}I_{0}^{N}(D;\{\nu_{k}+\delta_{kj}\})=(4\pi\mu^{2})^{-1}I^{N}_{0}(D-2;\{\nu_{k}\}),$$ (14) which can easily be proved from the representation (7), the relation (10) reduces to $$\displaystyle C\,I_{0}^{N}(D-2;\{\nu_{k}\})$$ $$\displaystyle=$$ $$\displaystyle\sum_{i=1}^{N}z_{i}I_{0}^{N}(D-2;\{\nu_{k}-\delta_{ki}\})$$ (15) $$\displaystyle+(4\pi\mu^{2})(D-1-\sum_{j=1}^{N}\nu_{j})z_{0}I_{0}^{N}(D;\{\nu_{k}\}),$$ where $z_{i}$ are given by the solution of the system (12), and the infinitesimal part proportional to ${\rm i}\epsilon$ has been omitted. This is a generalized form of the recursion relation which connects scalar integrals in different numbers of dimensions [3, 4, 5, 6, 7]. The use of the relation (15) in practical calculations depends on the form of the solution of the system of equations (12). For general considerations, it is advantageous to write the system (12) in the following way: $$\left(\begin{array}[]{ccccc}0&1&1&\cdots&1\\ 1&0&r_{12}&\cdots&r_{1N}\\ 1&r_{12}&0&\cdots&r_{2N}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&r_{1N}&r_{2N}&\cdots&0\end{array}\right)\left(\begin{array}[]{c}-C\\ z_{1}\\ z_{2}\\ \vdots\\ z_{N}\end{array}\right)=\left(\begin{array}[]{c}z_{0}\\ 0\\ 0\\ \vdots\\ 0\end{array}\right).$$ (16) In writing (16), we have taken into account the fact that $z_{0}=\sum_{i=1}^{N}z_{i}$. In this way, the only free parameter is $z_{0}$, and by choosing it in a convenient way one can always find a solution of the above system and, consequently, be able to use the recursion relations (15). In the literature, for example in Refs. [5, 6, 7], the recursion relations are obtained by inserting the general solution of the system (12), i.e. the system (16), into the relation (15). The recursion relations thus obtained are of limited practical use if the matrices of the mentioned systems are singular. 
This happens when there are two or more collinear external lines or, in general, for $N>6$. When this is the case, the analysis of the general coefficient of the recursion becomes very complicated and in many cases unmanageable. There are cases when all coefficients vanish. As stated in [6], for $N\geq 7$, owing to the drastic reduction of the recurrence relations, these cases need a separate investigation. In addition, the above-mentioned problems with $C\neq 0$ appear. To avoid these problems, a different approach to the recursion relations can be taken. It is based on the fact that finding any solution of the systems of equations mentioned above makes it possible to perform the reduction. Being forced to use computers, it is very convenient and important that the reduction procedure be organized in such a way that the recursion relations are classified and used depending on the form of the solutions of the above systems. If this is done, the increased singularity of the kinematic determinants turns out to work in our favour by making it easy to find a solution of the systems of linear equations relevant to the reduction. In the following we frequently refer to two determinants, for which we introduce the notation: for the determinant of the system (13) we write det$(R_{N})$, while for the determinant of the system (16) we use det$(S_{N})$. Depending on whether the kinematic determinants det$(R_{N})$ and det$(S_{N})$ are equal to zero or not, we distinguish four different types of recursion relations following from (15). Before proceeding to consider the various cases, note that in the case when det$(S_{N})\neq 0$ the following holds: $$C=-z_{0}\frac{{\rm det}(R_{N})}{{\rm det}(S_{N})}.$$ (17) It should be mentioned that for some of the recursion relations presented below one can find similar expressions in the literature. 
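The four cases can be distinguished automatically: border $R_{N}$ with ones to form $S_{N}$ as in (16), and inspect which of ${\rm det}(R_{N})$, ${\rm det}(S_{N})$ vanish. A sketch under our own naming, using a hypothetical $3\times 3$ kinematic matrix for which both determinants are nonzero:

```python
import numpy as np

def build_S(R):
    """Border R_N with a row and column of ones and a zero corner,
    giving the matrix S_N of system (16)."""
    N = R.shape[0]
    S = np.zeros((N + 1, N + 1))
    S[0, 1:] = 1.0
    S[1:, 0] = 1.0
    S[1:, 1:] = R
    return S

def classify(R, tol=1e-10):
    """Return the case label 'I'..'IV' according to which of
    det(S_N) and det(R_N) vanish (within a numerical tolerance)."""
    dS = np.linalg.det(build_S(R))
    dR = np.linalg.det(R)
    if abs(dS) > tol:
        return "I" if abs(dR) > tol else "II"   # (18) resp. (22)
    return "III" if abs(dR) > tol else "IV"     # (23) resp. (26)/(27)

# Hypothetical kinematic matrix: det(R_3) = 24, det(S_3) = 16.
R = np.array([[ 0.0,  1.0, -4.0],
              [ 1.0,  0.0, -3.0],
              [-4.0, -3.0,  0.0]])
assert classify(R) == "I"     # both determinants nonzero: Case I
```

In Case I the constant $C$ then follows from (17) with $z_{0}=1$; in the remaining cases one solves the (rank-deficient) systems directly, as described below.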
For reasons of clarity, the connections of the relations given below with those existing in the literature are commented upon after all possible cases have been analyzed. Let us now discuss all possible cases separately. 4.1 Case I:   ${\rm det}(S_{N})\neq 0$, ${\rm det}(R_{N})\neq 0$ The most convenient choice in this case is $z_{0}=1$. It follows from (17) that $C\neq 0$, so that the recursion relation (15) can be written in the following form: $$\displaystyle I_{0}^{N}(D;\{\nu_{k}\})$$ $$\displaystyle=$$ $$\displaystyle\frac{1}{4\pi\mu^{2}(D-1-\sum\nolimits_{j=1}^{N}\nu_{j})}\bigg{[}C\,I_{0}^{N}(D-2;\{\nu_{k}\})$$ (18) $$\displaystyle-\sum\nolimits_{i=1}^{N}z_{i}I_{0}^{N}(D-2;\{\nu_{k}-\delta_{ki}\})\bigg{]}.$$ As can be seen, this recursion relation connects the scalar integral in $D$ dimensions with scalar integrals in $D-2$ dimensions and can be used to reduce the dimensionality of the scalar integral. Since det$(R_{N})\neq 0$, some more recursion relations can be derived directly from (10). By choosing the constants $z_{i}$ in (10) in such a way that $z_{i}=\delta_{ik}$, for $k=1,\cdots,N$, we arrive at a system of $N$ equations which is always valid: $$\displaystyle\sum_{j=1}^{N}(r_{k}-r_{j})^{2}\nu_{j}I_{0}^{N}(D;\{\nu_{i}+\delta_{ij}\})$$ $$\displaystyle=$$ $$\displaystyle\sum_{j=1}^{N}\nu_{j}I_{0}^{N}(D;\{\nu_{i}+\delta_{ij}-\delta_{ik}\})$$ (19) $$\displaystyle-(D-\sum_{j=1}^{N}\nu_{j})I_{0}^{N}(D;\{\nu_{i}\}),\qquad k=1,\cdots,N.$$ In the system (19) we have again disregarded the non-essential infinitesimal term proportional to ${\rm i}\epsilon$. The matrix of the system (19) is the same as the matrix of the system (12), whose determinant is different from zero, so that the system (19) can be solved with respect to $I_{0}^{N}(D;\{\nu_{i}+\delta_{ij}\})$, $j=1,\cdots,N$. The solutions represent recursion relations which can be used to reduce the powers of the propagators in the scalar integrals. 
Making use of these relations and the relation (18), each scalar integral $I_{0}^{N}(D;\{\nu_{i}\})$ belonging to the type for which det$(S_{N})\neq 0$, det$(R_{N})\neq 0$ can be represented as a linear combination of the integrals $I_{0}^{N}(D^{\prime};\{1\})$ and integrals with fewer than $N$ propagators. For the dimension $D^{\prime}$, one usually chooses $4+2\varepsilon$, where $\varepsilon$ is the infinitesimal parameter regulating the divergences. Even in the case when one starts with $D<D^{\prime}$, one can make use of the recursion (18) to change from the dimension $D$ to the dimension $D^{\prime}$. In addition to the two sets of recursion relations presented above, by combining them one can obtain an additional and very useful set of recursion relations, which at the same time reduces $D$ and $\nu_{i}$ in all terms. By adding and subtracting the expression ${\delta}_{jk}I_{0}^{N}(D;\{\nu_{i}+\delta_{ij}-\delta_{ik}\})$ in the first term on the right-hand side of the system (19) and making use of the relation (14), one finds $$\displaystyle\sum_{j=1}^{N}(r_{k}-r_{j})^{2}\nu_{j}I_{0}^{N}(D;\{\nu_{i}+\delta_{ij}\})$$ $$\displaystyle=$$ $$\displaystyle-(4\pi{\mu}^{2})^{-1}I_{0}^{N}(D-2;\{\nu_{i}-\delta_{ik}\})$$ (20) $$\displaystyle-(D-1-\sum_{j=1}^{N}\nu_{j})I_{0}^{N}(D;\{\nu_{i}\}),\qquad k=1,\cdots,N.$$ The solution of this system of equations can in principle be used for reducing the dimension of the integral and the propagator powers. However, a much more useful set of recursion relations is obtained by combining (20) and (18). 
Expressing the second term on the right$-$hand side of (20) with the help of (18) leads to $$\displaystyle\sum_{j=1}^{N}(r_{k}-r_{j})^{2}\nu_{j}I_{0}^{N}(D;\{\nu_{i}+\delta_{ij}\})$$ $$\displaystyle=$$ $$\displaystyle(4\pi{\mu}^{2})^{-1}\bigg{[}\sum_{j=1}^{N}(z_{j}-{\delta}_{jk})I_{0}^{N}(D-2;\{\nu_{i}-\delta_{ij}\})$$ (21) $$\displaystyle-CI_{0}^{N}(D-2;\{\nu_{i}\})\bigg{]},~{}~{}~{}\qquad k=1,\cdots,N,$$ where $z_{i}$ and $C$ represent solutions of the system (16) for $z_{0}=1$. The solutions of the system (21) represent recursion relations which simultaneously reduce the dimension and the powers of the propagators in all terms (which is very important). As such, they are especially convenient for a rapid reduction of the scalar integrals appearing in the tensor decomposition of high-rank tensor integrals. 4.2 Case II:   ${\rm det}(S_{N})\neq 0$, ${\rm det}(R_{N})=0$ The most convenient choice in this case is $z_{0}=1$. Unlike in the preceding case, it follows from (17) that $C=0$, so that the recursion relation (15) can be written as $$I_{0}^{N}(D;\{\nu_{k}\})=\frac{1}{4\pi\mu^{2}(D-1-\sum\nolimits_{j=1}^{N}\nu_{j})}\left[-\sum\nolimits_{i=1}^{N}z_{i}I_{0}^{N}(D-2;\{\nu_{k}-\delta_{ki}\})\right].$$ (22) It follows from (22) that it is possible to represent each integral of this type as a linear combination of scalar integrals with fewer than $N$ propagators. 4.3 Case III:   ${\rm det}(S_{N})=0$, ${\rm det}(R_{N})\neq 0$ This possibility arises only if the first row of the matrix of the system (16) is a linear combination of the remaining rows. Then, the system (16) has a solution only for the choice $z_{0}=0$. With this choice, the remaining system of equations reduces to the system (12), where the constant $C$ can be chosen at will. After the parameter $C$ is chosen, the constants $z_{i}$ are uniquely determined. 
Thus the recursion relation (15) with the choice $C=1$ leads to $$I_{0}^{N}(D;\{\nu_{k}\})=\sum\nolimits_{i=1}^{N}z_{i}I_{0}^{N}(D;\{\nu_{k}-\delta_{ki}\}).$$ (23) Consequently, as in the preceding case, the scalar integrals of the type considered can be represented as a linear combination of scalar integrals with a smaller number of propagators. 4.4 Case IV:   ${\rm det}(S_{N})=0$, ${\rm det}(R_{N})=0$ Unlike in the preceding cases, in this case two different recursion relations arise. To derive them, we proceed by subtracting the last, $(N+1)$$-$th, equation of the system (16) from the second, third,… and $N$$-$th equation, respectively. As a result, we arrive at the following system of equations: $$\displaystyle\left(\begin{array}[]{ccccc}0&1&1&\cdots&1\\ 0&-r_{1N}&r_{12}-r_{2N}&\cdots&r_{1N}\\ 0&r_{12}-r_{1N}&-r_{2N}&\cdots&r_{2N}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 0&r_{1,N-1}-r_{1N}&r_{2,N-1}-r_{2N}&\cdots&r_{N-1,N}\\ 1&r_{1N}&r_{2N}&\cdots&0\end{array}\right)\left(\begin{array}[]{c}-C\\ z_{1}\\ z_{2}\\ \vdots\\ z_{N-1}\\ z_{N}\end{array}\right)=\left(\begin{array}[]{c}z_{0}\\ 0\\ 0\\ \vdots\\ 0\\ 0\end{array}\right).$$ (24) As can be seen, the first $N$ equations of the above system form a system in which the constant $C$ does not appear, and which can be used to determine the constants $z_{i}$, $i=1,\cdots,N$. The fact that det$(S_{N})=0$ implies that the determinant of this system vanishes. Therefore, for the system in question to be consistent (for a solution to exist), the choice $z_{0}=0$ has to be made. Consequently, the solution of the system, $z_{i}\,\,(i=1,2,\ldots,N)$, contains at least one free parameter. Inserting this solution into the last, $(N+1)$$-$th, equation of the system (24), we obtain $$\sum\nolimits_{i=1}^{N}r_{iN}z_{i}=C~{}.$$ (25) Now, by arbitrarily choosing the parameter $C$, one of the free parameters on the left-hand side can be fixed. 
Sometimes (for instance, when there are collinear external lines) the left-hand side of Eq. (25) vanishes explicitly, although the solution for $z_{i}$ contains free parameters. In this case the choice $C=0$ has to be made. Therefore, corresponding to the case when det$(S_{N})$=det$(R_{N})=0$, one of the following two recursion relations holds: $$I_{0}^{N}(D;\{\nu_{k}\})=\sum\nolimits_{i=1}^{N}z_{i}I_{0}^{N}(D;\{\nu_{k}-\delta_{ki}\}),$$ (26) obtained from (15) by setting $z_{0}=0$ and $C=1$, or $$0=\sum\nolimits_{i=1}^{N}z_{i}I_{0}^{N}(D;\{\nu_{k}-\delta_{ki}\}),$$ (27) obtained from (15) by setting $z_{0}=0$ and $C=0$. In the case (26), it is clear that the integral with $N$ external lines can be represented in terms of integrals with $N-1$ external lines. What happens, however, in the case (27)? With no loss of generality, we can assume that $z_{1}\neq 0$. The relation (27) can then be written in the form $$z_{1}I_{0}^{N}(D;\{\nu_{k}\})=-\sum\nolimits_{i=2}^{N}z_{i}I_{0}^{N}(D;\{\nu_{k}+\delta_{k1}-\delta_{ki}\}).$$ (28) We see that, in this case too, the integral with $N$ external lines can be represented in terms of integrals with $N-1$ external lines. In this reduction, $\sum\nolimits_{i=1}^{N}\nu_{i}$ remains conserved. Based on the above considerations, it is clear that in all the above cases, with the exception of the one where det$(S_{N})\neq 0$, det$(R_{N})\neq 0$, the integrals with $N$ external lines can be represented in terms of integrals with a smaller number of external lines. Consequently, there exists a fundamental set of integrals in terms of which all integrals can be represented as a linear combination. Before moving on to determine the fundamental set of integrals, let us briefly comment on the recursion relations for scalar integrals that can be found in the literature. As shown below, det$(S_{N})$ is proportional to the Gram determinant. All recursion relations for which the Gram determinant does not vanish are well known. 
Thus the relation of the type (18) can be found in Refs. [3, 4, 5, 6, 7], while the solutions of the systems (19) and (21) correspond to the recursion relations (28) and (30), respectively, given in Ref. [6]. Even though Case II also belongs to the class of cases for which the Gram determinant is different from zero, the system (13) has no solution for $C\neq 0$. This is the reason why problems with using the recursion relations appear in all approaches in which it is required that $C\neq 0$. This can be seen from the discussion in [7] (the method is based on the choice $C=1$), where the authors state that the reduction cannot be done for $N=3$ with on-shell external lines, and for $N=4$ when one of the Mandelstam variables $s$ or $t$ vanishes. Such cases, however, are unavoidable when obtaining leading twist NLO PQCD predictions for exclusive processes at large momentum transfer. On the other hand, in the approach of Ref. [6], where all coefficients of the recursion are given in terms of det$(S_{N})$ and the minors of the matrix $S_{N}$, the relation of the type (22) can be obtained (Eq. (35) in Ref. [6]). Cases III and IV, for which the Gram determinant vanishes, are of special interest. One of the most discussed cases in the literature, belonging to Case III, is $N=6$. The recursion relations of the type (23) can be found in Refs. [4, 5, 6, 7]. As for Case IV, it is especially interesting owing to the fact that it includes all cases for $N\geq 7$. In this case, the systems (13) and (16) have no unique solution. Case IV causes a lot of trouble for approaches in which the recursion coefficients are given in terms of det$(S_{N})$ and the minors of the matrix $S_{N}$, for example in Ref. [6]. The problem consists in the fact that all determinants vanish, making it impossible to formulate the recursion relation, so these cases need a separate investigation. On the other hand, the method of Ref. 
[7], based on using pseudo-inverse matrices, can be used to construct the most general solution of the system (13) for the case of the vanishing Gram determinant. Even though the authors of Ref. [7] claim that using their approach one can always perform the reduction of the $N$-point function ($N\geq 6$), that does not seem to be the case. Namely, the method in [7] is based on the choice $C=1$, and, as has been shown above, in some cases belonging to Case IV the system (12) has no solution for $C\neq 0$, implying that the reduction cannot be performed. The impossibility of performing the reduction manifests itself in the fact that $v\cdot K\cdot v=0$ (see (15) and (19) in [7]), as a consequence of which the recursion coefficients become divergent. A situation of this kind arises regularly when dealing with integrals containing collinear external lines, i.e. for exceptional kinematics. The method of Ref. [7] has been obtained for non-exceptional kinematics. In view of what has been said above, most of the problems with the existing reduction methods appear when dealing with integrals with a large number of external lines. In all considerations in the literature this happens for $N>6$. It is very important to point out that this is valid for the case when the external momenta span the 4-dimensional Minkowski space. If the dimensionality of the space spanned by the external momenta is smaller, the problems start appearing for smaller $N$. Even though one can find statements that such cases are at the moment of minor physical interest, we disagree. Namely, as stated earlier, the analysis of exclusive processes in PQCD, even for simple processes, requires the evaluation of diagrams with $N\geq 6$. Since these diagrams contain collinear external lines, the kinematics is limited to a ($d<4$)-dimensional subspace. Thus, for example, for nucleon Compton scattering the integrals with $N=8$ external lines contribute and the kinematics is limited to a 3-dimensional subspace. 
A consequence of this is that the problems with the existing reduction methods start appearing already at the level of one-loop $N=5$ diagrams. The reduction method presented in this paper is formulated with an eye on exclusive processes in PQCD. The main point of the method is that the reduction is defined in terms of the solutions of the linear systems given by (13) and (16). A consequence of this is that the method is quite general, very flexible, practical and easily transferred to a computer program. To perform the reduction, one only needs to find a solution of the above systems, which can always be done. A very pleasing feature of this reduction is that the increased singularity of the kinematic determinants facilitates the reduction, since finding a solution of the relevant linear systems becomes easy. 5 On the fundamental set of integrals We now turn to determining the fundamental set of integrals. To this end, let us first evaluate the determinant of the system (16), det$(S_{N})$, and determine the conditions under which this kinematic determinant vanishes. By subtracting the last column from the second, third, … and $N-$th column, respectively, and then the last row from the second, third, … and $N-$th row, respectively, we find that the determinant det$(S_{N})$ is given by the following expression: $${\rm det}(S_{N})=-{\rm det}\left[-2(r_{i}-r_{N})\cdot(r_{j}-r_{N})\right],\qquad i,j=1,\ldots,N-1.$$ (29) As can be seen, det$(S_{N})$ is proportional to the Gram determinant. Denote by $n$ the dimension of the vector space spanned by the vectors $r_{i}-r_{N}$,  ($i=1,\ldots,N-1$). Owing to the linear dependence of these vectors, the determinant vanishes when $N>n+1$. Since in practice we deal with the 4$-$dimensional Minkowski space, the maximum value of $n$ is 4. An immediate consequence of this is that all integrals with $N>5$ can be reduced to integrals with $N\leq 5$. 
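The vanishing of det$(S_{N})$ for $N>n+1$ is easy to verify numerically: for $N=6$ generic momenta in 4 dimensions, the five difference vectors $r_{i}-r_{N}$ are necessarily linearly dependent, so the Gram determinant in (29) vanishes whatever the kinematics. A sketch (helper name is ours; the kinematic points are random or hypothetical):

```python
import numpy as np

ETA = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus Minkowski metric

def det_S_via_gram(r):
    """Evaluate det(S_N) through Eq. (29):
    -det[-2 (r_i - r_N).(r_j - r_N)], i, j = 1..N-1, with Minkowski
    scalar products."""
    r = np.asarray(r, dtype=float)
    d = r[:-1] - r[-1]                   # the N-1 difference vectors
    return -np.linalg.det(-2.0 * d @ ETA @ d.T)

rng = np.random.default_rng(0)
# N = 5: four difference vectors in 4 dimensions, generically independent,
# so det(S_5) is generically nonzero.
assert abs(det_S_via_gram(rng.standard_normal((5, 4)))) > 1e-8
# N = 6: five difference vectors in 4 dimensions are linearly dependent,
# so det(S_6) vanishes for any kinematics, as stated in the text.
assert abs(det_S_via_gram(rng.standard_normal((6, 4)))) < 1e-6
```

The same check exposes the exceptional kinematics discussed earlier: collinear external lines make the difference vectors degenerate already for small $N$, pushing the configuration into Cases III or IV below the generic threshold $N=6$.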
In view of what has been said above, all one-loop integrals are expressible in terms of the integrals $I_{0}^{3}(4+2\varepsilon;\{1\})$, $I_{0}^{4}(4+2\varepsilon;\{1\})$ and $I_{0}^{5}(4+2\varepsilon;\{1\})$, belonging to Case I, and the general two$-$point integrals $I_{0}^{2}(D^{\prime};{\nu}_{1},{\nu}_{2})$, which are simple enough to be evaluated analytically. Next, by substituting $D=6+2\varepsilon$, $N=5$ and ${\nu}_{i}=1$ into the recursion relation (15), one finds $$C\,I_{0}^{5}(4+2\varepsilon;\{1\})=\sum_{i=1}^{5}z_{i}I_{0}^{5}(4+2\varepsilon;\{\delta_{kk}-\delta_{ki}\})+(4\pi\mu^{2})(2{\varepsilon})~{}z_{0}~{}I_{0}^{5}(6+2\varepsilon;\{1\}).$$ (30) Owing to the fact that the integral $I_{0}^{5}(6+2\varepsilon;\{1\})$ is IR finite [4], the relation (30) implies that the $N=5$ scalar integral, $I_{0}^{5}(4+2\varepsilon;\{1\})$, can be expressed as a linear combination of the $N=4$ scalar integrals, $I_{0}^{4}(4+2\varepsilon;\{1\})$, plus a term linear in $\varepsilon$. In massless scalar theories, the term linear in $\varepsilon$ can simply be omitted, with the consequence that the $N=5$ integrals can be reduced to the $N=4$ integrals. On the other hand, when calculating in renormalizable gauge theories (like QCD), the situation is not so simple, owing to the fact that rank $P(\leq N)$ tensor integrals are required. In the process of the tensor decomposition, and then of the reduction of the scalar integrals all the way down to the fundamental set of integrals, there appears a term of the form $(1/{\varepsilon})I_{0}^{5}(4+2\varepsilon;\{1\})$, which implies that one would need to know an analytical expression for the integral $I_{0}^{5}(4+2\varepsilon;\{1\})$ to order $\varepsilon$. Going back to the expression (30), we notice that all such terms can be written as a linear combination of the box (4$-$point) integrals in $4+2\varepsilon$ dimensions and 5$-$point integrals in $6+2\varepsilon$ dimensions. 
Therefore, at this point, the problem has been reduced to calculating the integral $I_{0}^{5}(6+2\varepsilon;\{1\})$, which is IR finite and needs to be calculated only to order ${\cal O}({\varepsilon}^{0})$. It is an empirical fact [4, 5, 6, 7, 15] that in final expressions for physical quantities all terms containing the integral $I_{0}^{5}(6+2\varepsilon;\{1\})$ always combine so that this integral ends up being multiplied by coefficients of order ${\cal O}(\varepsilon)$, and as such can be omitted in one$-$loop calculations. A few theoretical proofs of this fact can be found in the literature [4, 6, 7], but, to the best of our knowledge, the proof for the case of exceptional kinematics is still missing. That being the case, in concrete calculations (to be sure and to have all the steps of the calculation under control), it is absolutely necessary to keep track of all the terms containing the integral $I_{0}^{5}(6+2\varepsilon;\{1\})$, add them up and check whether the factor multiplying it is of order ${\cal O}(\varepsilon)$. Even if, contrary to the experience gained in numerous calculations, the integral $I_{0}^{5}(6+2\varepsilon;\{1\})$ were to appear in the final result for a physical quantity accompanied by a factor of order ${\cal O}(1)$, this would not, from a practical point of view, present any problem. Namely, being IR finite, although extremely complicated to evaluate analytically, the integral $I_{0}^{5}(6+2\varepsilon;\{1\})$ can always, if necessary, be evaluated numerically. Based on the above considerations, we may conclude that all one$-$loop integrals occurring when evaluating physical processes in massless field theories can be expressed in terms of the integrals $$I_{0}^{2}(D^{\prime};{\nu}_{1},{\nu}_{2}),~{}~{}~{}I_{0}^{3}(4+2\varepsilon;1,1,1),~{}~{}~{}I_{0}^{4}(4+2\varepsilon;1,1,1,1).$$ These integrals, therefore, constitute a minimal set of fundamental integrals. 
In view of the above discussion, we conclude that the set of fundamental integrals is comprised of integrals with two, three and four external lines. Integrals with two external lines can be calculated analytically in an arbitrary number of dimensions and with arbitrary powers of the propagators; they do not constitute a problem. As far as the integrals with three and four external lines are concerned, depending on how many kinematic variables vanish, we distinguish several different cases. We now show that in the case $N=3$ there is only one fundamental integral, while in the case $N=4$ there are six. For this purpose, we make use of the vanishing of the kinematic determinants det$(R_{N})$ and det$(S_{N})$. 5.1 The general scalar integral for $N=2$ According to (5), the general massless scalar two$-$point integral in $D$ space$-$time dimensions is of the form $$I_{0}^{2}(D;\nu_{1},\nu_{2})\equiv(\mu^{2})^{2-D/2}\int\frac{{\rm d}^{D}l}{(2\pi)^{D}}\frac{1}{A_{1}^{{\nu}_{1}}A_{2}^{{\nu}_{2}}}\cdot$$ (31) The closed form expression for the above integral, valid for arbitrary $D=n+2\varepsilon$ and arbitrary propagator powers ${\nu}_{1}$ and ${\nu}_{2}$, is given by $$\displaystyle I^{2}_{0}(n+2\varepsilon;\nu_{1},\nu_{2})$$ $$\displaystyle=$$ $$\displaystyle(4\pi\mu^{2})^{2-n/2}(-1)^{\nu_{1}+\nu_{2}}\left(-p^{2}-{\rm i}\epsilon\right)^{n/2-\nu_{1}-\nu_{2}}$$ (32) $$\displaystyle\times$$ $$\displaystyle\frac{\Gamma\left(\nu_{1}+\nu_{2}-n/2-\varepsilon\right)}{\Gamma(-\varepsilon)}\frac{\Gamma\left(n/2-\nu_{1}+\varepsilon\right)}{\Gamma(1+\varepsilon)}\frac{\Gamma\left(n/2-\nu_{2}+\varepsilon\right)}{\Gamma(1+\varepsilon)}$$ $$\displaystyle\times$$ $$\displaystyle\frac{1}{\Gamma(\nu_{1})\Gamma(\nu_{2})}\frac{\Gamma(2+2\varepsilon)}{\Gamma(n-\nu_{1}-\nu_{2}+2\varepsilon)}I_{0}^{2}(4+2\varepsilon,1,1)\,,$$ where $$\displaystyle I^{2}_{0}(4+2\varepsilon;1,1)$$ $$\displaystyle=$$ $$\displaystyle\frac{{\rm i}}{(4\pi)^{2}}\left(-\frac{p^{2}+{\rm i}\epsilon}{4\pi\mu^{2}}\right)^{\varepsilon}\frac{\Gamma\left(-\varepsilon\right)\Gamma^{2}\left(1+\varepsilon\right)}{\Gamma\left(2+2\varepsilon\right)}\,.$$ (33) It is easily seen that in dimensional regularization the above integral vanishes for $p^{2}=0$. 5.2 The scalar integrals for $N=3$ The massless scalar one$-$loop triangle integral in $D=4+2\varepsilon$ dimensions is given by $$I_{0}^{3}(4+2\varepsilon,\{1\})=({\mu}^{2})^{-\varepsilon}\int\frac{{\rm d}^{4+2\varepsilon}l}{(2\pi)^{4+2\varepsilon}}\frac{1}{A_{1}A_{2}A_{3}}~{}.$$ (34) Making use of the representation (7), and introducing the external masses $p_{i}^{2}=m_{i}^{2}$ $(i=1,2,3)$, the integral (34) can be written in the form $$\displaystyle I_{0}^{3}(4+2\varepsilon,\{1\})$$ $$\displaystyle=$$ $$\displaystyle\frac{-{\rm i}}{(4\pi)^{2}}\frac{\Gamma(1-\varepsilon)}{(4\pi{\mu}^{2})^{\varepsilon}}\int_{0}^{1}{\rm d}x_{1}{\rm d}x_{2}{\rm d}x_{3}\;\delta(x_{1}+x_{2}+x_{3}-1)$$ (35) $$\displaystyle\times\left(-x_{1}x_{2}\;m_{2}^{2}-x_{2}x_{3}\;m_{3}^{2}-x_{3}x_{1}\;m_{1}^{2}-{\rm i}\epsilon\right)^{\varepsilon-1}.$$ It is evident that the above integral is invariant under permutations of the external masses $m_{i}^{2}$. Depending on the number of external massless lines, and using the above mentioned symmetry, there are three relevant special cases of the above integral. 
We denote them by $$\displaystyle I_{3}^{1m}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{3}(4+2\varepsilon,\{1\};0,0,m_{3}^{2}),$$ (36) $$\displaystyle I_{3}^{2m}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{3}(4+2\varepsilon,\{1\};0,m_{2}^{2},m_{3}^{2}),$$ (37) $$\displaystyle I_{3}^{3m}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{3}(4,\{1\};m_{1}^{2},m_{2}^{2},m_{3}^{2}).$$ (38) The integrals $I_{3}^{1m}$ and $I_{3}^{2m}$ are IR divergent and need to be evaluated with $\varepsilon>0$, while the integral $I_{3}^{3m}$ is finite and can be calculated with $\varepsilon=0$. Now, it is easily found that the determinants of the systems of equations (13) and (16) are, for $N=3$, given by $$\displaystyle{\rm det}(R_{3})$$ $$\displaystyle=$$ $$\displaystyle 2m_{1}^{2}m_{2}^{2}m_{3}^{2},$$ (39) $$\displaystyle{\rm det}(S_{3})$$ $$\displaystyle=$$ $$\displaystyle(m_{1}^{2})^{2}+(m_{2}^{2})^{2}+(m_{3}^{2})^{2}-2m_{1}^{2}m_{2}^{2}-2m_{1}^{2}m_{3}^{2}-2m_{2}^{2}m_{3}^{2}.$$ (40) As is seen from (39), if at least one of the external lines is on mass-shell, the determinant det$(R_{3})$ vanishes. Consequently, using the recursion relations (Case II or IV), the integrals $I_{3}^{1m}$ and $I_{3}^{2m}$ can be reduced to integrals with two external lines. Therefore, we conclude that among the scalar integrals with three external lines the integral $I_{3}^{3m}$ is the only fundamental one. The result for this integral is well known [12, 13, 16]. In [12] it is expressed in terms of the dimensionless quantities $$x_{1,2}=\frac{1}{2}\left[1-\frac{m_{1}^{2}}{m_{2}^{2}}+\frac{m_{3}^{2}}{m_{2}^{2}}\pm\sqrt{\left(1-\frac{m_{1}^{2}}{m_{2}^{2}}-\frac{m_{3}^{2}}{m_{2}^{2}}\right)^{2}-4\frac{m_{1}^{2}}{m_{2}^{2}}\frac{m_{3}^{2}}{m_{2}^{2}}}\right]$$ (41) and, being proportional to $1/(x_{1}-x_{2})$, has an apparent pole at $x_{1}=x_{2}$; indeed, the final expression of [12] is not well defined when $x_{1}=x_{2}$. 
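These statements are easy to probe numerically: det$(S_{3})$ of (40) is precisely the Källén function $\lambda(m_{1}^{2},m_{2}^{2},m_{3}^{2})$, the roots (41) collide exactly where it vanishes, and the finite integral $I_{3}^{3m}$ itself can be obtained directly from the Feynman-parameter representation (35) at $\varepsilon=0$. A rough sketch (spacelike virtualities $m_{i}^{2}=-M_{i}^{2}$ so the integrand is real, sample mass values, and a crude midpoint rule accurate to a few per cent, for illustration only):

```python
import math

def det_S3(m1sq, m2sq, m3sq):
    # eq. (40); this is the Kallen function lambda(m1^2, m2^2, m3^2)
    return (m1sq**2 + m2sq**2 + m3sq**2
            - 2*m1sq*m2sq - 2*m1sq*m3sq - 2*m2sq*m3sq)

def roots_x(m1sq, m2sq, m3sq):
    # the dimensionless roots x_{1,2} of eq. (41); assumes det(S3) > 0
    a, c = m1sq/m2sq, m3sq/m2sq
    disc = math.sqrt((1 - a - c)**2 - 4*a*c)
    return 0.5*(1 - a + c + disc), 0.5*(1 - a + c - disc)

def triangle_simplex(M1sq, M2sq, M3sq, n=400):
    # crude midpoint rule for the eps = 0 Feynman-parameter integral of (35),
    # written for spacelike virtualities m_i^2 = -M_i^2 (all M_i^2 > 0);
    # I_3^{3m} is then -i/(4 pi)^2 times the value returned here
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x1 = (i + 0.5) * h
        for j in range(n):
            x2 = (j + 0.5) * h
            x3 = 1.0 - x1 - x2
            if x3 <= 0.0:
                break
            total += 1.0 / (x1*x2*M2sq + x2*x3*M3sq + x3*x1*M1sq)
    return total * h * h

# sample (hypothetical) squared masses: det(S3) = 8 > 0 here
x1, x2 = roots_x(1.0, 2.0, 7.0)
print(x1 - x2, math.sqrt(det_S3(1.0, 2.0, 7.0)) / 2.0)  # the two values coincide
print(det_S3(1.0, 4.0, 9.0))   # vanishes at the two-particle threshold
print(triangle_simplex(1.0, 1.0, 1.0, n=200))
```

The coincidence of the first two printed values is the degenerate-case criterion made quantitative, and the permutation symmetry of (35) is reflected in the numerical integral being unchanged under $M_{1}^{2}\leftrightarrow M_{3}^{2}$.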
On the basis of Eqs. (40) and (41), one finds that $$x_{1}-x_{2}=\frac{1}{m_{2}^{2}}\sqrt{{\rm det}(S_{3})}.$$ This equation implies that when $x_{1}-x_{2}=0$, instead of examining the limit of the general expression in [12], one can utilize the reduction relations (23) (corresponding to ${\rm det}(R_{3})\neq 0$ and ${\rm det}(S_{3})=0$) to reduce the IR finite integral with three external lines, $I_{3}^{3m}$, to integrals with two external lines. 5.3 The scalar integrals for $N=4$ The massless scalar one$-$loop box integral in $D=4+2\varepsilon$ space$-$time dimensions is given by $$I_{0}^{4}(4+2\varepsilon,\{1\})=({\mu}^{2})^{-\varepsilon}\int\frac{{\rm d}^{4+2\varepsilon}l}{(2\pi)^{4+2\varepsilon}}\frac{1}{A_{1}A_{2}A_{3}A_{4}}~{}.$$ (42) Making use of (7), introducing the external ``masses'' $p_{i}^{2}=m_{i}^{2}$ $(i=1,2,3,4)$, and the Mandelstam variables $s=(p_{1}+p_{2})^{2}$ and $t=(p_{2}+p_{3})^{2}$, the integral (42) becomes $$\displaystyle I_{0}^{4}(4+2\varepsilon,\{1\})=\frac{{\rm i}}{(4\pi)^{2}}\frac{\Gamma(2-\varepsilon)}{(4\pi{\mu}^{2})^{\varepsilon}}\int_{0}^{1}{\rm d}x_{1}{\rm d}x_{2}{\rm d}x_{3}{\rm d}x_{4}\;\delta(x_{1}+x_{2}+x_{3}+x_{4}-1)$$ (43) $$\displaystyle\times\left(-x_{1}x_{3}\;t-x_{2}x_{4}\;s-x_{1}x_{2}\;m^{2}_{2}-x_{2}x_{3}\;m^{2}_{3}-x_{3}x_{4}\;m^{2}_{4}-x_{1}x_{4}\;m^{2}_{1}-{\rm i}\epsilon\right)^{\varepsilon-2}.$$ Introducing the following set of ordered pairs $$\{(s,t),\,(m_{1}^{2},m_{3}^{2}),\,(m_{2}^{2},m_{4}^{2})\},$$ (44) one can easily see that the integral (43) is invariant under permutations of the ordered pairs, as well as under the simultaneous exchange of the elements in any two pairs. 
The determinants of the coefficient matrices of the systems of equations (13) and (16), corresponding to the above integral, are $$\displaystyle{\rm det}(R_{4})$$ $$\displaystyle=$$ $$\displaystyle{}s^{2}t^{2}+(m_{1}^{2}m_{3}^{2})^{2}+(m_{2}^{2}m_{4}^{2})^{2}$$ (45) $$\displaystyle{}-2stm_{1}^{2}m_{3}^{2}-2stm_{2}^{2}m_{4}^{2}-2m_{1}^{2}m_{2}^{2}m_{3}^{2}m_{4}^{2}\,,$$ $$\displaystyle{\rm det}(S_{4})$$ $$\displaystyle=$$ $$\displaystyle{}2\,[\,st\,(m_{1}^{2}+m_{2}^{2}+m_{3}^{2}+m_{4}^{2}-s-t)$$ (46) $$\displaystyle{}+m_{2}^{2}m_{4}^{2}(s+t+m_{1}^{2}+m_{3}^{2}-m_{2}^{2}-m_{4}^{2})$$ $$\displaystyle{}+m_{1}^{2}m_{3}^{2}(s+t-m_{1}^{2}-m_{3}^{2}+m_{2}^{2}+m_{4}^{2})$$ $$\displaystyle{}-s(m_{1}^{2}m_{2}^{2}+m_{3}^{2}m_{4}^{2})-t(m_{1}^{2}m_{4}^{2}+m_{2}^{2}m_{3}^{2})\,]~{}.$$ From the expression (45) for ${\rm det}(R_{4})$ it follows that all box integrals $I_{0}^{4}$ in which at least one kinematic variable vanishes in each of the ordered pairs in (44) are reducible. Therefore, for a box integral to be irreducible, it is necessary that both kinematic variables in at least one of the ordered pairs be different from zero. Owing to the symmetries valid for the box integrals, it is always possible to choose that pair to be $(s,t)$. Taking into account the symmetries and the number of external massless lines, there are six potentially irreducible special cases of the integral (43). Adopting the notation of Ref. 
[4], we denote them by $$\displaystyle I_{4}^{4m}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{4}(4,\{1\};s,t,m_{1}^{2},m_{2}^{2},m_{3}^{2},m_{4}^{2}),$$ (47) $$\displaystyle I_{4}^{3m}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{4}(4+2\varepsilon,\{1\};s,t,0,m_{2}^{2},m_{3}^{2},m_{4}^{2}),$$ (48) $$\displaystyle I_{4}^{2mh}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{4}(4+2\varepsilon,\{1\};s,t,0,0,m_{3}^{2},m_{4}^{2}),$$ (49) $$\displaystyle I_{4}^{2me}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{4}(4+2\varepsilon,\{1\};s,t,0,m_{2}^{2},0,m_{4}^{2}),$$ (50) $$\displaystyle I_{4}^{1m}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{4}(4+2\varepsilon,\{1\};s,t,0,0,0,m_{4}^{2}),$$ (51) $$\displaystyle I_{4}^{0m}$$ $$\displaystyle\equiv$$ $$\displaystyle I_{0}^{4}(4+2\varepsilon,\{1\};s,t,0,0,0,0),$$ (52) with all kinematic variables appearing above being different from zero. The results for these integrals are well known [4, 11, 12, 17]. The integrals (48)-(52) are IR divergent, and as such need to be evaluated with $\varepsilon>0$, while the integral (47) is finite and can be calculated in $D=4$. The results for these integrals, obtained in [11, 12] for arbitrary values of the relevant kinematic variables, and presented in a simple and compact form, have the following structure: $$\displaystyle I_{4}^{K}(s,t;{m_{i}^{2}})$$ $$\displaystyle=$$ $$\displaystyle{}\frac{i}{(4{\pi})^{2}}\,\frac{\Gamma(1-\varepsilon){\Gamma}^{2}(1+\varepsilon)}{\Gamma(1+2\varepsilon)}\,\frac{1}{\sqrt{{\rm det}(R_{4}^{K})}}$$ (53) $$\displaystyle{}\times\left[\frac{G^{K}(s,t;\varepsilon;{m_{i}^{2}})}{{\varepsilon}^{2}}+H^{K}(s,t;{m_{i}^{2}})\right]+{\cal O}(\varepsilon),$$ $$\displaystyle{}K\in\{0m,1m,2me,2mh,3m,4m\}\,.$$ The IR divergences (both soft and collinear) of these integrals are contained in the first term within the square brackets, while the second term is finite. 
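Since ${\rm det}(R_{4})$ controls both the reducibility of the box integrals and the $1/\sqrt{{\rm det}(R_{4}^{K})}$ prefactor in (53), it is worth noting that the expression for ${\rm det}(R_{4})$ is a Källén-type quadratic form in the ordered-pair products $st$, $m_{1}^{2}m_{3}^{2}$, $m_{2}^{2}m_{4}^{2}$, which makes the reducibility pattern transparent. A small sketch (sample kinematic values, for illustration only):

```python
def det_R4(s, t, m1sq, m2sq, m3sq, m4sq):
    # det(R4) as a Kallen-type quadratic form in the
    # ordered-pair products s*t, m1^2*m3^2, m2^2*m4^2 of (44)
    a, b, c = s*t, m1sq*m3sq, m2sq*m4sq
    return a*a + b*b + c*c - 2*a*b - 2*a*c - 2*b*c

# each ordered pair of (44) contains a zero -> all three products vanish -> reducible
print(det_R4(3.0, 0.0, 0.0, 5.0, 2.0, 0.0))   # 0.0

# the one-mass box I_4^{1m}: the pair (s, t) is fully non-zero -> det(R4) = (s t)^2
print(det_R4(-3.0, -5.0, 0.0, 0.0, 0.0, -7.0))  # 225.0

# the "easy" two-mass box I_4^{2me}: det(R4) = (s t - m2^2 m4^2)^2
print(det_R4(-3.0, -5.0, 0.0, -2.0, 0.0, -4.0))  # (15 - 8)^2 = 49.0
```

The last case also shows where the prefactor of (53) degenerates: $I_{4}^{2me}$ becomes reducible precisely when $st=m_{2}^{2}m_{4}^{2}$.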
The function $G^{K}(s,t;\varepsilon;{m_{i}^{2}})$ is a sum of power-like terms; it depends on $\varepsilon$ and is finite in the $\varepsilon\rightarrow 0$ limit. The function $H^{K}(s,t;{m_{i}^{2}})$ is given in terms of dilogarithm functions and constants. In the above, ${\rm det}(R_{4}^{K})$ is the determinant corresponding to the integral $I_{4}^{K}$ given in (47)-(52). For the purpose of numerical integration, it is very useful to have the exact limit of the integral $I_{4}^{K}$ when ${\rm det}(R_{4}^{K})\rightarrow 0$. This limit can be determined in an elegant manner by noticing that for ${\rm det}(R_{4}^{K})=0$ the reduction relations corresponding to Cases II and IV apply, making it possible to represent the box integral $I_{4}^{K}$ as a linear combination of triangle integrals. This result can be used to combine box and triangle integrals (or pieces of these integrals) with the aim of ensuring numerical stability of the integrand [9]. The integrals (47)-(52) are irreducible only if the corresponding kinematic determinant ${\rm det}(R_{4}^{K})$ does not vanish. With the help of the tensor decomposition and the scalar reduction procedures, any dimensionally regulated one$-$loop $N-$point Feynman integral can be represented as a linear combination of the integrals $$\displaystyle I_{0}^{2}(D^{\prime};{\nu}_{1},{\nu}_{2}),$$ $$\displaystyle I_{3}^{3m},$$ $$\displaystyle I_{4}^{4m},~{}I_{4}^{3m},~{}I_{4}^{2mh},~{}I_{4}^{2me},~{}I_{4}^{1m},~{}I_{4}^{0m},$$ (54) multiplied by tensor structures made from the external momenta and the metric tensor. The integrals in (54) constitute a fundamental set of integrals. An alternative and more convenient set of fundamental integrals is obtained by noticing that all the relevant box integrals are finite in $D=6$. On the basis of Eq. 
(15), all IR divergent box integrals can be expressed as linear combinations of triangle integrals in $D=4+2\varepsilon$ dimensions and a box integral in $D=6+2\varepsilon$ dimensions. Next, using the same equation, all triangle integrals can be decomposed into a finite triangle integral and two$-$point integrals. In the final expression thus obtained, all divergences, IR as well as UV, are contained in the general two-point integrals and the associated coefficients. Therefore, an alternative fundamental set of integrals is comprised of $$\displaystyle I_{0}^{2}(D^{\prime};{\nu}_{1},{\nu}_{2}),$$ $$\displaystyle I_{3}^{3m},$$ $$\displaystyle I_{4}^{4m},~{}J_{4}^{3m},~{}J_{4}^{2mh},~{}J_{4}^{2me},~{}J_{4}^{1m},~{}J_{4}^{0m},$$ (55) where $J_{4}^{K}$ denotes box integrals in $D=6$ dimensions, explicit expressions for which are given in Appendix A. A characteristic feature of this fundamental set of integrals, which makes it particularly interesting, is that the integral $I_{0}^{2}$ is the only divergent one, while the rest of the integrals are finite. 6 Conclusion In this work we have considered one$-$loop scalar and tensor Feynman integrals with an arbitrary number of external lines which are relevant for the construction of multi$-$parton one$-$loop amplitudes in massless field theories. The main result of this paper is a scalar reduction approach by which an arbitrary $N-$point scalar one$-$loop integral can be recursively represented as a linear combination of eight basic scalar integrals with rational coefficients depending on the external momenta and the dimensionality of space$-$time, provided the external momenta are kept in four dimensions. The problem of vanishing kinematic determinants, which is a reflection of the very complex singularity structure of these integrals, has been solved in an elegant and transparent manner. 
Namely, the approach has been taken according to which, instead of solving the general system of linear equations given in (12) and then finding the limit of the obtained solution corresponding to a given singular kinematic situation (a limit which sometimes does not exist), we first obtain and then solve the system of equations appropriate to the situation being considered. Our method has been derived without any restrictions regarding the external momenta. As such, it is completely general and applicable for arbitrary kinematics. In particular, it applies to the integrals in which the set of external momenta contains subsets comprised of two or more collinear momenta. Integrals of this kind are encountered when performing leading$-$twist NLO PQCD analyses of hadronic exclusive processes at large momentum transfer. Through the tensor decomposition and scalar reduction presented, any massless one-loop Feynman integral with generic 4-dimensional momenta can be expressed as a linear combination of a fundamental set of scalar integrals: six box integrals in $D=6$, a triangle integral in $D=4$, and a general two$-$point integral. All the divergences present in the original integral are contained in the general two-point integral and the associated coefficients. In conclusion, the computation of IR divergent one$-$loop integrals for an arbitrary number of external lines can be mastered with the reduction formulas presented above. The iterative structure makes it easy to implement the formalism in an algebraic computer program. With this work, all the conceptual problems concerning the construction of multi$-$parton one$-$loop amplitudes are thus solved. Acknowledgements We thank T. Binoth and G. Heinrich for useful discussions and helpful comments. This work was supported by the Ministry of Science and Technology of the Republic of Croatia under Contract No. 00980102. 
Appendix A In addition to the explicit calculation, the irreducible box integrals in $D=6$ dimensions can be obtained using the existing analytical expressions for the irreducible box integrals in $D=4+2\varepsilon$ dimensions and the reduction formula (15). To this end, we substitute $D=6+2\varepsilon$, $N=4$, ${\nu}_{i}=1$ and $C=1$ into the relation (15) and find $$I_{0}^{4}(6+2\varepsilon;\{1\})=\frac{1}{4\pi\mu^{2}(2{\varepsilon}+1)~{}z_{0}% }\left(I_{0}^{4}(4+2\varepsilon;\{1\})-\sum_{i=1}^{4}z_{i}I_{0}^{4}(4+2% \varepsilon;\{\delta_{kk}-\delta_{ki}\})\right)~{}.$$ (56) Note that the IR divergences in $D=4+2\varepsilon$ box integrals are exactly cancelled by the divergences of the triangle integrals. The expressions for the relevant basic massless scalar box integrals in $D=6$ space$-$time dimensions are listed below: The three$-$mass scalar box integral $$\displaystyle I^{3m}_{4}(D=6;s,t;m_{2}^{2},m_{3}^{2},m_{4}^{2})=\frac{{\rm i}}% {(4\pi)^{2}}\,\frac{1}{4\pi\mu^{2}}\,$$ (57) $$\displaystyle{}\times h^{3m}\Bigg{\{}\frac{1}{2}\ln\left(\frac{s+{\rm i}% \epsilon}{m_{3}^{2}+{\rm i}\epsilon}\right)\ln\left(\frac{s+{\rm i}\epsilon}{m% _{4}^{2}+{\rm i}\epsilon}\right)+\frac{1}{2}\ln\left(\frac{t+{\rm i}\epsilon}{% m_{2}^{2}+{\rm i}\epsilon}\right)\ln\left(\frac{t+{\rm i}\epsilon}{m_{3}^{2}+{% \rm i}\epsilon}\right)$$ $$\displaystyle{}+\mbox{Li}_{2}\left(\,1-\frac{m_{2}^{2}+{\rm i}\epsilon}{t+{\rm i% }\epsilon}\,\right)+\mbox{Li}_{2}\left(\,1-\frac{m_{4}^{2}+{\rm i}\epsilon}{s+% {\rm i}\epsilon}\,\right)$$ $$\displaystyle{}+\mbox{Li}_{2}\Big{[}\,1-(s+{\rm i}\epsilon)\,f^{3m}\,\Big{]}+% \mbox{Li}_{2}\Big{[}\,1-(t+{\rm i}\epsilon)\,f^{3m}\,\Big{]}$$ $$\displaystyle{}-\mbox{Li}_{2}\Big{[}\,1-(m_{2}^{2}+{\rm i}\epsilon)\,f^{3m}\,% \Big{]}-\mbox{Li}_{2}\Big{[}\,1-(m_{4}^{2}+{\rm i}\epsilon)\,f^{3m}\,\Big{]}$$ $$\displaystyle{}-\frac{1}{2}\left(t-m_{2}^{2}-m_{3}^{2}+2m_{2}^{2}m_{3}^{2}% \frac{t-m_{4}^{2}}{st-m_{2}^{2}m_{4}^{2}}\right){\cal I}_{3}(m_{2}^{2},m_{3}^{% 
2},t)$$ $$\displaystyle{}-\frac{1}{2}\left(s-m_{3}^{2}-m_{4}^{2}+2m_{3}^{2}m_{4}^{2}% \frac{s-m_{2}^{2}}{st-m_{2}^{2}m_{4}^{2}}\right){\cal I}_{3}(m_{3}^{2},m_{4}^{% 2},s)\Bigg{\}}.$$ The adjacent (”hard”) two$-$mass scalar box integral $$\displaystyle I_{4}^{2mh}(D=6;s,t;m_{3}^{2},m_{4}^{2})=\frac{{\rm i}}{(4\pi)^{% 2}}\,\frac{1}{4\pi\mu^{2}}\,$$ (58) $$\displaystyle{}\times h^{2mh}\Bigg{\{}\frac{1}{2}\ln\left(\frac{s+{\rm i}% \epsilon}{m_{3}^{2}+{\rm i}\epsilon}\right)\ln\left(\frac{s+{\rm i}\epsilon}{m% _{4}^{2}+{\rm i}\epsilon}\right)+\mbox{Li}_{2}\left(\,1-\frac{m_{4}^{2}+{\rm i% }\epsilon}{s+{\rm i}\epsilon}\,\right)$$ $$\displaystyle{}-\mbox{Li}_{2}\left(\,1-\frac{m_{3}^{2}+{\rm i}\epsilon}{t+{\rm i% }\epsilon}\,\right)+\mbox{Li}_{2}\Big{[}\,1-(s+{\rm i}\epsilon)\,f^{2mh}\,\Big% {]}$$ $$\displaystyle{}+\mbox{Li}_{2}\Big{[}\,1-(t+{\rm i}\epsilon)\,f^{2mh}\,\Big{]}-% \mbox{Li}_{2}\Big{[}\,1-(m_{4}^{2}+{\rm i}\epsilon)\,f^{2mh}\,\Big{]}$$ $$\displaystyle{}-\frac{1}{2}\left(s-m_{3}^{2}-m_{4}^{2}+2\frac{m_{3}^{2}m_{4}^{% 2}}{t}\right){\cal I}_{3}(m_{3}^{2},m_{4}^{2},s)\Bigg{\}}.$$ The opposite (”easy”) two$-$mass scalar box integral $$\displaystyle I^{2me}_{4}(D=6;s,t;m_{2}^{2},m_{4}^{2})=\frac{{\rm i}}{(4\pi)^{% 2}}\,\frac{1}{4\pi\mu^{2}}\,$$ (59) $$\displaystyle{}\times h^{2me}\Bigg{\{}\,\mbox{Li}_{2}\Big{[}\,1-(s+{\rm i}% \epsilon)\,f^{2me}\,\Big{]}+\mbox{Li}_{2}\Big{[}\,1-(t+{\rm i}\epsilon)\,f^{2% me}\,\Big{]}$$ $$\displaystyle{}-\mbox{Li}_{2}\Big{[}\,1-(m_{2}^{2}+{\rm i}\epsilon)\,f^{2me}\,% \Big{]}-\mbox{Li}_{2}\Big{[}\,1-(m_{4}^{2}+{\rm i}\epsilon)\,f^{2me}\,\Big{]}% \,\Bigg{\}}.$$ The one$-$mass scalar box integral $$\displaystyle I^{1m}_{4}(D=6;s,t;m_{4}^{2})=\frac{{\rm i}}{(4\pi)^{2}}\,\frac{% 1}{4\pi\mu^{2}}\,$$ (60) $$\displaystyle{}\times h^{1m}\Bigg{\{}\,\mbox{Li}_{2}\Big{[}\,1-(s+{\rm i}% \epsilon)\,f^{1m}\,\Big{]}+\mbox{Li}_{2}\Big{[}\,1-(t+{\rm i}\epsilon)\,f^{1m}% \,\Big{]}$$ $$\displaystyle{}-\mbox{Li}_{2}\Big{[}\,1-(m_{4}^{2}+{\rm 
i}\epsilon)\,f^{1m}\,\Big{]}-\frac{{\pi}^{2}}{6}\,\Bigg{\}}.$$ The zero$-$mass (massless) scalar box integral $$\displaystyle I^{0m}_{4}(D=6;s,t)=\frac{{\rm i}}{(4\pi)^{2}}\,\frac{1}{4\pi\mu^{2}}\,$$ (61) $$\displaystyle{}\times h^{0m}\Bigg{\{}\,\mbox{Li}_{2}\Big{[}\,1-(s+{\rm i}\epsilon)\,f^{0m}\,\Big{]}+\mbox{Li}_{2}\Big{[}\,1-(t+{\rm i}\epsilon)\,f^{0m}\,\Big{]}-\frac{{\pi}^{2}}{3}\,\Bigg{\}},$$ where $$h^{K}=\left(-2\frac{\sqrt{{\rm det}(R_{4}^{K})}}{{~{}\rm det}(S_{4}^{K})}\right),$$ (62) and $$\frac{\rm i}{(4\pi)^{2}}{\cal I}_{3}(a,b,c)=I_{3}^{3m}(D=4;a,b,c)~{}.$$ (63) The functions appearing above are given by $$\displaystyle f^{3m}=f^{2me}$$ $$\displaystyle=$$ $$\displaystyle\frac{s+t-m_{2}^{2}-m_{4}^{2}}{st-m_{2}^{2}m_{4}^{2}}~{},$$ $$\displaystyle f^{2mh}=f^{1m}$$ $$\displaystyle=$$ $$\displaystyle\frac{s+t-m_{4}^{2}}{st}~{},$$ $$\displaystyle f^{0m}$$ $$\displaystyle=$$ $$\displaystyle\frac{s+t}{st}~{},$$ $$\displaystyle h^{3m}$$ $$\displaystyle=$$ $$\displaystyle\left(s+t-m_{2}^{2}-m_{3}^{2}-m_{4}^{2}+m_{3}^{2}\frac{m_{2}^{2}t+m_{4}^{2}s-2m_{2}^{2}m_{4}^{2}}{st-m_{2}^{2}m_{4}^{2}}\right)^{-1},$$ $$\displaystyle h^{2mh}$$ $$\displaystyle=$$ $$\displaystyle\left(s+t-m_{3}^{2}-m_{4}^{2}+\frac{m_{3}^{2}m_{4}^{2}}{t}\right)^{-1},$$ $$\displaystyle h^{2me}$$ $$\displaystyle=$$ $$\displaystyle\left(s+t-m_{2}^{2}-m_{4}^{2}\right)^{-1},$$ $$\displaystyle h^{1m}$$ $$\displaystyle=$$ $$\displaystyle\left(s+t-m_{4}^{2}\right)^{-1},$$ $$\displaystyle h^{0m}$$ $$\displaystyle=$$ $$\displaystyle\left(s+t\right)^{-1}.$$ Appendix B As an illustration of the tensor decomposition and scalar reduction methods, we evaluate the one-loop 6-point Feynman diagram shown in Fig. 2. Note that, since the kinematics is confined to a three$-$dimensional Minkowski subspace, there are no four linearly independent four-vectors. Consequently, this diagram is of the same complexity as a 7-point one-loop diagram with four$-$dimensional external kinematics. 
We choose this particular diagram because of the compactness of the intermediate and final expressions. It is one of the 462 diagrams contributing to the NLO hard-scattering amplitude for the exclusive process $\gamma(k_{1},\varepsilon_{1})\,\gamma(k_{2},\varepsilon_{2})\rightarrow\pi^{+}(P_{+})\,\pi^{-}(P_{-})$ (with both photons on-shell) at large momentum transfer. In the $\gamma\,\gamma$ centre-of-mass frame, the 4-momenta of the incoming and outgoing particles are $$k_{1,\,2}={\sqrt{s}}/2\,(1,\mp\sin\theta_{c.m.},0,\pm\cos\theta_{c.m.}),\quad P_{\pm}={\sqrt{s}}/2\,(1,0,0,\pm 1),$$ (64) while the polarization states of the photons are $$\varepsilon_{1}^{\pm}=\varepsilon_{2}^{\mp}=\mp 1/{\sqrt{2}}\,(0,\cos\theta_{c.m.},\pm i,\sin\theta_{c.m.}),$$ (65) where ${\sqrt{s}}$ is the total centre-of-mass energy of the $\gamma\,\gamma$ system (or the invariant mass of the $\pi^{+}\,\pi^{-}$ pair). For example, taking $\theta_{c.m.}=\pi/2$ and assuming that the photons have opposite helicities, the amplitude corresponding to the Feynman diagram of Fig. 
2 is proportional to the integral $${\cal I}=\frac{(\mu^{2})^{-\varepsilon}}{2}\int\frac{{\rm d}^{4+2\varepsilon}l}{(4\pi)^{4+2\varepsilon}}\,\frac{{\rm Tr}\left[\gamma_{\mu}\gamma_{5}\,\slashed{P}_{+}\,\gamma^{\mu}(\slashed{l}+\slashed{p}_{3})\,\slashed{\varepsilon}_{1}(\slashed{l}-\slashed{p}_{4})\,\gamma_{\nu}\gamma_{5}\,\slashed{P}_{-}\,\gamma^{\nu}(\slashed{l}+\slashed{p}_{5})\,\slashed{\varepsilon}_{2}(\slashed{l}+\slashed{p}_{6})\right]}{l^{2}(l+p_{3})^{2}(l-p_{4})^{2}(l+p_{5})^{2}(l+p_{6})^{2}(l+p_{7})^{2}},$$ (66) with the momenta $p_{i}$ $(i=1,\ldots,7)$ given by $$\displaystyle{}p_{1}=\overline{x}\,P_{+},\,\,\,\,p_{2}=x\,P_{+},\,\,\,\,p_{3}=k_{1}-\overline{y}\,P_{-},\,\,\,\,p_{4}=\overline{y}\,P_{-},$$ $$\displaystyle{}p_{5}=y\,P_{-},\,\,\,\,p_{6}=y\,P_{-}-k_{2},\,\,\,\,p_{7}=\overline{x}\,P_{+}+y\,P_{-}-k_{2}.$$ (67) The quantities $x$ and $\overline{x}\equiv 1-x$ ($y$ and $\overline{y}\equiv 1-y$) are the fractions of the momentum $P_{+}$ ($P_{-}$) shared between the quark and the antiquark in the $\pi^{+}$ ($\pi^{-}$). With the aim of regularizing the IR divergences, the dimension of the integral is taken to be $D=4+2\varepsilon$. The integral ${\cal I}$ is composed of one-loop 6-point tensor integrals of rank 0, 1, 2, 3 and 4. 
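The kinematics (64), (65) and the momenta (67) admit a quick numerical sanity check: masslessness of the external particles, transversality of the polarization vectors, and, for instance, the virtuality $p_{3}^{2}=-s\,\overline{y}/2$ that shows up among the arguments in (69). A sketch at $\theta_{c.m.}=\pi/2$ with hypothetical values $s=2$, $x=0.3$, $y=0.6$:

```python
import math

def mdot(a, b):
    # Minkowski product, metric (+,-,-,-); complex components allowed
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

s = 2.0                       # hypothetical value of the invariant mass squared
rs2 = math.sqrt(s) / 2.0
th = math.pi / 2.0            # theta_c.m. = pi/2, as in the text

k1 = (rs2, -rs2*math.sin(th), 0.0,  rs2*math.cos(th))
k2 = (rs2,  rs2*math.sin(th), 0.0, -rs2*math.cos(th))
Pp = (rs2, 0.0, 0.0,  rs2)
Pm = (rs2, 0.0, 0.0, -rs2)

# photon polarization of eq. (65), helicity +
e1 = tuple(-c/math.sqrt(2) for c in (0.0, math.cos(th), 1j, math.sin(th)))

x, y = 0.3, 0.6               # hypothetical momentum fractions
yb = 1.0 - y
p3 = tuple(a - yb*b for a, b in zip(k1, Pm))

print(mdot(k1, k1), mdot(Pp, Pp))   # photons and pions are massless
print(2.0*mdot(k1, k2))             # reproduces s
print(abs(mdot(k1, e1)))            # transversality
print(mdot(p3, p3), -s*yb/2.0)      # the virtuality entering (69)
```

The same bookkeeping extends to the remaining virtualities $p_{i}^{2}$, which is how the arguments of the box and two-point integrals in (69) arise.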
Performing the tensor decomposition and evaluating the trace, we obtain the integral ${\cal I}$ in the form $$\displaystyle{\cal I}$$ $$\displaystyle=$$ $$\displaystyle-2{\left(1+\varepsilon\right)}^{2}\,{\Big{[}}\,24\,s^{3}\,x\,% \overline{x}\,y\,\overline{y}\,{\left(4\,\pi\,{\mu}^{2}\right)}^{4}\,I_{0}^{6}% (12+2\,\varepsilon,\{1,1,1,1,1,5\})$$ (68) $$\displaystyle{}+6\,s^{3}\,\overline{y}\,y\,{\left(4\,\pi\,{\mu}^{2}\right)}^{4% }\,I_{0}^{6}(12+2\,\varepsilon,\{1,4,1,1,2,1\})$$ $$\displaystyle{}+2\,s^{3}\,\overline{y}\,\left(y-\overline{y}\right)\,{\left(4% \,\pi\,{\mu}^{2}\right)}^{4}\,I_{0}^{6}(12+2\,\varepsilon,\{1,3,2,1,2,1\})$$ $$\displaystyle{}+8\,s^{3}\,y\,\overline{y}\,{\left(4\,\pi\,{\mu}^{2}\right)}^{4% }\,I_{0}^{6}(12+2\,\varepsilon,\{1,3,1,1,3,1\})$$ $$\displaystyle{}+2\,s^{3}\,x\,y\,\overline{y}\,{\left(4\,\pi\,{\mu}^{2}\right)}% ^{4}\,I_{0}^{6}(12+2\,\varepsilon,\{1,2,2,2,1,2\})$$ $$\displaystyle{}+s^{3}\,\overline{y}\,\left(y-\overline{y}\right)\,{\left(4\,% \pi\,{\mu}^{2}\right)}^{3}\,I_{0}^{6}(10+2\,\varepsilon,\{1,2,2,1,2,1\})$$ $$\displaystyle{}+s^{2}\,y\,\overline{y}\,\left(1+\varepsilon\right)\,\left(4\,% \pi\,{\mu}^{2}\right)\,I_{0}^{6}(6+2\,\varepsilon,\{1,1,1,1,1,1\})$$ $$\displaystyle{}+s\,\left(1+\varepsilon\right)\,\left(2+\varepsilon\right)\,{% \left(4\,\pi\,{\mu}^{2}\right)}^{2}\,I_{0}^{6}(8+2\,\varepsilon,\{1,1,1,1,1,1\})$$ $$\displaystyle{}+\ldots\mbox{75 similar terms}\,{\Big{]}}.$$ Next, performing the scalar reduction using the method described in the paper, we arrive at the following expression for the integral written in terms of the basic integrals: $$\displaystyle{\cal I}=8\,(1+\varepsilon)^{2}\left\{(4\,\pi\,{\mu}^{2})\left[% \frac{\varepsilon}{\overline{x}}\,I_{4}^{1m}(6+2\,\varepsilon;-s/2,-s\,% \overline{y}/2;-s\,y/2)\right.\right.$$ (69) $$\displaystyle{}+\frac{1+\varepsilon}{\overline{x}}\,I_{4}^{1m}(6+2\,% \varepsilon;-s\,y/2,-s/2;-s\,\overline{y}/2)$$ 
$$\displaystyle{}+\left(1+\varepsilon\left(1-\frac{x}{\overline{x}}\right)\right% )\,I_{4}^{1m}(6+2\,\varepsilon;-s\,x/2,-s\,\overline{y}/2;-s\,(\overline{x}\,% \overline{y}+x\,y)/2)$$ $$\displaystyle{}\left.+\left(-\frac{x}{\overline{x}}+\varepsilon\left(1-\frac{x% }{\overline{x}}\right)\right)\,I_{4}^{2me}(6+2\,\varepsilon;-s\,\overline{x}/2% ,-s\,\overline{y}/2;-s/2,-s\,(\overline{x}\,\overline{y}+x\,y)/2)\right]$$ $$\displaystyle{}+\frac{1}{s}\left[\frac{1}{(\overline{x}-x)\,y}\left(\frac{2\,% \overline{x}}{\varepsilon\,(\overline{x}-x)}+2-\frac{\overline{x}}{x}\right)\,% I_{2}(4+2\,\varepsilon;-s\,\overline{x}/2)\right.$$ $$\displaystyle{}+\frac{1}{(x-\overline{x})\,\overline{y}}\left(\frac{2\,x}{% \varepsilon\,(x-\overline{x})}+2-\frac{x}{\overline{x}}\right)\,I_{2}(4+2\,% \varepsilon;-s\,x/2)$$ $$\displaystyle{}+\frac{1}{(\overline{y}-y)\,x}\left(\frac{2\,\overline{y}}{% \varepsilon\,(\overline{y}-y)}+2-\frac{\overline{y}}{y}\right)\,I_{2}(4+2\,% \varepsilon;-s\,\overline{y}/2)$$ $$\displaystyle{}+\frac{1}{(y-\overline{y})\,\overline{x}}\left(\frac{2\,y}{% \varepsilon\,(y-\overline{y})}+2-\frac{y}{\overline{y}}\right)\,I_{2}(4+2\,% \varepsilon;-s\,y/2)$$ $$\displaystyle{}+\left(\frac{(1-x\,\overline{y}-3\,y\,\overline{x})(1-y\,% \overline{x}-3\,x\,\overline{y})}{x\,\overline{x}\,y\,\overline{y}\,(x-% \overline{x})(y-\overline{y})}+\frac{2(\overline{x}\,\overline{y}+x\,y)(8\,x\,% \overline{x}\,y\,\overline{y}-x\,\overline{x}-y\,\overline{y})}{\varepsilon\,x% \,\overline{x}\,y\,\overline{y}\,(x-\overline{x})^{2}(y-\overline{y})^{2}}\right)$$ $$\displaystyle{}\left.\left.\times\,I_{2}(4+2\,\varepsilon;-s\,(\overline{x}\,% \overline{y}+x\,y)/2)+\frac{(\overline{x}\,\overline{y}+x\,y)}{x\,\overline{x}% \,y\,\overline{y}}\,I_{2}(4+2\,\varepsilon;-s/2)\right]\right\}.$$ Here, $I_{2}$ is the two-point scalar integral in $D=4+2\,\varepsilon$ with $\nu_{i}=1$, while $I_{4}^{1m}$ and $I_{4}^{2me}$ are box scalar integrals in $D=6+2\,\varepsilon$. 
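The $\varepsilon^{0}$ pieces of the box integrals of Appendix A, and hence of the expansion of (69), are built from real dilogarithms. For numerical checks a minimal stdlib implementation of $\mbox{Li}_{2}$ suffices; the sketch below (an assumption-laden toy, not a production implementation) handles real arguments $x\leq 1$ by the power series for $|x|\leq 1/2$ together with the standard inversion, Landen and reflection identities:

```python
import math

def li2(x):
    # real dilogarithm Li2(x) for real x <= 1
    if x > 1.0:
        raise ValueError("real branch only defined for x <= 1")
    if x == 1.0:
        return math.pi**2 / 6.0
    if x < -1.0:
        # inversion: Li2(x) = -Li2(1/x) - pi^2/6 - (1/2) ln^2(-x)
        return -li2(1.0/x) - math.pi**2/6.0 - 0.5*math.log(-x)**2
    if x < -0.5:
        # Landen: Li2(x) = -Li2(x/(x-1)) - (1/2) ln^2(1-x)
        return -li2(x/(x - 1.0)) - 0.5*math.log(1.0 - x)**2
    if x > 0.5:
        # reflection: Li2(x) = pi^2/6 - ln(x) ln(1-x) - Li2(1-x)
        return math.pi**2/6.0 - math.log(x)*math.log(1.0 - x) - li2(1.0 - x)
    # power series sum_k x^k / k^2, fast for |x| <= 1/2
    total, term = 0.0, x
    for k in range(1, 200):
        total += term / (k*k)
        term *= x
    return total

print(li2(1.0), math.pi**2/6.0)     # pi^2/6
print(li2(-1.0), -math.pi**2/12.0)  # -pi^2/12
```

All arguments of the dilogarithms in (70) are of the form $\overline{x}-x$, $\pm(x-\overline{x})(y-\overline{y})$ and the like, i.e. they lie in $[-1,1]$, so the real branch is sufficient there.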
Analytic expressions for these integrals are given in the Appendix A. Expanding Eq. (69) up to order ${\cal O}(\varepsilon^{0})$, we finally get $$\displaystyle{\cal I}=\frac{i}{(4\,\pi)^{2}}\frac{8}{s}\left\{-\frac{1}{x\,% \overline{y}}{\rm Li}_{2}(\overline{x}-x)-\frac{1}{y\,\overline{x}}{\rm Li}_{2% }(x-\overline{x})-\frac{1}{y\,\overline{x}}{\rm Li}_{2}(\overline{y}-y)-\frac{% 1}{x\,\overline{y}}{\rm Li}_{2}(y-\overline{y})\right.$$ (70) $$\displaystyle{}+\frac{x\,\overline{y}+y\,\overline{x}}{x\,\overline{x}\,y\,% \overline{y}}{\rm Li}_{2}\left(-(x-\overline{x})(y-\overline{y})\right)+\frac{% \pi^{2}}{6}\frac{x\,\overline{y}+y\,\overline{x}}{x\,\overline{x}\,y\,% \overline{y}}-\frac{\overline{x}\,\overline{y}+x\,y}{x\,\overline{x}\,y\,% \overline{y}}\ln\!\left(\frac{s}{2\,\mu^{2}}\right)$$ $$\displaystyle{}+\frac{(\overline{x}-2\,x)}{x\,y\,(\overline{x}-x)}\,\ln\left(% \frac{s\,\overline{x}}{2\,\mu^{2}}\right)+\frac{(x-2\,\overline{x})}{\overline% {x}\,\overline{y}\,(x-\overline{x})}\,\ln\left(\frac{s\,x}{2\,\mu^{2}}\right)$$ $$\displaystyle{}+\frac{(\overline{y}-2\,y)}{x\,y\,(\overline{y}-y)}\,\ln\left(% \frac{s\,\overline{y}}{2\,\mu^{2}}\right)+\frac{(y-2\,\overline{y})}{\overline% {x}\,\overline{y}\,(y-\overline{y})}\,\ln\left(\frac{s\,y}{2\,\mu^{2}}\right)$$ $$\displaystyle{}-\frac{x}{\overline{y}\,(x-\overline{x})^{2}}\,\ln^{2}\left(% \frac{s\,x}{2\,\mu^{2}}\right)-\frac{\overline{x}}{y\,(x-\overline{x})^{2}}\,% \ln^{2}\left(\frac{s\,\overline{x}}{2\,\mu^{2}}\right)$$ $$\displaystyle{}-\frac{y}{\overline{x}\,(y-\overline{y})^{2}}\ln^{2}\!\left(% \frac{s\,y}{2\,\mu^{2}}\right)-\frac{\overline{y}}{x\,(y-\overline{y})^{2}}\ln% ^{2}\!\left(\frac{s\,\overline{y}}{2\,\mu^{2}}\right)$$ $$\displaystyle{}-\frac{(1-x\,\overline{y}-3\,y\,\overline{x})(1-y\,\overline{x}% -3\,x\,\overline{y})}{x\,\overline{x}\,y\,\overline{y}\,(x-\overline{x})(y-% \overline{y})}\,\ln\left(\frac{s\,(\overline{x}\,\overline{y}+x\,y)}{2\,\mu^{2% }}\right)$$ 
$$\displaystyle{}-\frac{(\overline{x}\,\overline{y}+x\,y)(8\,x\,\overline{x}\,y\,\overline{y}-x\,\overline{x}-y\,\overline{y})}{x\,\overline{x}\,y\,\overline{y}\,(x-\overline{x})^{2}(y-\overline{y})^{2}}\,\ln^{2}\left(\frac{s\,(\overline{x}\,\overline{y}+x\,y)}{2\,\mu^{2}}\right)$$ $$\displaystyle-\frac{2}{\hat{\varepsilon}}\left[\frac{x}{\overline{y}\,(x-\overline{x})^{2}}\,\ln\left(\frac{s\,x}{2\,\mu^{2}}\right)+\frac{\overline{x}}{y\,(x-\overline{x})^{2}}\,\ln\left(\frac{s\,\overline{x}}{2\,\mu^{2}}\right)\right.$$ $$\displaystyle{}+\frac{y}{\overline{x}\,(y-\overline{y})^{2}}\,\ln\left(\frac{s\,y}{2\,\mu^{2}}\right)+\frac{\overline{y}}{x\,(y-\overline{y})^{2}}\,\ln\left(\frac{s\,\overline{y}}{2\,\mu^{2}}\right)$$ $$\displaystyle{}\left.\left.+\frac{(\overline{x}\,\overline{y}+x\,y)(8\,x\,\overline{x}\,y\,\overline{y}-x\,\overline{x}-y\,\overline{y})}{x\,\overline{x}\,y\,\overline{y}\,(x-\overline{x})^{2}(y-\overline{y})^{2}}\,\ln\left(\frac{s\,(\overline{x}\,\overline{y}+x\,y)}{2\,\mu^{2}}\right)\right]\right\},$$ where $1/\hat{\varepsilon}=1/\varepsilon+\gamma-\ln(4\,\pi)$. References [1] G. Passarino and M. Veltman, Nucl. Phys. B 160 (1979) 151; G. J. van Oldenborgh and J. A. M. Vermaseren, Z. Phys. C 46 (1990) 425; W. L. van Neerven and J. A. M. Vermaseren, Phys. Lett. B 137 (1984) 241. [2] A. I. Davydychev, Phys. Lett. B 263 (1991) 107. [3] Z. Bern, L. J. Dixon and D. A. Kosower, Phys. Lett. B 302 (1993) 299 [Erratum-ibid. B 318 (1993) 649] [arXiv:hep-ph/9212308]. [4] Z. Bern, L. J. Dixon and D. A. Kosower, Nucl. Phys. B 412 (1994) 751 [arXiv:hep-ph/9306240]. [5] O. V. Tarasov, Phys. Rev. D 54 (1996) 6479 [arXiv:hep-th/9606018]. [6] J. Fleischer, F. Jegerlehner and O. V. Tarasov, Nucl. Phys. B 566 (2000) 423 [arXiv:hep-ph/9907327]. [7] T. Binoth, J. P. Guillet and G. Heinrich, Nucl. Phys. B 572 (2000) 361 [arXiv:hep-ph/9911342]; G. Heinrich and T. Binoth, Nucl. Phys. Proc. Suppl. 89 (2000) 246 [arXiv:hep-ph/0005324]. [8] T. Binoth, J. 
P. Guillet, G. Heinrich and C. Schubert, Nucl. Phys. B 615 (2001) 385 [arXiv:hep-ph/0106243]. [9] J. M. Campbell, E. W. Glover and D. J. Miller, Nucl. Phys. B 498 (1997) 397 [arXiv:hep-ph/9612413]. [10] A. Denner and S. Dittmaier, Nucl. Phys. B 658 (2003) 175 [arXiv:hep-ph/0212259]. [11] G. Duplančić and B. Nižić, Eur. Phys. J. C 20 (2001) 357 [arXiv:hep-ph/0006249]. [12] G. Duplančić and B. Nižić, Eur. Phys. J. C 24 (2002) 385 [arXiv:hep-ph/0201306]. [13] A. I. Davydychev, J. Phys. A 25 (1992) 5587. [14] K. G. Chetyrkin and F. V. Tkachov, Nucl. Phys. B 192 (1981) 159; F. V. Tkachov, Phys. Lett. B 100 (1981) 65. [15] F. Jegerlehner and O. Tarasov, Nucl. Phys. Proc. Suppl. 116 (2003) 83 [arXiv:hep-ph/0212004]. [16] A. T. Suzuki, E. S. Santos and A. G. Schmidt, Eur. Phys. J. C 26 (2002) 125 [arXiv:hep-th/0205158]. [17] C. Anastasiou, E. W. Glover and C. Oleari, Nucl. Phys. B 565 (2000) 445 [arXiv:hep-ph/9907523]. A. T. Suzuki, E. S. Santos and A. G. Schmidt, arXiv:hep-ph/0210083.
The mass and anisotropy profiles of galaxy clusters from the projected phase space density: testing the method on simulated data Radosław Wojtak,${}^{1}$ Ewa L. Łokas,${}^{1}$ Gary A. Mamon${}^{2,3}$ and Stefan Gottlöber${}^{4}$ ${}^{1}$Nicolaus Copernicus Astronomical Center, Bartycka 18, 00-716 Warsaw, Poland ${}^{2}$Institut d’Astrophysique de Paris (UMR 7095: CNRS and Université Pierre & Marie Curie), 98 bis Bd Arago, F-75014 Paris, France ${}^{3}$Astrophysics & BIPAC, University of Oxford, Keble Rd, OX1 3RH, Oxford, UK ${}^{4}$Astrophysikalisches Institut Potsdam, An der Sternwarte 16, 14482 Potsdam, Germany Abstract We present a new method of constraining the mass and velocity anisotropy profiles of galaxy clusters from kinematic data. The method is based on a model of the phase space density which allows the anisotropy to vary with radius between two asymptotic values. The characteristic scale of transition between these asymptotes is fixed and tuned to a typical anisotropy profile resulting from cosmological simulations. The model is parametrized by two values of anisotropy, at the centre of the cluster and at infinity, and two parameters of the NFW density profile, the scale radius and the scale mass. In order to test the performance of the method in reconstructing the true cluster parameters we analyze mock kinematic data for 20 relaxed galaxy clusters generated from a cosmological simulation of the standard $\Lambda$CDM model. We use Bayesian methods of inference and the analysis is carried out following the Markov Chain Monte Carlo approach. The parameters of the mass profile are reproduced quite well, but we note that the mass is typically underestimated by 15 percent, probably due to the presence of small velocity substructures. The constraints on the anisotropy profile for a single cluster are in general barely conclusive. 
Although the central asymptotic value is determined accurately, the outer one is subject to significant systematic errors caused by substructures at large clustercentric distance. The anisotropy profile is much better constrained if one performs a joint analysis of at least a few clusters. In this case it is possible to reproduce the radial variation of the anisotropy over two decades in radius inside the virial sphere. keywords: galaxies: clusters: general – galaxies: kinematics and dynamics – cosmology: dark matter 1 Introduction Kinematic data play an important role in dynamical studies of galaxy clusters. They offer a unique possibility to constrain simultaneously the total (dark and luminous) mass profile and the orbital structure of galaxies, which is commonly quantified by the anisotropy of the velocity dispersion tensor (Binney & Tremaine 2008). The main challenge of this approach is the fact that both factors, the mass and the anisotropy profile, are interconnected. In the particular case of the commonly used Jeans analysis of the velocity dispersion profile this leads to the well-known problem of the mass-anisotropy degeneracy, which hampers the constraining power of the data (e.g. Binney & Mamon 1982; Merritt 1987; Merrifield & Kent 1990). It turns out that in general there are two ways to break this degeneracy: one can either combine the results with other constraints on the mass profile or use a dynamical model going beyond the Jeans equation. The first solution has been widely applied in the literature. Promising constraints on the anisotropy of galactic orbits in clusters have been obtained by combining the results from velocity dispersion profiles with the mass constraints from X-ray gas (e.g. Benatov et al. 2006; Hwang & Lee 2008) or lensing data (e.g. Natarajan & Kneib 1996). 
A different approach was adopted by Biviano & Katgert (2004) who argued for an isotropic velocity dispersion tensor of early type galaxies in clusters from the ESO Nearby Abell Cluster Survey (ENACS). Assuming this property they inferred the total mass profile from velocity dispersions of ellipticals and used this result to constrain the anisotropy profile of late type galaxies. This approach was an implementation of the anisotropy inversion algorithm (Binney & Mamon 1982; Solanes & Salvador-Solé 1990). The second method of breaking the mass-anisotropy degeneracy relies on the extension of the classical Jeans formalism. The first natural step in this field is to consider the fourth velocity moment or kurtosis (Merrifield & Kent 1990). The method based on the joint fitting of velocity moments (dispersion and kurtosis) was shown to provide constraints both on the mean anisotropy of a system and the parameters of the mass profile (Łokas 2002; Łokas & Mamon 2003; Sanchis, Łokas & Mamon 2004; Łokas et al. 2006). These results confirmed the idea that any attempt to infer the anisotropy profile from kinematic data of spherical systems must be preceded by the construction of a detailed dynamical model. In general there are two ways to achieve a desired complexity of a model. One is the so-called Schwarzschild modelling (Schwarzschild 1979) in which one considers a superposition of base orbits defined in the integral space (e.g. Merritt & Saha 1993; Gerhard et al. 1998; Chanamé, Kleyna & van der Marel 2008). Another one, which we adopt in this work, is to provide a properly parametrized form for the phase space density. There were several studies devoted to the analysis of kinematic data in terms of the phase space density. An important step in this field was made by Kent & Gunn (1982) who used a family of simple analytical models of the distribution function to analyze the data for the Coma cluster. Van der Marel et al. 
(2000) obtained constraints on the anisotropy of 16 galaxy clusters from the CNOC1 (Canadian Network for Observational Cosmology) cluster redshift survey. A conceptually similar analysis was carried out by Mahdavi & Geller (2004) for galaxy groups and clusters. In all cases a constant anisotropy was assumed, which does not reproduce well the results of cosmological simulations, where a dependence of the anisotropy on radius is usually seen (e.g. Mamon & Łokas 2005; Wojtak et al. 2005; Ascasibar & Gottlöber 2008). Since the anisotropy profile has recently become a subject of growing interest, it appears reasonable to generalize the above methods so that both the mass and the anisotropy profiles may be inferred from the data. This implies that one has to deal with an anisotropic model of the phase space density which accounts for its radial variation. Quite recently several models satisfying this requirement have been proposed. The anisotropy profile is specified by a proper parametrization of the angular momentum part of the distribution function (Wojtak et al. 2008) or the augmented density (Van Hese, Baes & Dejonghe 2009). The purpose of the present work was to adapt the approach of Wojtak et al. (2008) to the Bayesian analysis of kinematic data and to test on mock data sets how well the mass and anisotropy profiles are reproduced. The paper is organized as follows. In section 2 we introduce a model of the phase space density and discuss its projection on to the plane of the sky. In section 3 we describe the mock kinematic data of galaxy clusters generated from a cosmological simulation. Section 4 provides technical details of the Markov Chain Monte Carlo analysis and section 5 presents the results. The discussion follows in section 6. 
2 The phase space density Any spherical system in equilibrium embedded in a fixed gravitational potential is described completely by the distribution function which depends on the phase space coordinates through the binding energy $E$ and the absolute value of the angular momentum $L$. In this work, we use the model of the distribution function recently proposed by Wojtak et al. (2008) which was shown to recover the spherically averaged phase space distribution of dark matter particles in simulated cluster-size haloes. The main idea of this approach lies in the assumption that the distribution function is separable in energy and angular momentum, i.e. $f(E,L)=f_{E}(E)f_{L}(L)$. The angular momentum part $f_{L}(L)$ is given by an analytical ansatz motivated by the purpose of providing an appropriate parametrization of the anisotropy profile, which is traditionally quantified by the so-called anisotropy parameter $$\beta(r)=1-\frac{\sigma_{\theta}^{2}(r)}{\sigma_{r}^{2}(r)},$$ (1) where $\sigma_{\theta}$ and $\sigma_{r}$ are the dispersions of the tangential and radial velocity, respectively. This part of the distribution function takes the following form $$f_{L}(L)=\Big{(}1+\frac{L^{2}}{2L_{0}^{2}}\Big{)}^{-\beta_{\infty}+\beta_{0}}L^{-2\beta_{0}},$$ (2) where $\beta_{0}$ and $\beta_{\infty}$ are the asymptotic values of the anisotropy parameter at the halo centre and at infinity, respectively. The scale of transition between these two asymptotes is determined by $L_{0}$, whereas a typical radial range of the growth or decrease of $\beta(r)$ is fixed at about 2 orders of magnitude centred on the radius corresponding to $L_{0}$ (see the anisotropy profiles in the top right panel of Fig. 1). Although some recent models of the distribution function offer a little more flexible parametrization of the anisotropy profile (e.g. Baes & Van Hese 2007; Van Hese et al. 
2009), we find that our choice is quite suitable for the purpose of this work, given that we wish to reproduce the variability of $\beta(r)$ with as few parameters as possible. The energy part of the distribution function $f_{E}(E)$ is given by the solution of the integral equation $$\rho(r)=\int\!\!\!\int\!\!\!\int f_{E}(E)\Big{(}1+\frac{L^{2}}{2L_{0}^{2}}\Big{)}^{-\beta_{\infty}+\beta_{0}}L^{-2\beta_{0}}\textrm{d}^{3}v.$$ (3) This equation can be simplified to a one-dimensional integral equation and then solved numerically for $f_{E}$ (see Appendix B in Wojtak et al. 2008). As an approximation for the density profile in (3) we use the NFW profile (Navarro, Frenk & White 1997), i.e. $$\rho(r/r_{s})=\frac{1}{4\pi(\ln 2-1/2)}\frac{1}{(r/r_{s})(1+r/r_{s})^{2}}\frac{M_{\rm s}}{r_{\rm s}^{3}},$$ (4) where $r_{s}$ is the scale radius and $M_{s}$ is the mass enclosed in a sphere of this radius. Both parameters of the mass profile provide natural scales for the phase space coordinates, so that any change of $M_{s}$ or $r_{s}$ corresponds to an expansion or contraction of the system in velocity or position space. For the sake of convenience, we use the scaling which fixes the range of the positively defined binding energy per unit mass $E$ at $[0,1]$, namely $r_{s}$ as the unit of radius and $V_{s}=(GM_{s}/r_{s})^{1/2}(\ln 2-1/2)^{-1/2}$ as the unit of velocity (see Wojtak et al. 2008). Due to projection effects, a fraction of the phase space is not accessible to observation. An observer is able to measure the velocity along the line of sight $v_{\rm los}$ and the position on the sky, which can be easily translated into the projected clustercentric distance $R$. This data set, when $v_{\rm los}$ is plotted versus $R$, is commonly referred to as the phase space diagram or the velocity diagram. Data points in such diagrams are distributed according to the projected phase space density, which is given by (e.g. 
Dejonghe & Merritt 1992; Merritt & Saha 1993; Mahdavi & Geller 2004) $$f_{\rm los}(R,v_{\rm los})=2\pi R\int_{-z_{\rm max}}^{z_{\rm max}}\!\!\!\!\textrm{d}z\int\!\!\!\int_{E>0}\!\!\!\!\textrm{d}v_{R}\textrm{d}v_{\phi}f_{E}(E)f_{L}(L),$$ (5) where $z$ is the distance along the line of sight from the cluster centre, while $v_{R}$ and $v_{\phi}$ are velocity components in cylindrical coordinates. We will hereafter refer to $f_{\rm los}(R,v_{\rm los})$ as the phase space density. The integration area of the last two integrals is given by the circle $v_{R}^{2}+v_{\phi}^{2}<2\Psi(\sqrt{R^{2}+z^{2}})-v_{\rm los}^{2}$, where $\Psi(r)$ is the positively defined gravitational potential for the NFW density profile, $\Psi(r)=V_{s}^{2}\ln(1+r/r_{s})/(r/r_{s})$ (Cole & Lacey 1996). The boundaries of the integral along the line of sight are given by the distance $\pm z_{\rm max}$ at which $v_{\rm los}$ equals the escape velocity. It is worth mentioning that in (5) we assume that the density profiles of the tracer (galaxies) and of dark matter are proportional to each other. Moreover, $f_{\rm los}$ is defined only up to a normalization, which must therefore be imposed by the following additional condition $$2\int_{0}^{R_{\rm max}}\!\!\!\textrm{d}R\int_{0}^{\sqrt{2\Psi(R)}}f_{\rm los}(R,v_{\rm los})\textrm{d}v_{\rm los}=1,$$ (6) where $R_{\rm max}$ is a cut-off radius of the phase space diagram. From (6) one can immediately infer that the normalization factor is $\{r_{s}V_{s}[M_{\rm los}(R_{\rm max})/M_{s}]\}^{-1}$, where $M_{\rm los}(R_{\rm max})$ is the projected mass within the aperture $R_{\rm max}$ for the NFW density profile (see Łokas & Mamon 2001 for an analytical expression). From now on when referring to the phase space density $f_{\rm los}$ we will always mean that it is properly normalized as in (6). We calculate the phase space density (5) numerically using the algorithm of Gaussian quadrature to evaluate each integral (Press et al. 1996). 
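The outer integration bound $z_{\rm max}$ entering (5) follows from the escape-velocity condition $2\Psi(\sqrt{R^{2}+z^{2}})=v_{\rm los}^{2}$. A minimal sketch of this step, in the scaled units $r_{s}=V_{s}=1$ introduced above (the bisection solver, its tolerances and the function names are our own choices, not part of the original code):

```python
import math

def psi_nfw(r, r_s=1.0, V_s=1.0):
    """Positively defined NFW potential, Psi(r) = V_s^2 ln(1 + r/r_s) / (r/r_s)."""
    x = r / r_s
    return V_s**2 * math.log(1.0 + x) / x

def z_max(R, v_los, r_s=1.0, V_s=1.0):
    """Line-of-sight bound of eq. (5): the distance z at which v_los equals
    the escape velocity, i.e. the root of 2 Psi(sqrt(R^2 + z^2)) = v_los^2.
    Requires v_los^2 < 2 Psi(R); solved here by simple bisection."""
    f = lambda z: 2.0 * psi_nfw(math.hypot(R, z), r_s, V_s) - v_los**2
    lo, hi = 0.0, r_s
    while f(hi) > 0.0:      # expand the bracket until it contains the root
        hi *= 2.0
    for _ in range(200):    # bisect; 200 steps reach machine precision
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)
```

Because $\Psi$ decreases monotonically, a larger $|v_{\rm los}|$ always yields a smaller $z_{\rm max}$, which is why a bracketing root finder suffices here.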
The energy part of the distribution function $f_{E}(E)$ is interpolated between points which are equally spaced in energy and provide the numerical solution of equation (3). In order to remove the improper boundaries of the integral along the line of sight for $v_{\rm los}=0$ we changed the variable $z$ into $\Psi$. Next, to avoid the singularity along the line $L^{2}=(R^{2}+z^{2})v_{\phi}^{2}+(v_{R}z-v_{\rm los}R)^{2}=0$ when $\beta_{0}>0$, we used an even number of abscissas for the variable $v_{\phi}$ so that $f_{L}(L)$ is never evaluated at $L=0$. Fig. 1 shows the contour maps of the phase space density $f_{\rm los}(R,v_{\rm los})$ for 5 different anisotropy profiles plotted in the top right panel. In all cases we used $R_{\rm max}=5r_{s}$, a typical virial radius of massive galaxy clusters, and the scale of transition between $\beta_{0}$ and $\beta_{\infty}$, $L_{0}=0.2V_{s}r_{s}$, which corresponds to $\sim 1r_{s}$. To facilitate comparison, in the four bottom panels we plot the differences between the given and the isotropic ($\beta_{0}=\beta_{\infty}=0$) $f_{\rm los}$. The contours below the $0.01$ threshold are neglected as insignificant for the typical number of data points considered in this work. The results shown in Fig. 1 are quite intuitive to interpret. We notice that $\beta_{0}$ modifies $f_{\rm los}$ in the central part of the diagram in such a way that radially biased models predict more high-velocity particles than tangentially biased ones (see the middle panels). On the other hand, $\beta_{\infty}$ influences the outer part of a diagram so that a tangentially biased anisotropy suppresses $f_{\rm los}$ at $v_{\rm los}=0$ and increases it for moderate velocities (see the bottom panels). 
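The division of labour between the two anisotropy parameters traces back to the asymptotic slopes of the ansatz (2): $f_{L}\propto L^{-2\beta_{0}}$ for $L\ll L_{0}$ and $f_{L}\propto L^{-2\beta_{\infty}}$ for $L\gg L_{0}$. A quick numerical check of both limits (a sketch; the parameter values $\beta_{0}=0.1$, $\beta_{\infty}=0.5$, $L_{0}=0.2$ are those used later in section 4.3, in units $V_{s}r_{s}=1$):

```python
import math

def f_L(L, beta0=0.1, beta_inf=0.5, L0=0.2):
    """Angular momentum part of the distribution function, eq. (2):
    (1 + L^2/(2 L0^2))^(-(beta_inf - beta0)) * L^(-2 beta0)."""
    return (1.0 + L**2 / (2.0 * L0**2))**(-(beta_inf - beta0)) * L**(-2.0 * beta0)

def log_slope(func, L, h=1e-5):
    """Logarithmic slope d ln f / d ln L via central differences."""
    return (math.log(func(L * math.exp(h))) -
            math.log(func(L * math.exp(-h)))) / (2.0 * h)
```

The inner slope tends to $-2\beta_{0}=-0.2$ and the outer one to $-2\beta_{\infty}=-1$, matching the central and outer behaviour seen in the panels.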
In the limit of $\beta_{\infty}\ll 0$ the velocity distribution at $R/r_{s}\gg 1$ takes the characteristic horn-like shape with a minimum at $v_{\rm los}=0$ and maxima at the circular velocity $\pm[GM(R)/R]^{1/2}$ (see van der Marel & Franx 1993 for a qualitative comparison). As a final remark on Fig. 1, let us note that the parameters of the mass profile, $M_{s}$ and $r_{s}$, manifest themselves only in the scaling of the axes of the diagram while preserving the shape of the phase space density. This property, when put together with the way $f_{\rm los}$ depends on the anisotropy, automatically excludes the mass-anisotropy degeneracy which is an intrinsic problem of the analysis based on the Jeans equation (see e.g. Merrifield & Kent 1990). 3 The mock catalogue of phase space diagrams The model of the phase space density introduced in the previous section provides an idealized description of the real data. It neglects various secondary effects that occur in real galaxy clusters and perturb their phase space diagrams. Among those the most important seem to be: the breaking of spherical symmetry, the presence of substructures, the finite size of the equilibrated zone and the filamentary structures surrounding the clusters and infalling towards them. In order to study the impact of such effects on the reliability of our approach to data analysis we need to construct a mock catalogue of phase space diagrams which on the one hand resembles a real spectroscopic cluster survey and on the other provides the true mass and anisotropy profiles. This can be easily achieved with the use of cosmological simulations. We used an $N$-body cosmological simulation of a $\Lambda$CDM model. The simulation was carried out in a box of $160\,h^{-1}\,{\rm Mpc}$ with WMAP3 cosmological parameters (Spergel et al. 2007): $\Omega_{m}=0.24$, $\Omega_{\Lambda}=0.76$ and the dimensionless Hubble constant $h=0.73$ (for more details see Wojtak et al. 2008). 
We identified all cluster-size haloes at redshift $z=0$ and selected $20$ of them that appeared not to be products of recent major mergers. For each halo we measured the virial mass $M_{v}$ and the virial radius $r_{v}$ which are defined in terms of the mean density within the virial sphere by the following equation $$\frac{M_{v}}{(4/3)\pi r_{v}^{3}}=\Delta_{c}\rho_{c},$$ (7) where $\Delta_{c}$ is the so-called overdensity parameter. For the cosmological model under consideration we adopted $\Delta_{c}=94$ (see e.g. Łokas & Hoffman 2001). The virial mass of the selected haloes is in the range $(1.5-20)\times 10^{14}\,{\rm M_{\odot}}$. The minimum number of particles inside the virial sphere is $4.2\times 10^{5}$. By fitting the NFW formula (4) to the density profile calculated in radial bins equally spaced on a logarithmic scale we determined the scale radius $r_{s}$ of each halo and then the concentration parameter $c=r_{v}/r_{s}$. The scale mass $M_{s}$ was measured directly as the mass enclosed by the sphere of radius $r_{s}$. We found that the mass profile extrapolated from $r_{s}$ outward to the virial sphere recovers the virial mass with a precision of a few percent. Fig. 2 shows the anisotropy profiles in the selected dark matter haloes. The profiles were measured from the velocity dispersions of dark matter particles inside thin spherical shells of radii extending to $1.5r_{v}$. Although a general trend for radial variation of the anisotropy is very clear, the profiles are considerably scattered. One can distinguish at least two classes of anisotropy profiles: steeply rising and flat or moderately rising. According to this criterion we divided our halo sample into two equally numerous groups of ten haloes each (see the left and right panel of Fig. 2). The reason for this division will become clear in section 5, where we consider constraints on the anisotropy profile from the joint analysis of many phase space diagrams. 
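Equation (7) fixes the conversion between $M_{v}$ and $r_{v}$ once $\Delta_{c}$ and $\rho_{c}$ are chosen. A minimal sketch of this conversion (the numerical value of the critical density for $h=0.73$ is our assumption and is not quoted in the text):

```python
import math

RHO_C = 2.775e11 * 0.73**2   # critical density in M_sun / Mpc^3 for h = 0.73 (assumed value)
DELTA_C = 94.0               # overdensity parameter adopted in the text

def r_virial(M_v):
    """Virial radius [Mpc] from the virial mass [M_sun], inverting eq. (7)."""
    return (3.0 * M_v / (4.0 * math.pi * DELTA_C * RHO_C)) ** (1.0 / 3.0)

def m_virial(r_v):
    """Virial mass [M_sun] within the virial radius [Mpc], eq. (7) read directly."""
    return (4.0 / 3.0) * math.pi * r_v**3 * DELTA_C * RHO_C
```

For a $10^{15}\,{\rm M_{\odot}}$ halo this gives a virial radius of a few Mpc, consistent with the massive clusters in the sample.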
In order to construct phase space diagrams associated with the haloes we pick a line of sight and find all particles inside a cylinder of observation. The selection is restricted to the particles with the projected halocentric distances $R<r_{v}$ and velocities along the line of sight $v_{\rm los}$ (with the Hubble flow included) in the range $\pm 4000$ km s${}^{-1}$ in the rest frame of a halo, which is the commonly used and sufficiently conservative velocity cut-off. The final phase space diagrams are generated by drawing randomly $300$ particles for each halo, where we assume, to make the scheme unambiguous, that the tracer is given simply by the particles. The number of data points is fixed at a typical number of spectroscopic redshifts available for a nearby galaxy cluster with the same criterion of velocity cut-off. The selection of particles is done three times for three lines of sight chosen to be parallel to the Cartesian axes of the simulation box. Thus we effectively get three independent sets of phase space diagrams for the same sample of 20 haloes. Due to projection effects, the phase space diagrams include the so-called interlopers: the particles (galaxies) of background or foreground which are not physically connected to the haloes (clusters). It is commonly accepted that any dynamical analysis should be necessarily preceded by an identification and removal of as many of these objects as possible. In this work we apply the procedure proposed by den Hartog & Katgert (1996) which appears to be one of the most effective algorithms for interloper removal (Wojtak et al. 2007; Wojtak & Łokas 2007). The method is based on an iterative rejection of galaxies with velocity exceeding a maximum velocity which is a function of the clustercentric distance $R$. 
The maximum velocity is calculated using a model which allows the galaxies to be either on a circular orbit with velocity $v_{\rm cir}=\sqrt{GM(r)/r}$ or to fall towards the cluster centre with velocity $\sqrt{2}v_{\rm cir}$, where $M(r)$ is approximated in each iteration by the virial mass estimator (Heisler, Tremaine & Bahcall 1985). The phase space diagrams preprocessed in such a way are suitable for the proper analysis in terms of the phase space density. Hereafter, when referring to the phase space diagram we will always mean the diagram after interloper removal. 4 Bayesian analysis and MCMC Our aim is to constrain the parameters of the mass and anisotropy profiles by analyzing the mock phase space diagrams generated from the simulation. Following the principles of Bayesian inference, our knowledge of the model parameters ${\textbf{\em a}}$ given the data $D$ may be expressed as the posterior probability $p({\textbf{\em a}}|D)$ which is related to the probability of obtaining the data in a given model $p(D|{\textbf{\em a}})$ (the likelihood) and the prior probability $p({\textbf{\em a}})$ via the equation $$p({\textbf{\em a}}|D)=\frac{p({\textbf{\em a}})p(D|{\textbf{\em a}})}{p(D)},$$ (8) where ${\textbf{\em a}}$ is the vector of model parameters and $D$ stands for the data. The probability $p(D)$ plays the role of a normalization coefficient and can be neglected without loss of generality. The data $D$ of a phase space diagram consist of $N$ points located at $(R_{i},v_{{\rm los},i})$, where $i=1,\dots,N$. Assuming that the number and mass density are proportional to each other, the likelihood is given by $$p(D|{\textbf{\em a}})=\prod_{i=1}^{N}f_{\rm los}(R_{i},v_{{\rm los},i}|{\textbf{\em a}}),$$ (9) where $f_{\rm los}$ is the phase space density given by (5) and ${\textbf{\em a}}=\{M_{s},r_{s},\beta_{0},\beta_{\infty},L_{0}\}$. It is worth mentioning that this formula does not take into account statistical errors of $v_{\rm los}$ and $R$. 
This is motivated by the fact that the errors of spectroscopic redshift measurements for nearby clusters are quite small and have a negligible impact on the results of the analysis (see e.g. van der Marel et al. 2000). 4.1 Priors and reparametrization The two parameters of the mass profile, $M_{s}$ and $r_{s}$, are scale parameters. The most appropriate prior probability for such parameters is the Jeffreys prior, which represents equal probability per decade (Jeffreys 1946), i.e. $p(M_{s})\propto 1/M_{s}$ and $p(r_{s})\propto 1/r_{s}$. We will use this kind of prior in the analysis of any single phase space diagram. The situation will change in subsection 5.3, where we consider the joint analysis of many phase space diagrams. The commonly used anisotropy parameter $\beta(r)$ gives rise to unequal weights for tangentially and radially biased models. In order to make the prior probability for both types of anisotropy equal we introduce, following Wilkinson et al. (2002) and Mahdavi & Geller (2004), the following reparametrization $$\ln\frac{\sigma_{r}(r)}{\sigma_{\theta}(r)}=-\frac{1}{2}\ln[1-\beta(r)].$$ (10) Using a uniform prior for $\ln(\sigma_{r}/\sigma_{\theta})_{0}=-1/2\ln(1-\beta_{0})$ and $\ln(\sigma_{r}/\sigma_{\theta})_{\infty}=-1/2\ln(1-\beta_{\infty})$ we give equal weight to all types of anisotropy. The ranges of both priors are limited by $\beta_{0}\leq 1/2$ and $\beta_{\infty}\leq 0.99$. The first condition is motivated by the requirement that the distribution function be positive (see An & Evans 2006), whereas the second one allows us to avoid an improper posterior distribution, which is important when the data favour $\beta_{\infty}\lesssim 1$. The scale of transition between the two asymptotic values of anisotropy is defined by $L_{0}$. We find that keeping this parameter free causes a degeneracy with $\beta_{\infty}$ and $\beta_{0}$. 
This considerably restricts the information on the growth or decrease of the anisotropy profile that $\beta_{0}$ and $\beta_{\infty}$ are expected to convey. To overcome this difficulty we fix $L_{0}$ at $0.2V_{s}r_{s}$, a value corresponding to the $\sim 1r_{s}$ transition scale expected from cosmological simulations (see Wojtak et al. 2008). This implies that the anisotropy profiles approach their asymptotic values at some characteristic scales of radius, namely $\beta_{0}=\beta(r\lesssim 0.1r_{s})$ and $\beta_{\infty}=\beta(r\gtrsim 10r_{s})$. Fixing $L_{0}$ reduces the dimension of the parameter space by one so that the analysis is based on the four-parameter phase space model specified by $M_{s}$, $r_{s}$, $\beta_{0}$ and $\beta_{\infty}$. 4.2 MCMC approach Our purpose is to calculate the posterior probability density and provide credibility regions in the parameter space. This involves numerical integrations of multivariate semi-Gaussian functions that can be efficiently tackled with the Markov Chain Monte Carlo (MCMC) algorithm. The main idea of this approach is to construct a sufficiently long chain of models which are distributed in the parameter space according to the posterior probability density. Once such a chain is provided, one can easily compute the marginal probability distributions by projecting all points on to an appropriate parameter subspace and evaluating the histograms. We construct the Markov chains following the Metropolis-Hastings algorithm (see e.g. Gregory 2005). In the first step of this algorithm one picks a trial point in parameter space ${\textbf{\em a}}_{t+1}$ by drawing from the so-called proposal distribution $q({\textbf{\em a}}|{\textbf{\em a}}_{t})$ centred on the previous point of the chain ${\textbf{\em a}}_{t}$. 
Then the point ${\textbf{\em a}}_{t+1}$ is accepted with probability $\textrm{min}\{1,r\}$, where $r$ is the so-called Metropolis ratio given by $$r=\frac{p({\textbf{\em a}}_{t+1}|D)q({\textbf{\em a}}_{t}|{\textbf{\em a}}_{t+1})}{p({\textbf{\em a}}_{t}|D)q({\textbf{\em a}}_{t+1}|{\textbf{\em a}}_{t})},$$ (11) otherwise it takes the value of its predecessor, ${\textbf{\em a}}_{t+1}={\textbf{\em a}}_{t}$. In the case of a model with a few parameters the proposal probability density can be any function that roughly resembles the target posterior distribution. In our work we use a multivariate Gaussian with a diagonal covariance matrix. The variances are calculated from short trial Markov chains of about $2000$ models, where our best initial guess for the parameter variances is used. This approach is a simplified version of the scheme outlined in Widrow, Pym & Dubinski (2008). It is worth mentioning that the proposal distribution is symmetric, $q({\textbf{\em a}}_{t+1}|{\textbf{\em a}}_{t})=q({\textbf{\em a}}_{t}|{\textbf{\em a}}_{t+1})$, so that the Metropolis ratio reduces to $p({\textbf{\em a}}_{t+1}|D)/p({\textbf{\em a}}_{t}|D)$. An important issue in the MCMC analysis is the relative number of distinct points in the parameter space, which is described by the so-called acceptance rate (the average rate at which proposed models are accepted). A recommended value of this parameter for many-parameter models is around $20-30$ percent (see e.g. Gregory 2005). We note that our choice of the proposal distribution keeps the acceptance rates of the resulting Markov chains within this range. Some additional modification of the proposal distribution is required in subsection 5.3, where we consider a joint analysis involving $N_{p}=22$ parameters. In order to keep the acceptance rate at a desirable level all initial variances of the proposal distribution are scaled by $2.4^{2}/N_{p}$ (see e.g. Gelman et al. 1995). 
This correction prevents the acceptance rate from dropping, due to the large number of parameters in use, to a very low value of around 1 percent. When applying the MCMC algorithm, it is crucial to check whether the chains explore the parameter space properly, i.e. whether they possess the property often referred to as mixing. To save computing time, we decided not to create several chains for each data sample, which is a commonly advised way to look for convergence. As an alternative, we follow two simple indicators of mixing. First, we calculate parameter dispersions in the two halves of a given chain and within the whole chain. If the relative differences between them do not exceed 10 percent for each parameter, the chain is considered to be mixed. Second, we monitor the variation of the posterior probability along the chains. Unless the profiles of $p({\textbf{\em a}}|D)$ exhibit a long-scale tendency to grow or decline, the chain is again expected to be mixed. We note that the so-called burn-in part of the chains, during which the first models gradually approach the most favoured zone of the parameter space, is not longer than 1 percent of the total length of the chains. All chains used in this analysis consist of $10^{4}$ models, which is above the recommended minimum of $330N_{p}$ (Dunkley et al. 2005). 4.3 Number of data points One of the leading factors affecting the posterior probability is the limited number of data points. In order to study the impact of this effect we carried out the analysis of three theoretical phase space diagrams with $300$, $3000$ and $9000$ points, which correspond to a typical size of a data sample for a single nearby galaxy cluster (the first number) and to a compilation of $10-30$ of them (the last two numbers). The diagrams were generated from a discrete representation of the full phase space distribution function with the following parameters of the anisotropy profile: $\beta_{0}=0.1$, $\beta_{\infty}=0.5$ and $L_{0}=0.2V_{s}r_{s}$. 
The phase space was sampled using the acceptance-rejection technique (Press et al. 1996) in the manner described by Kazantzidis, Magorrian & Moore (2004). Fig. 3 shows the $1\sigma$ credibility regions, i.e. the regions enclosing $68.3$ percent of the corresponding marginal probability, inferred from the MCMC analysis of the three theoretical diagrams. For the sake of simplicity, the parameters of the mass profile were rescaled by their true values. The best-fitting model for the $N=9000$ case is overplotted with solid lines on top of the smoothed contour map of the phase space density (dashed lines) in Fig. 4. From Fig. 3 we conclude that typical relative errors of $M_{s}$ and $r_{s}$ from the analysis of a single phase space diagram with $N=300$ points are about 20 percent. On the other hand, the corresponding constraints on the anisotropy parameters are very poor. The marginal distribution is so wide, particularly along $\beta_{\infty}$, that it is expected to be sensitive to any kind of noise in the real data. Anticipating the results of the following section, we emphasize that the only way to reliably constrain the anisotropy profile is to increase the number of data points. In practice, this can be achieved in a joint analysis of at least $10$ independent diagrams ($3000$ data points), where one assumes a universal anisotropy profile. The results for $N=3000$ and $9000$ in the top panel of Fig. 3 show what we can expect from such an analysis. 5 Results of the analysis We carried out the MCMC analysis of $60$ phase space diagrams of our mock data catalogue described in section 3. The results are presented in the form of the maximum a posteriori (MAP) values of the parameters and the errors that correspond to the boundary of the $1\sigma$ credibility regions of the marginal distributions, i.e. the regions enclosing $68.3$ percent of the marginal probability. 5.1 Mass profile Fig. 5 shows the results for the parameters of the mass profile, $M_{s}$ and $r_{s}$. 
From the diagrams showing residuals, attached below the main plots, we conclude that in half of the cases the relative errors do not exceed 25 percent (see the lines of the first and third quartiles of the residuals). On the other hand, around 15 percent of the results differ from the true values by more than 50 percent, which can hardly be reconciled with the expected effect of shot noise. Analyzing a few tens of theoretical phase space diagrams from subsection 4.3 with $N=300$ data points, we find that residuals induced by shot noise do not exceed 30 percent. This suggests that the most outlying points in Fig. 5 are probably subject to non-negligible systematic errors. Potential sources of these errors include projection effects as well as effects associated with internal structure. We looked for an influence of the shape of dark matter haloes and found no correlation between the accuracy of the parameter estimates and the halo ellipticity or the alignment of the major axis with respect to the line of sight. All Markov chains can be easily transformed into chains of models parametrized by the virial mass $M_{v}$ and the concentration $c$. The resulting credibility ranges and the MAP values of these parameters are shown in Fig. 6. We find that the scatter in the residuals is comparable to the case where the scale parameters were used, and the positions of the most discrepant points imply the presence of similar systematic errors. There is noticeable progress in the constraints on the concentration parameter in comparison to previous work based on modelling velocity moments (Sanchis et al. 2004; Łokas et al. 2006). On the other hand, there is no convincing evidence for a similar improvement for the virial mass. The MAP values of the mass parameters typically underestimate the true values by 15 percent and 11 percent for the virial and scale mass, respectively (see the positions of the median lines in the diagrams showing residuals in Figs. 6 and 5). 
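The residual statistics quoted above (the quartile lines and the median offset) can be computed, for example, as follows (a sketch; `residual_stats` is a hypothetical helper, not the authors' code):

```python
import numpy as np

def residual_stats(estimated, true):
    """Relative residuals (estimated - true) / true and their quartiles,
    as used to summarize the accuracy of the parameter estimates."""
    res = (np.asarray(estimated) - np.asarray(true)) / np.asarray(true)
    q1, med, q3 = np.percentile(res, [25, 50, 75])
    return res, q1, med, q3
```

A negative median indicates the systematic underestimation discussed in the text.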
Interestingly, a similar offset was reported by Biviano et al. (2006) for the virial mass estimated from the Jeans analysis of velocity dispersion profiles. They suggested that this bias is related to the presence of interlopers infalling towards the cluster along filaments. Due to their relatively small velocities in the rest frame of a cluster, these objects cannot be identified by any algorithm of interloper removal and, therefore, remain in the sample, decreasing the velocity dispersion and the resulting virial mass. We think that a similar mechanism is likely responsible for the offset of our results. Nevertheless, we emphasize that this bias is smaller than the statistical errors obtained in the MCMC analysis, so the overall effect is not statistically significant. It is a well-known fact that the concentration parameter is weakly correlated with the virial mass (e.g. Navarro et al. 1997; Bullock et al. 2001). This so-called mass-concentration relation is an imprint of the formation history and is therefore thought to be one of the predictions of the cosmological model. Fig. 7 demonstrates the prospects of recovering this relation with our approach. Each panel in this Figure shows the results obtained for 20 phase space diagrams for a given direction of observation (filled circles with error bars) and the best power-law fit (solid line). For comparison with the prediction of the $\Lambda$CDM model, we plot the true parameters of our 20 clusters (open circles) and, with a dashed line, the power-law fit to the mean mass-concentration relation for relaxed haloes simulated in the framework of the WMAP3 cosmology from Macciò, Dutton & van den Bosch (2008). We find that, within the errors, the empirical $M_{v}$-$c$ relation is consistent with a flat profile in all cases. Only for one projection (top panel) does the slope deviate from the expected value of $-0.1$ by more than two sigma. 
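A power-law fit of the mass-concentration relation, as plotted in Fig. 7, amounts to a straight-line fit in log-log space; it can be sketched as follows (an illustration, not the authors' code; the pivot mass value is an assumption for the example):

```python
import numpy as np

def fit_mass_concentration(m_v, c, m_pivot=1e14):
    """Least-squares power-law fit c = c0 * (M_v / m_pivot)**alpha,
    performed as a linear fit of log10(c) against log10(M_v / m_pivot)."""
    x = np.log10(np.asarray(m_v) / m_pivot)
    y = np.log10(np.asarray(c))
    alpha, log_c0 = np.polyfit(x, y, 1)
    return 10**log_c0, alpha
```

The fitted slope `alpha` is the quantity compared in the text with the expected value of $-0.1$.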
The normalization within the mass range under consideration does not exhibit any offset with respect to the prediction, so that a future determination of this quantity for real clusters seems feasible. This result is particularly important in the context of the recently discussed inconsistency between the normalization from simulations and observational constraints (see e.g. Comerford & Natarajan 2007). 5.2 The anisotropy profile Fig. 8 shows the results of the MCMC analysis for the anisotropy parameters. Since the phase space model is well-defined only within the virial sphere, we decided to replace the parameter $\beta_{\infty}$ with $\beta_{5r_{s}}$ measured at $5r_{s}$, which is the typical scale of the virial radius. This allows us to avoid extrapolation of the model beyond the virial sphere. For comparison, the Figure also shows in gray boxes the range of anisotropy values at $0.1r_{s}$ and $5r_{s}$ in the two samples of clusters introduced in section 3, which differ in the steepness of their anisotropy profiles. The majority of results for $\beta_{0}$ are consistent with the true values. The errors are quite large, however, so they would not allow one to distinguish between radially and tangentially biased anisotropy. We emphasize that all $1\sigma$ credibility ranges are clearly detached from the upper boundary of the prior probability, $\beta=0.5$. This means that this limit is not artificial but is embedded in the phase space structure of an equilibrated system. The anisotropy at $5r_{s}$ is poorly constrained. Although there is a weak tendency for the results to cluster around the true values, most of them are shifted considerably upwards or downwards. An important circumstance responsible for this bias is the fact that the posterior probability distribution is so wide in the space of anisotropy parameters that it is sensitive to any irregularities in the phase space diagram. 
These irregularities often occur in the outermost part of the diagrams, mostly due to subclustering and the presence of small velocity interlopers from outside the virial sphere. The overall consequence is that the posterior probability is often perturbed in $\beta_{\infty}$ (or $\beta_{5r_{s}}$) and typically stable in $\beta_{0}$. Due to the significant systematic errors of $\beta_{5r_{s}}$ (or $\beta_{\infty}$), the constraints on the radial variation of the anisotropy for a single galaxy cluster are very weak. Most probably this caveat cannot be avoided in any approach aiming to determine the anisotropy profile of galaxies in a cluster (see e.g. Hwang & Lee 2008). 5.3 The anisotropy profile from the joint analysis The only possibility to obtain reliable constraints on the anisotropy profile is to increase the number of data points used in the analysis. In subsection 4.3 we showed that a sample of $3000$ data points is expected to provide conclusive results in this matter. When dealing with galaxy clusters, such an increase in the amount of data can at present only be achieved by means of a joint analysis of several clusters ($\sim 10$ clusters). An additional advantage of this approach is that all local irregularities of the phase space diagrams compensate each other, minimizing systematic errors. A common approach to a joint analysis of several clusters consists of two stages. First, one merges all phase space diagrams with properly rescaled radii $R$ and velocities $v_{\rm los}$. As the units of the phase space coordinates one most often uses the virial radius $r_{v}$ and the so-called virial velocity $V_{v}=(GM_{v}/r_{v})^{1/2}$. Then, assuming homology of the cluster sample, the resulting phase space diagram is reanalyzed in the framework of a model reduced by the parameters that were used to scale the radii and velocities. 
The final constraints obtained in this approach are thought to represent typical properties of the galaxy clusters in a given sample (e.g. van der Marel et al. 2000; Mahdavi & Geller 2004; Łokas et al. 2006). Our approach differs from the above scheme in both respects. First, instead of merging many phase space diagrams we introduce the following generalization of the likelihood function (9) for the joint analysis of $n$ phase space diagrams $$f(M_{s,1},\dots,M_{s,n},r_{s,1},\dots,r_{s,n},\beta_{0},\beta_{\infty})=\prod_{j=1}^{n}\prod_{i=1}^{N}f_{\rm los}(R_{j,i},v_{{\rm los},j,i}|\{M_{s,j},r_{s,j},\beta_{0},\beta_{\infty}\}),$$ (12) where $j$ and $i$ are, respectively, the reference number of a cluster and of a data point of the $j$-th phase space diagram. The likelihood as well as the posterior probability distribution are functions of $2n+2$ parameters (the parameter $L_{0}$ is fixed as discussed in subsection 4.1). We assume that the two parameters of the anisotropy profile are common to all clusters, so that the final result concerning both of them is expected to provide the typical variation of the anisotropy in a given cluster sample. Our knowledge of $M_{s,i}$ and $r_{s,i}$ obtained in the previous analysis (see Fig. 5) is incorporated in the prior probability. For the sake of maintaining an analytical form of the prior, we used the product of $2n$ Gaussian distributions centred at the MAP values of $M_{s,i}$ and $r_{s,i}$. The dispersion of each Gaussian was fixed at the dispersion of the corresponding marginal probability distribution. It is interesting to note that if we arbitrarily ignored the credibility regions of the nuisance parameters $M_{s,i}$ and $r_{s,i}$, our approach would be conceptually very similar to the case when all phase space diagrams are merged, except that radii and velocities would be scaled by $r_{s}$ and $V_{s}$. 
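The joint posterior built from the likelihood (12) and the Gaussian priors on the nuisance parameters can be sketched in log space as follows (an illustration; `f_los` stands for the projected phase space density, which is a placeholder here and not implemented):

```python
import numpy as np

def joint_log_posterior(params, data, f_los, prior_mu, prior_sigma):
    """Joint log-posterior for n phase space diagrams (cf. equation 12).

    params = (M_s[0..n-1], r_s[0..n-1], beta_0, beta_inf); the two
    anisotropy parameters are shared by all clusters.  The prior on the
    nuisance parameters M_s,j and r_s,j is a product of 2n Gaussians
    centred at the MAP values from the single-cluster analysis.
    data is a list of (R, v_los) arrays, one pair per cluster, and
    f_los(R, v, M_s, r_s, beta_0, beta_inf) is a placeholder density."""
    params = np.asarray(params, dtype=float)
    n = len(data)
    m_s, r_s = params[:n], params[n:2 * n]
    beta_0, beta_inf = params[2 * n], params[2 * n + 1]
    logp = 0.0
    for j, (R, v) in enumerate(data):
        logp += np.sum(np.log(f_los(R, v, m_s[j], r_s[j], beta_0, beta_inf)))
    # Gaussian prior on the 2n nuisance parameters.
    nuis = params[:2 * n]
    logp += -0.5 * np.sum(((nuis - prior_mu) / prior_sigma) ** 2)
    return logp
```

This function could be passed directly to the Metropolis sampler sketched earlier; the posterior is a function of $2n+2$ parameters, as in the text.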
We analyzed two samples of clusters, with $n=10$ objects each, observed in three directions as described in section 3. The samples separate the objects with distinctly rising anisotropy profiles from those with flat or moderately rising ones. This provides an opportunity to verify whether our approach is able to distinguish between these two cases. Fig. 9 shows the $1\sigma$ credibility regions in the $\beta_{0}$-$\beta_{\infty}$ plane inferred from the Markov chains of $10^{4}$ models. First, we notice that all results are consistent with a velocity dispersion tensor that is isotropic in the cluster centre and radially biased outside. Second, the resulting anisotropy profiles are clearly steeper for clusters with rising anisotropy profiles (top panel) and flatter for the others (bottom panel). In order to check the radial variation of the anisotropy, in Fig. 10 we plot the $1\sigma$ credibility regions of its radial profiles. The resulting anisotropy effectively traces the true profiles (solid lines) within a radial range covering more than two orders of magnitude around $r_{s}$. A local deviation from the true values occurs only once (middle right panel for $r<0.5r_{s}$), as a result of the relatively low quality of this data sample (see the results shown with asterisks in Fig. 8). Since the constraints on $\beta_{0}$ are considerably tighter than on $\beta_{\infty}$, the statistical errors of the local anisotropy typically increase with radius. Nevertheless, it is still possible to distinguish between steeply rising (left panels) and flat (right panels) profiles. Note that the accuracy achieved in this analysis is close to the upper limit, since the statistical errors become comparable to the internal dispersion of the anisotropy profiles in the cluster samples (see e.g. the bottom right panel of Fig. 10). Still, one may expect to improve the results and minimize systematic errors by adding more clusters. 
6 Discussion We have performed a Bayesian analysis of mock phase space diagrams of galaxy clusters in terms of a fully anisotropic model of the phase space density. Our approach allows us to constrain the parameters of the total mass profile, $M_{s}$ and $r_{s}$, as well as the asymptotic values of the anisotropy profile, $\beta_{0}$ and $\beta_{\infty}$. The phase space model was designed to detect the radial variation of the anisotropy profile within a fixed distance range covering two orders of magnitude in radius around $r_{s}$. The choice of this radial range is motivated by the results from cosmological simulations (Wojtak et al. 2008) and by our goal to avoid degeneracy between radial scales and the asymptotes of the anisotropy. Parameters of the mass profile are determined in our approach with a rather satisfying average accuracy of about 30 percent. On the other hand, around 15 percent of the results are probably subject to systematic errors and differ from the true values by $50-90$ percent. Nevertheless, it is worth noting that the typical relative errors of 30 percent in the virial mass are comparable to the potential accuracy of mass determination from other methods, such as modelling of X-ray gas (e.g. Nagai, Vikhlinin & Kravtsov 2007), analysis of velocity moments (Sanchis, Łokas & Mamon 2004) or the standard virial mass estimator (e.g. Biviano et al. 2006). The constraints on the concentration parameter are, however, more reliable and tighter in comparison to the approach based on velocity moments (Sanchis, Łokas & Mamon 2004; Łokas et al. 2006). We find that the scale mass $M_{s}$ and the virial mass $M_{v}$ tend to be underestimated on average by 11 and 15 percent, respectively. It is very likely that this bias is caused by small velocity interlopers, as described in Biviano et al. (2006), which are not tractable by algorithms for interloper removal and effectively decrease the velocity dispersion of the system. 
Interestingly, a similar offset in the virial mass estimates was noticed in the analysis of mock X-ray data of clusters (e.g. Ameglio et al. 2009; Nagai et al. 2007). However, this agreement is probably coincidental, since these authors claim that their bias is due to the lack of hydrostatic equilibrium in the outer parts of clusters. The constraints on the anisotropy profile of a single cluster are barely conclusive. The reason for this is the presence of substructures at large clustercentric distances, which gives rise to significant systematic errors of $\beta_{\infty}$. The final effect is that, although the central asymptotic anisotropy is determined rather precisely, the overall constraint on the anisotropy profile is rather poor. This situation changes in the case of a joint analysis of several phase space diagrams. We find that it is then easy to obtain reliable constraints on the radial variation of the anisotropy within a radial range of two orders of magnitude around the scale radius $r_{s}$. Note that we adopted the phase space scaling that is consistent with the intrinsic parameters of the mass profile, $M_{s}$ and $r_{s}$, in contrast to the commonly used assumption that a sample of phase space diagrams is homologous with respect to scaling by the virial radius and virial velocity (e.g. van der Marel et al. 2000; Łokas et al. 2006). In this work we demonstrated the potential and discussed the reliability of the analysis of kinematic data of galaxy clusters in terms of an anisotropic phase space density model. The method is able to provide robust constraints on the parameters of the total mass profile and the mean anisotropy profile. The results could certainly contribute to tests of the mass-concentration relation as well as improve constraints on the mean anisotropy profile of galaxy clusters (e.g. van der Marel et al. 2000; Biviano & Katgert 2004; Łokas et al. 2006). 
This will be the subject of a follow-up work where we analyze real kinematic data for a sample of $\sim 20$ nearby galaxy clusters. Acknowledgments The simulations have been performed at the Altix of the LRZ Garching. RW is grateful for the hospitality of Institut d’Astrophysique de Paris and Astrophysikalisches Institut Potsdam, where parts of this work were done. This work was partially supported by the Polish Ministry of Science and Higher Education under grant NN203025333 as well as by the Polish-German exchange program of Deutsche Forschungsgemeinschaft and the Polish-French collaboration program of LEA Astro-PF. RW acknowledges support from the START Fellowship for Young Researchers granted by the Foundation for Polish Science.
References
Ameglio S., Borgani S., Pierpaoli E., Dolag K., Ettori S., Morandi A., 2009, MNRAS, 394, 479
An J. H., Evans N. W., 2006, ApJ, 642, 752
Ascasibar Y., Gottlöber S., 2008, MNRAS, 386, 2022
Baes M., van Hese M., 2007, A&A, 471, 419
Benatov L., Rines K., Natarajan P., Kravtsov A., Nagai D., 2006, MNRAS, 370, 427
Binney J., Mamon G. A., 1982, MNRAS, 200, 361
Binney J., Tremaine S., 2008, Galactic Dynamics. Princeton Univ. Press, Princeton
Biviano A., Katgert P., 2004, A&A, 424, 779
Biviano A., Murante G., Borgani S., Diaferio A., Dolag K., Girardi M., 2006, A&A, 456, 23
Bullock J. S., Kolatt T. S., Sigad Y., Somerville R. S., Kravtsov A. V., Klypin A. A., Primack J. R., Dekel A., 2001, MNRAS, 321, 559
Chanamé J., Kleyna J., van der Marel R., 2008, ApJ, 682, 841
Cole S., Lacey C., 1996, MNRAS, 281, 716
Comerford J. M., Natarajan P., 2007, MNRAS, 379, 190
Dejonghe H., Merritt D., 1992, ApJ, 391, 531
den Hartog R., Katgert P., 1996, MNRAS, 279, 349
Dunkley J., Bucher M., Ferreira P. G., Moodley K., Skordis C., 2005, MNRAS, 356, 925
Gelman A., Carlin J. B., Stern H. S., Rubin D. B., 1995, Bayesian Data Analysis. Chapman & Hall, London
Gerhard O., Jeske G., Saglia R. P., Bender R., 1998, MNRAS, 295, 197
Gregory P. C., 2005, Bayesian Logical Data Analysis for the Physical Sciences. Cambridge Univ. Press, Cambridge
Heisler J., Tremaine S., Bahcall J. N., 1985, ApJ, 298, 8
Hwang H. S., Lee M. G., 2008, ApJ, 676, 218
Jeffreys H., 1946, Proc. Roy. Soc. A, 186, 453
Kazantzidis S., Magorrian J., Moore B., 2004, ApJ, 601, 37
Kent S. M., Gunn J. E., 1982, AJ, 87, 945
Łokas E. L., 2002, MNRAS, 333, 697
Łokas E. L., Hoffman Y., 2001, in Spooner N. J. C., Kudryavtsev V., eds, Proc. 3rd International Workshop, The Identification of Dark Matter. World Scientific, Singapore, p. 121
Łokas E. L., Mamon G. A., 2001, MNRAS, 321, 155
Łokas E. L., Mamon G. A., 2003, MNRAS, 343, 401
Łokas E. L., Wojtak R., Gottlöber S., Mamon G. A., Prada F., 2006, MNRAS, 367, 1463
Macciò A. V., Dutton A. A., van den Bosch F. C., 2008, MNRAS, 391, 1940
Mahdavi A., Geller M. J., 2004, ApJ, 607, 202
Mamon G. A., Łokas E. L., 2005, MNRAS, 363, 705
Merrifield M. R., Kent S. M., 1990, AJ, 99, 1548
Merritt D., 1987, ApJ, 313, 121
Merritt D., Saha P., 1993, ApJ, 409, 75
Nagai D., Vikhlinin A., Kravtsov A. V., 2007, ApJ, 655, 98
Natarajan P., Kneib J.-P., 1996, MNRAS, 283, 1031
Navarro J. F., Frenk C. S., White S. D. M., 1997, ApJ, 490, 493
Press W. H., Teukolsky S. A., Vetterling W. T., Flannery B. P., 1996, Numerical Recipes in Fortran 77. Cambridge Univ. Press, Cambridge
Sanchis T., Łokas E. L., Mamon G. A., 2004, MNRAS, 347, 1198
Schwarzschild M., 1979, ApJ, 232, 236
Solanes J. M., Salvador-Solé E., 1990, A&A, 234, 93
Spergel D. N., Bean R., Doré O. et al., 2007, ApJS, 170, 377
van der Marel R., Franx M., 1993, ApJ, 407, 525
van der Marel R. P., Magorrian J., Carlberg R. G., Yee H. K. C., Ellingson E., 2000, AJ, 119, 2038
Van Hese E., Baes M., Dejonghe H., 2009, ApJ, 690, 1280
Widrow L. M., Pym B., Dubinski J., 2008, ApJ, 679, 1239
Wilkinson M. I., Kleyna J., Evans N. W., Gilmore G., 2002, MNRAS, 330, 778
Wojtak R., Łokas E. L., 2007, MNRAS, 377, 843
Wojtak R., Łokas E. L., Gottlöber S., Mamon G. A., 2005, MNRAS, 361, L1
Wojtak R., Łokas E. L., Mamon G. A., Gottlöber S., Prada F., Moles M., 2007, A&A, 466, 437
Wojtak R., Łokas E. L., Mamon G. A., Gottlöber S., Klypin A., Hoffman Y., 2008, MNRAS, 388, 815
Fluid limits for earliest-deadline-first networks Rami Atar Viterbi Faculty of Electrical Engineering, Technion – Israel Institute of Technology, Haifa 32000, Israel    Yonatan Shadmi${}^{*}$ Abstract This paper analyzes fluid scale asymptotics of two models of generalized Jackson networks employing the earliest deadline first (EDF) policy. One applies the ‘soft’ EDF policy, where deadlines are used to determine priority but jobs do not renege, and the other implements ‘hard’ EDF, where jobs renege when deadlines expire, and deadlines are postponed with each migration to a new station. The arrival rates, deadline distribution and service capacity are allowed to fluctuate over time at the fluid scale. Earlier work on EDF network fluid limits, used as a tool to obtain stability of these networks, addressed only the soft version of the policy, and moreover did not contain a full fluid limit result. In this paper, tools that extend the notion of the measure-valued Skorokhod map are developed and used to establish for the first time fluid limits for both the soft and hard EDF network models. AMS subject classifications: 60K25, 60G57, 68M20 Keywords: measure-valued Skorokhod map, measure-valued processes, fluid limits, earliest deadline first, generalized Jackson networks 1 Introduction This work continues a line of research initiated in [4] and expanded in [3], in which a Skorokhod map (SM) is used in conjunction with stochastic evolution equations in measure space, to characterize scaling limits of queueing systems. Specifically, the measure-valued Skorokhod map (MVSM) of [4], an infinite-dimensional analogue of various well-known SMs in the finite-dimensional orthant, is a tool for analyzing policies that prioritize according to a continuous parameter. 
This notion has yielded new results on fluid, or law of large numbers (LLN), scaling limits of queueing systems implementing the policies earliest deadline first (EDF), shortest job first (SJF) and shortest remaining processing time (SRPT) [4], as well as EDF in a many-server scaling [3]. In the case of EDF, the state of the system is given by a measure $\xi$ on ${\mathbb{R}}$, where $\xi(B)$ expresses the queue length associated with all jobs in the buffer that have deadlines in the set $B$, for $B\subset{\mathbb{R}}$ a Borel set. The aforementioned continuous parameter corresponds to a job’s deadline, and the MVSM encodes priority by enforcing the rule that, for every $x\in{\mathbb{R}}$, work with deadline $>x$ can be transferred from the buffer to the server only at times when $\xi((-\infty,x])=0$, that is, when there is no work in the buffer associated with deadlines $\leq x$. Thus the MVSM of [4] is useful for the analysis of a single node. The goal of this paper is to address EDF in a network setting. To this end, two multidimensional counterparts of the MVSM are developed, involving multiple measure-valued processes: one consists of a map that combines properties of the MVSM and of the SM with oblique reflection in the orthant (SMOR), and in the other the MVSM serves as a building block in a system of equations. EDF is a rational choice of service policy for systems in which job urgency is quantifiable. Its practical significance stems from its use in real time applications such as telecommunication networks [1], real time operating systems [9], and emergency department operations [18]. The policy has two main versions: • soft EDF: all jobs are served whether their deadlines expire or not; deadlines serve only to determine the priority of a job, • hard EDF: deadlines determine priority, and jobs whose deadlines expire renege from the system. 
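As an informal illustration of the two versions, the following discrete-event sketch simulates a non-preemptive single-server EDF queue (a toy example, not the network model analyzed in this paper; the job representation and the reneging rule are simplifying assumptions):

```python
import heapq

def edf_queue(jobs, hard=False):
    """Discrete-event sketch of a non-preemptive single-server EDF queue.

    jobs: list of (arrival_time, service_time, deadline) tuples.
    Under soft EDF every job is eventually served; under hard EDF a job
    reneges if its deadline expires before its service begins."""
    jobs = sorted(jobs)  # order by arrival time
    buffer, served, reneged = [], [], []
    t, i = 0.0, 0
    while i < len(jobs) or buffer:
        # Admit all arrivals up to the current time.
        while i < len(jobs) and jobs[i][0] <= t:
            arr, svc, dl = jobs[i]
            heapq.heappush(buffer, (dl, arr, svc))  # priority = deadline
            i += 1
        if not buffer:
            t = jobs[i][0]  # server idles until the next arrival
            continue
        dl, arr, svc = heapq.heappop(buffer)
        if hard and dl < t:
            reneged.append(arr)  # deadline expired while waiting
            continue
        t += svc
        served.append(arr)
    return served, reneged
```

Running the same job list with `hard=False` and `hard=True` makes the difference between the two disciplines visible: the soft queue serves every job in deadline order, while the hard queue drops those whose deadlines expire in the buffer.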
The soft and hard versions of this policy are also known in the literature as EDF without and with reneging, respectively, and EDF is also referred to as earliest due date first served. A further subdivision is according to whether service is preemptive or non-preemptive; in this paper we restrict our attention to the non-preemptive disciplines. Although fluid limits of soft EDF queueing networks have been studied before in relation to the question of their stability in [8], [19], these papers established only partial results as far as convergence was concerned (most significantly, for stability analysis only convergence along subsequences is required, and indeed only such convergence was proved; see more details below). In this paper, the two aforementioned counterparts of the MVSM yield for the first time fluid limits for both the soft and the hard versions of EDF networks. There has been great interest in this policy in theoretical studies. It has been argued to possess optimality properties, and much effort has been devoted to studying its scaling limits [24], [14], [13], [21], [20], [22], [2], [6], mostly in the single node case. The hard EDF policy was shown to be optimal with respect to the expected number of reneging jobs [27], for the $M/G/1+G$ and $G/D/1+G$ queues. It was also shown that if there exists an optimal policy for the $G/G/1+G$ queue then EDF is optimal. Optimality properties of EDF were further studied with respect to cumulative reneged work [22] as well as the steady state probability of loss [26]. Fluid models of multi-class soft EDF Jackson networks were studied in [20], focusing on invariant states of the dynamical system and characterizing its invariant manifold. As for scaling limits, most effort has been devoted to the heavy traffic regime. The paper [24] studied diffusion limits of soft EDF queues under heavy traffic assumptions as Markov processes in Banach spaces. 
An expression for the lead time profile given the queue length was obtained. An important line of research started from [14], a paper that pioneered (along with [15]) the use of measure-valued processes for scaling limits of queueing-related models. In this paper the diffusion limit of soft EDF queues was characterized. Further significant development came in [22], which used SM techniques to show convergence of a hard EDF queue to a diffusion limit. Specifically, the total workload process was shown to converge to a doubly reflected Brownian motion, and the limit of the underlying state process, which is again a measure-valued process, is given in terms of this reflected Brownian motion. The fact that the total workload in the system is sufficient to recover the complete state of the system is due to a certain state space collapse (SSC) phenomenon, which remarkably generalizes the well-known SSC for priority queues. SSC, as a phenomenon and as a tool for proving convergence, is unique to the heavy traffic diffusion regime, and thus cannot be of help in the fluid limit regime that is of interest in this paper. Hard EDF fluid limits were established in [13] and [2]. These papers considered the $M/M/1+G$ and, respectively, $G/G/1+G$ models. The paper [4] introduced the MVSM and used it to establish fluid limits for several models, including soft and hard EDF. The paper [3] employed this approach to analyze EDF in a many-server fluid limit regime. Scaling limits of EDF in a network setting were studied in [25] and [23] (in heavy traffic) and in [8] and [19] (at fluid scale). The paper [25] extended [24] to a network setting. Given the queue lengths at the nodes, the Fourier transform of the lead time distribution was represented as the solution of a fixed point equation. The paper [23] extended [14] to multi-class acyclic networks. Much closer to the subject of this paper are the results on fluid scale limits established in [8] and [19]. 
It is well known that fluid models are useful in proving stability of queueing networks, by reducing the problem of stability of the stochastic dynamical system representing the queueing network to the stability of all solutions to a deterministic dynamical system, usually represented in terms of deterministic evolution equations, comprising the fluid model. This approach, which was first formulated in general terms in [12], was taken in [8] to establish the stability of soft EDF networks, in fact in the broader, multi-class setting, provided that they are sub-critically loaded. The approach is based on showing stability properties of any solution to the fluid model equations, and does not require that uniqueness holds for these solutions (for a given initial condition). As a consequence, uniqueness of solutions to the fluid model equations is not required, nor is it established in [8]. A related issue is that when using this method it suffices to establish convergence of the rescaled processes along subsequences. Indeed, fluid limits are proved only along subsequences, and so the full convergence to a fluid limit is not established there. It is also important to point out that the scaling in [8] is such that the gap between time of arrival and deadline vanishes in the limit. Consequently, the asymptotics are indistinguishable from those under a policy that prioritizes by order of arrival, namely first in system first out (FISFO). In contrast, under our scaling the gaps alluded to above remain of order one, and are therefore captured by the limiting dynamics. In this we follow the treatment in [4] and [3]. This aspect is also similar to the nature of the limit dynamics in most of the aforementioned papers on EDF in heavy traffic, where the limiting system’s state is given by a nondegenerate measure that accounts for a variety of deadlines. 
The stability problem was also studied via the fluid limit approach in [19] in the more complicated case of preemptive multiclass EDF networks, under the assumption that customers have fixed routes through the system. In [19], too, it was not claimed, nor is it obvious, that the fluid model equations uniquely characterize the fluid model solutions for a given initial condition, and limits were only established along subsequences. The same paper also studied the stability of hard EDF networks (as well as several other network models with reneging), though not via fluid models, hence it did not address the problem of fluid limits of hard EDF networks. Our first main result is the fluid limit for soft EDF Jackson networks. The approach is based on a tool that we develop, which combines properties of the aforementioned MVSM, which encodes the priority structure, and the SMOR, which encodes the structure of the flow within the network. It is well known since [29] that the heavy traffic asymptotics of Jackson networks are given by a reflected Brownian motion in the orthant with an oblique reflection vector field, a process that can be represented in terms of a corresponding SMOR. The same SMOR has been used since then to study Jackson networks in other asymptotic regimes, namely fluid limits [11, Ch. 7] and large deviations [5]. The way we use it in this paper is as follows. We represent the state of the system in terms of a vector measure (an ${\mathbb{R}}^{d}$ valued set function) $\xi$ on ${\mathbb{R}}$, where for a Borel set $B\subset{\mathbb{R}}$, $\xi(B)=(\xi^{i}(B))$ expresses the workload associated with jobs having deadlines in $B$ in the various buffers $i$. The resulting extended version of the MVSM, which we call the vector measure valued SM (VMVSM), is a transformation on the space of trajectories with values in the space of vector measures on ${\mathbb{R}}$. 
It has the property that for each deadline level $x\in{\mathbb{R}}_{+}$, the trajectory $t\mapsto\xi_{t}([0,x])$ is given as the image of the data of the problem (a suitable version of the so-called netput process) under a SMOR. This provides a generalization of the concept from [4]. The convergence result is valid under very mild probabilistic assumptions. Note that fluid limits of soft EDF networks immediately imply fluid limits of FISFO networks, if the deadlines are set to be the arrival times. Our second contribution is the treatment of hard EDF networks. The model is considerably more difficult as far as scaling limits are concerned, a fact reflected by the very small number of papers on the subject. In particular, the method of [4] to handle hard EDF for a single node does not extend to networks (in [4] uniqueness for the fluid model solutions is obtained via pathwise minimality, a property that does not generalize to networks: counterexamples can be constructed). In our treatment we make two model assumptions that differ from the soft EDF model, besides the obvious difference between the two policies. First, the deadlines of a job are not fixed but increase whenever it migrates to a new service station. The problem appears to be considerably harder with fixed deadlines, owing to the regularity of the arrival processes into each station. That is, in a model with fixed deadlines, when one of the nodes becomes supercritical, jobs begin to renege at some point, causing a potentially large number of jobs to arrive at other stations very close to their deadline. Consequently, small (asymptotically negligible) perturbations in deadlines may result in large (fluid scale) differences in the system’s state. We leave open the important question of establishing fluid limits with fixed deadlines. Under the dynamic deadline assumption we are able to harness the single node results from [4] to the network setting.
An important part of the proof is devoted to establishing regularity properties of the arrival processes into each station, composed of exogenous and endogenous arrival processes (due to routing within the system), affected by the reneging in a nontrivial way. The aforementioned assumption allows us to use an argument by induction over squares in the time-deadline plane, where at each step the size of the square side is increased. This is used in treating both the fluid model equations and the convergence result. Second, the service time distributions are assumed to have bounded support. This simplifies the notation in handling a technical point, namely the possibility that, while a job is in service, its (extended) deadline expires in the station it may be routed to. The assumption is not difficult to remove, but we keep it because it simplifies the form of the model equations. Finally, we note that the techniques developed in this paper are likely to be useful for networks implementing other related disciplines, such as shortest job first (SJF). The organization of this paper is as follows. At the end of this section we introduce notation used throughout. In §2 we describe the VMVSM and the soft EDF queueing network, and then state and prove the convergence result. In §3 we first describe the fluid model equations and prove uniqueness of solutions to these equations; then we formulate the queueing network model under hard EDF, state the main convergence result and provide its proof. Notation In what follows, ${\mathbb{R}}_{+}=[0,\infty)$. For a Polish space ${\cal S}$, $\mathbb{C}_{\cal S}$ and $\mathbb{D}_{\cal S}$ denote the spaces of continuous and, respectively, càdlàg functions from ${\mathbb{R}}_{+}$ into ${\cal S}$; if ${\cal S}=\mathbb{R}$ we simply write $\mathbb{C}$ and $\mathbb{D}$.
Denote by $\mathbb{D}_{+}$, respectively $\mathbb{D}^{\uparrow}$, the subset of $\mathbb{D}$ of non-negative functions, respectively of non-decreasing and non-negative functions, and apply a similar notation to $\mathbb{C}$. We also define $$\displaystyle\mathbb{D}_{0}^{K}=\left\{f\in\mathbb{D}^{K}:\quad f^{i}\left(0\right)\geq 0\text{ for all }1\leq i\leq K\right\}.$$ ${\cal M}$ denotes the space of finite Borel measures on $\mathbb{R}_{+}$, endowed with the topology of weak convergence. Its subset of atomless measures is denoted by ${\cal M}_{\sim}$. It is well known that the topology of weak convergence is metrized by the Lévy-Prohorov metric, denoted $d_{\cal L}$. Define the subset of $\mathbb{D}_{\cal M}$ of non-decreasing elements as $\mathbb{D}_{\cal M}^{\uparrow}=\{\zeta\in\mathbb{D}_{\cal M}$: $t\mapsto\int_{{\mathbb{R}}_{+}}f(x)\zeta_{t}(dx)$ is non-decreasing for any continuous, bounded, non-negative function $f\}$. With a slight abuse of notation we denote $\mathbb{D}^{\uparrow K}=\left(\mathbb{D}^{\uparrow}\right)^{K}$ and $\mathbb{D}_{{\cal M}}^{\uparrow K}=\left(\mathbb{D}_{{\cal M}}^{\uparrow}\right)^{K}$. The support of a measure $\zeta\in{\cal M}$ is denoted by $\text{supp}\left[\zeta\right]$. For $\zeta\in{\cal M}$ we write $\zeta[a,b]$ for $\zeta([a,b])$, and similarly for $[a,b)$, etc. We use the convention that for $a>b$, $\left[a,b\right]=\emptyset$. $\mathbb{D}$ and $\mathbb{D}_{\cal M}$ are equipped with the corresponding $J_{1}$ Skorokhod topologies, and $\mathbb{D}^{K}$ and $\mathbb{D}_{\cal M}^{K}$ are equipped with the product topologies. For $x\in{\mathbb{R}}^{K}$, $x^{i}$ denotes the $i$-th component, and $\left\lVert{x}\right\rVert=\max_{1\leq i\leq K}\left|x^{i}\right|$. For $x\in\mathbb{D}^{K}$ denote $\left\lVert{x}\right\rVert_{T}=\sup_{t\in\left[0,T\right]}\left\lVert{x\left(t\right)}\right\rVert$.
The modulus of continuity of a function $f:\mathbb{R}_{+}\to\mathbb{R}$ is denoted by $$\displaystyle w_{T}\left(f,\epsilon\right)=\sup\{\left|f\left(s\right)-f\left(t\right)\right|:s,t\in\left[0,T\right],\left|s-t\right|\leq\epsilon\}.$$ 2 Soft EDF networks In this section we develop the VMVSM, which combines elements of the MVSM and the SMOR. We then use it to represent and establish the fluid limit of soft EDF networks. We begin by introducing, in §2.1.1, the MVSM, and then a Skorokhod problem whose well-posedness is stated and proved. This gives rise to the VMVSM. Then, in §2.1.2, the queueing model and its rescaling are introduced. We then state the main result. Its proof appears in §2.2. 2.1 Model and main results 2.1.1 A Skorokhod problem in measure space The measure valued Skorokhod problem (MVSP) introduced in [4] is as follows. Definition 1 (MVSP). Let $\left(\alpha,\mu\right)\in\mathbb{D}_{{\cal M}}^{\uparrow}\times\mathbb{D}^{\uparrow}$. Then $\left(\xi,\beta,\iota\right)\in\mathbb{D}_{{\cal M}}\times\mathbb{D}_{{\cal M}}^{\uparrow}\times\mathbb{D}^{\uparrow}$ is said to solve the MVSP for the data $\left(\alpha,\mu\right)$ if, for each $x\in\mathbb{R}_{+}$, 1. $\xi\left[0,x\right]=\alpha\left[0,x\right]-\mu+\beta\left(x,\infty\right)+\iota$, 2. $\int_{\left[0,\infty\right)}\xi_{s}\left[0,x\right]d\beta_{s}\left(x,\infty\right)=0$, 3. $\int_{\left[0,\infty\right)}\xi_{s}\left[0,x\right]d\iota\left(s\right)=0$, 4. $\beta\left[0,\infty\right)+\iota=\mu$, where in 2. and 3. the integration variable is $s$. It was shown in [4, Proposition 2.8] that there exists a unique solution to the MVSP, and thus the MVSP defines a map $(\alpha,\mu)\mapsto(\xi,\beta,\iota)$. Moreover, this map has certain continuity properties that were key in establishing scaling limits for both soft and hard EDF [4, Theorem 5.4]. The approach that we take here builds on these ideas but replaces $\xi$ and $\beta$ by vector measures.
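For a fixed deadline level $x$, condition 1 takes the form $z=u+y$ with $u=\alpha[0,x]-\mu$, $z=\xi[0,x]\geq 0$ and $y=\beta(x,\infty)+\iota$ non-decreasing, which is the classical one-dimensional Skorokhod problem; its reflection map has the explicit solution $y(t)=\sup_{s\leq t}(-u(s))^{+}$. A minimal discrete-grid sketch of the resulting queue-content computation (the function names are ours, and this level-by-level evaluation is only an illustrative approximation, not the construction used in [4]):

```python
import numpy as np

def skorokhod_1d(u):
    """One-dimensional Skorokhod reflection map on a time grid:
    z = u + y with z >= 0, y non-decreasing, y increasing only when z = 0.
    Explicit solution: y(t) = sup_{s <= t} max(0, -u(s))."""
    y = np.maximum.accumulate(np.maximum(-u, 0.0))
    return u + y, y

def mvsp_queue_content(alpha, mu):
    """Sketch of the MVSP queue content for a single station:
    for each deadline level x_j, xi_t[0, x_j] = Gamma_1(alpha[0, x_j] - mu).
    alpha: array of shape (T, X), alpha[t, j] = alpha_t[0, x_j];
    mu:    array of shape (T,), cumulative service capacity with mu[0] = 0."""
    T, X = alpha.shape
    xi = np.empty((T, X))
    for j in range(X):
        xi[:, j], _ = skorokhod_1d(alpha[:, j] - mu)
    return xi
```

For instance, with initial mass 2 (respectively 3) below two deadline levels, no further arrivals, and unit service rate, the queue content at each level drains linearly to zero and is absorbed there, as conditions 2–4 dictate.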
We motivate our definitions by describing a fluid model for an EDF network. An informal description is followed by a precise mathematical formulation. Consider a queueing network with $K$ nodes; each node consists of a server and a queue that accommodates fluid. Let $P\in{\mathbb{R}}^{K\times K}$ be a given substochastic matrix. Exogenous arrivals form an input to the system. They consist of fluid entering the various queues. Each unit of arriving mass has an associated distribution of deadlines. The servers prioritize the mass with the smallest deadline. After service, the mass splits between the nodes according to $P$. That is, the stream leaving server $i$ splits so that a fraction $P_{ij}$ routes to queue $j$. These streams form the endogenous arrival processes. The remaining fraction, $1-\sum_{j=1}^{K}P_{ij}$, exits the system. The deadlines do not vary during the entire sojourn in the system. Define for the $i$-th queue, respectively, the exogenous arrival process, cumulative potential effort, queue content, and departure process, as follows: $$\displaystyle\alpha^{i}\in\mathbb{D}_{{\cal M}}^{\uparrow}$$ $$\displaystyle:$$ $$\displaystyle\alpha_{t}^{i}\left[0,x\right]\text{ is the mass to have exogenously entered the $i$-th queue by time $t$ }$$ (1) $$\displaystyle\text{with deadlines in $\left[0,x\right]$},$$ $$\displaystyle\mu^{i}\in\mathbb{D}^{\uparrow}$$ $$\displaystyle:$$ $$\displaystyle\mu^{i}\left(t\right)\text{ is the total mass the $i$-th server can serve by time $t$, if it is never idle},$$ $$\displaystyle\xi^{i}\in\mathbb{D}_{{\cal M}}$$ $$\displaystyle:$$ $$\displaystyle\xi_{t}^{i}\left[0,x\right]\text{ is the mass in the $i$-th queue at time $t$ with deadlines in $\left[0,x\right]$},$$ $$\displaystyle\beta^{i}\in\mathbb{D}_{{\cal M}}^{\uparrow}$$ $$\displaystyle:$$ $$\displaystyle\beta_{t}^{i}\left[0,x\right]\text{ is the mass to have left the $i$-th queue by time $t$ with deadlines in $\left[0,x\right]$}.$$ The initial condition of the
buffer content is already contained in $\alpha$, namely it is given by $\alpha_{0}$. Let the idleness (or lost effort) process be defined by $\iota^{i}=\mu^{i}-\beta^{i}\left[0,\infty\right)$. The following balance equation holds: $$\displaystyle\xi_{t}^{i}\left[0,x\right]$$ $$\displaystyle=\alpha_{t}^{i}\left[0,x\right]+\sum_{j=1}^{K}P_{ji}\beta_{t}^{j}\left[0,x\right]-\beta_{t}^{i}\left[0,x\right].$$ (2) The work conservation property and the priority structure of EDF are expressed through $$\displaystyle\int\xi_{t}^{i}\left[0,x\right]d\iota^{i}\left(t\right)=0,\quad 1\leq i\leq K,$$ $$\displaystyle\int\xi_{t}^{i}\left[0,x\right]d\beta_{t}^{i}\left(x,\infty\right)=0,\quad 1\leq i\leq K.$$ (3) One can recognize similarities to the MVSP, although the objects in our equations are vector-valued parallels of those from the MVSP, and moreover, the equations are coupled. This motivates us to define a multi-dimensional version of the MVSP. Definition 2 (VMVSP). Let $P\in{\mathbb{R}}^{K\times K}$ be a substochastic matrix and let $R=I-P^{T}$. Let $\left(\alpha,\mu\right)\in\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}^{\uparrow K}$. Then $\left(\xi,\beta,\iota\right)\in\mathbb{D}_{{\cal M}}^{K}\times\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}^{\uparrow K}$ is said to solve the VMVSP associated with the matrix $P$ for the data $\left(\alpha,\mu\right)$ if, for each $x\in\mathbb{R}_{+}$, 1. $\xi\left[0,x\right]=\alpha\left[0,x\right]-R\beta\left[0,x\right]$, 2. $\int_{\left[0,\infty\right)}\xi_{s}^{i}\left[0,x\right]d\beta_{s}^{i}\left(x,\infty\right)=0$ for $i\in\{1,...,K\}$, 3. $\int_{\left[0,\infty\right)}\xi_{s}^{i}\left[0,x\right]d\iota^{i}\left(s\right)=0$ for $i\in\{1,...,K\}$, 4. $\beta\left[0,\infty\right)+\iota=\mu$.
Note that the motivating fluid model indeed requires that $\xi_{t}$ be a nonnegative vector measure for each $t$, and that $t\mapsto\beta_{t}\left(x,\infty\right)+\iota\left(t\right)$ be non-decreasing and non-negative. These properties are ensured by the above formulation of the VMVSP. A square matrix is called an $M$-matrix if it has positive diagonal elements, non-positive off-diagonal elements, and a non-negative inverse. By [11, Lemma 7.1], for a non-negative matrix $G$ whose spectral radius is strictly less than 1 (such a matrix is called convergent), $I-G$ is an $M$-matrix. Theorem 1. Let $P\in{\mathbb{R}}^{K\times K}$ be a convergent substochastic matrix, and $\left(\alpha,\mu\right)\in\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}^{\uparrow K}$ be such that $\mu\left(0\right)=0$. Then the VMVSP associated with the matrix $P$ and the data $\left(\alpha,\mu\right)$ has a unique solution. Proof. The uniqueness argument is based on SM theory for oblique reflection in the orthant. As for existence, our strategy is to construct a candidate and show that it is a solution, relying in a crucial way on a monotonicity result due to Ramasubramanian. We first present the existence argument.
To construct a candidate, first rewrite the first condition of the VMVSP as $$\displaystyle\xi_{t}\left[0,x\right]=\alpha_{t}\left[0,x\right]-R\mu\left(t\right)+R\left(\beta_{t}\left(x,\infty\right)+\iota\left(t\right)\right).$$ (4) The oblique reflection mapping theorem (ORMT) [16], [11, Theorem 7.2], states that for an $M$-matrix $R$ and every $u\in\mathbb{D}_{0}^{K}$ there exists a unique pair $\left(z,y\right)\in\mathbb{D}_{+}^{K}\times\mathbb{D}^{\uparrow K}$ satisfying $$\displaystyle z=u+Ry$$ (5) $$\displaystyle\int_{[0,\infty)}z^{i}\left(s\right)dy^{i}\left(s\right)=0,\quad 1\leq i\leq K.$$ (6) The solution map, denoted $\Gamma:\mathbb{D}_{0}^{K}\to\mathbb{D}_{+}^{K}\times\mathbb{D}^{\uparrow K}$, is thus uniquely defined by the relation $\left(z,y\right)=\Gamma\left(u\right)=\left(\Gamma_{1}\left(u\right),\Gamma_{2}\left(u\right)\right)$ iff (5)–(6) hold. It is well known that the maps $\Gamma_{1}$ and $\Gamma_{2}$ are Lipschitz from $\mathbb{D}_{0}^{K}$ to $\mathbb{D}_{+}^{K}$ and $\mathbb{D}^{\uparrow K}$, respectively, in the sense that for any $T>0$ $$\displaystyle\left\lVert{\Gamma_{i}\left(u_{1}\right)-\Gamma_{i}\left(u_{2}\right)}\right\rVert_{T}\leq L\left\lVert{u_{1}-u_{2}}\right\rVert_{T},\quad i=1,2,$$ (7) where the constant $L$ depends only on $R$. The matrix $P^{T}$ is non-negative and convergent by assumption, hence $R$ is an $M$-matrix. Moreover, $\alpha_{0}\left[0,x\right]-R\mu\left(0\right)=\alpha_{0}\left[0,x\right]\geq 0$. Also, if $\left(\xi,\beta,\iota\right)$ is a solution, then $\left(\xi\left[0,x\right],\beta\left(x,\infty\right)+\iota\right)\in\mathbb{D}_{+}^{K}\times\mathbb{D}^{\uparrow K}$. Finally, Equation (4) has the form of (5), and conditions 2 and 3 of the VMVSP correspond to (6). Therefore, the conditions of the ORMT are satisfied, so we can construct a candidate as follows.
Define $$\displaystyle\mathring{\xi}\left(x\right)=\Gamma_{1}\left(\alpha\left[0,x\right]-R\mu\right)\in\mathbb{D}_{+}^{K},$$ $$\displaystyle\mathring{\beta}\left(x\right)+\iota=\Gamma_{2}\left(\alpha\left[0,x\right]-R\mu\right)\in\mathbb{D}^{\uparrow K}.$$ (8) The path $\iota$ can be recovered by taking $x$ to infinity, yielding $\iota=\Gamma_{2}\left(\alpha\left[0,\infty\right)-R\mu\right)$. One can then find $\mathring{\beta}_{t}\left(x\right)$ by $\mathring{\beta}\left(x\right)=\Gamma_{2}\left(\alpha\left[0,x\right]-R\mu\right)-\iota=\Gamma_{2}\left(\alpha\left[0,x\right]-R\left(\mu-\iota\right)\right)$. It is now argued that these functions define vector measure valued paths via the relations $(\mathring{\xi}_{t}\left(x\right),\mathring{\beta}_{t}\left(x\right))=(\xi_{t}\left[0,x\right],\beta_{t}\left(x,\infty\right))$. We must show that $\mathring{\xi}_{t}\left(x\right)$ is right-continuous and non-decreasing in $x$, that $\mathring{\beta}\left(x\right)-\mathring{\beta}\left(y\right)$ is in $\mathbb{D}^{\uparrow K}$ for $x<y$, and that $\mathring{\beta}\left(x\right)$ is right-continuous in $x$. The right-continuity follows by the Lipschitz property (7) and Lemma 2.4 in [4]. Next, Theorem 4.1 of [28] states that the map $\Gamma$ is monotone in the following sense. Let $u_{1},u_{2}\in\mathbb{D}_{0}^{K}$ be such that $u_{2}-u_{1}\in\mathbb{D}^{\uparrow K}$, and let $\left(z_{i},y_{i}\right)=\Gamma\left(u_{i}\right),i=1,2$. Then $z_{2}-z_{1}\in\mathbb{D}_{+}^{K}$ and $y_{1}-y_{2}\in\mathbb{D}^{\uparrow K}$. Using this theorem in the setting of the VMVSP, let $x<y$ and denote $u_{1}=\alpha\left[0,x\right]-R\mu$ and $u_{2}=\alpha\left[0,y\right]-R\mu$. Then $u_{2}-u_{1}=\alpha\left(x,y\right]\in\mathbb{D}^{\uparrow K}$. Therefore $\mathring{\xi}\left(y\right)-\mathring{\xi}\left(x\right)\in\mathbb{D}_{+}^{K}$ and $\mathring{\beta}\left(x\right)-\mathring{\beta}\left(y\right)\in\mathbb{D}^{\uparrow K}$.
Thus $$\displaystyle\xi\left[0,x\right]=\mathring{\xi}\left(x\right)=\Gamma_{1}\left(\alpha\left[0,x\right]-R\mu\right),$$ $$\displaystyle\beta\left(x,\infty\right)=\mathring{\beta}\left(x\right)=\Gamma_{2}\left(\alpha\left[0,x\right]-R\mu\right)-\iota,$$ (9) define two vector measure valued paths in $\mathbb{D}_{{\cal M}}^{K}$ and $\mathbb{D}_{{\cal M}}^{\uparrow K}$, respectively. This demonstrates existence. As for uniqueness, it follows from the ORMT that any solution to the VMVSP must satisfy $$(\xi\left[0,x\right],\beta(x,\infty)+\iota)=\Gamma\left(\alpha\left[0,x\right]-R\mu\right),$$ a relation that uniquely determines $\xi$, $\beta$ and $\iota$. ∎ Theorem 1 defines a map $\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}^{\uparrow K}\to\mathbb{D}_{{\cal M}}^{K}\times\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}^{\uparrow K}$, namely the solution map of the VMVSP, which we denote by $\Theta$. The dependence on $P$ is not indicated explicitly. This notion is an extension of the MVSM from [4]. 2.1.2 Queueing model and scaling The queueing model is defined analogously to the fluid model. It is indexed by $N\in\mathbb{N}$, and defined on a probability space $(\mathnormal{\Omega},{\cal F},{\mathbb{P}})$. It consists of $K\geq 1$ service stations, each containing a buffer with infinite room and a server. The servers prioritize according to the EDF policy, and according to arrival times in case of a tie. Throughout this paper, for any parameter, random variable or process associated with the $N$-th system, say $a^{N}$, we use the notation $\bar{a}^{N}=N^{-1}a^{N}$ for normalization. A substochastic matrix $P\in{\mathbb{R}}^{K\times K}$, referred to as the routing matrix, is given, where the entry $P_{ij}$ represents the probability that a job that completes service at the $i$-th server is routed to the $j$-th queue. Denote $R=I-P^{T}$.
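Whether a given routing matrix is convergent, and hence whether $R=I-P^{T}$ is an $M$-matrix, is easy to verify numerically, since convergence means that the spectral radius of $P$ is strictly below 1, in which case $R^{-1}=\sum_{k\geq 0}(P^{T})^{k}$ is non-negative. A small sketch (the two-node example matrix is ours, chosen only for illustration):

```python
import numpy as np

def is_convergent_substochastic(P, tol=1e-12):
    """Check that P is substochastic (non-negative entries, row sums <= 1)
    with spectral radius strictly below 1, i.e. a convergent matrix."""
    P = np.asarray(P, dtype=float)
    substochastic = np.all(P >= -tol) and np.all(P.sum(axis=1) <= 1 + tol)
    rho = np.max(np.abs(np.linalg.eigvals(P)))
    return bool(substochastic and rho < 1 - tol)

# A two-node example: server 1 feeds queue 2; server 2 returns half of
# its output to queue 1 and the other half leaves the system.
P = np.array([[0.0, 1.0],
              [0.5, 0.0]])
R = np.eye(2) - P.T
Rinv = np.linalg.inv(R)   # should be entrywise non-negative (M-matrix)
```

Here the spectral radius of $P$ is $\sqrt{1/2}$, and $R^{-1}$ comes out entrywise non-negative, consistent with [11, Lemma 7.1].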
For each $N$, processes denoted by $\alpha^{N}=(\alpha^{i,N})$, $\mu^{N}=(\mu^{i,N})$, $\xi^{N}=(\xi^{i,N})$, and $\beta^{N}=(\beta^{i,N})$ are associated with the $N$-th system, which represent discrete versions of their fluid model counterparts. Specifically, for a Borel set $B\subset{\mathbb{R}}_{+}$, $\alpha^{i,N}_{t}(B)$ denotes the number of external arrivals into queue $i$ up to time $t$ with deadline in $B$, $\xi^{i,N}_{t}(B)$ denotes the number of jobs in queue $i$ at time $t$ with deadline in $B$, and $\beta_{t}^{i,N}\left(B\right)$ denotes the number of jobs with deadline in $B$ transferred by time $t$ from the $i$-th queue to the corresponding server. Indeed, in the queueing model, the job being served is not counted in the queue, and therefore there is a distinction between the number of jobs transferred from queue $i$ and the number of jobs transferred from server $i$. Thus in addition to $\beta^{i,N}$ we introduce a process $\gamma^{N}$. Namely, $\gamma^{ij,N}_{t}(B)$ denotes the number of jobs with deadline in $B$ that were transferred from server $i$ to queue $j$ by time $t$, where $j=0$ corresponds to jobs leaving the system. For each $i$, $\gamma^{i,N}_{t}(B):=\sum_{j=0}^{K}\gamma^{ij,N}_{t}(B)$ gives the total number of jobs with deadline in $B$ that departed server $i$ by time $t$. The process $\mu^{i,N}\left(t\right)$ represents the cumulative service capacity of server $i$ by time $t$. Let the busyness at time $t$, $B_{t}^{i,N}\left(B\right)$, be defined as the indicator of the event that at this time, a job with deadline in $B$ occupies the $i$-th server, and let $B_{0-}^{i,N}\left(B\right)$ be its initial condition. Denote the total number of departures from server $i$ by $D^{i,N}\left(t\right):=\gamma^{i,N}_{t}\left[0,\infty\right)$. Denote by $\xi_{0-}^{i,N}\left(B\right)$ the number of jobs present in the $i$-th queue just prior to time $t=0$ with deadlines in $B$. 
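In a simulation, the discrete measure-valued state $\xi^{i,N}_{t}$ of a single queue can be represented by a sorted multiset of deadlines: evaluating $\xi^{i,N}_{t}[0,x]$ is then a binary search, and the EDF rule transfers the job with the smallest deadline to the server. A hypothetical sketch (the class and its method names are ours, not part of the paper's formalism):

```python
import bisect

class DeadlineMeasure:
    """Sorted multiset of deadlines representing the state of one queue
    (an illustrative helper for xi^{i,N}_t, not from the paper)."""

    def __init__(self):
        self._deadlines = []                 # kept sorted at all times

    def add(self, d):
        """A job with deadline d joins the queue."""
        bisect.insort(self._deadlines, d)

    def pop_earliest(self):
        """EDF rule: remove and return the smallest deadline (job moves
        from the queue to the server, incrementing beta^{i,N})."""
        return self._deadlines.pop(0)

    def count(self, x):
        """xi_t[0, x]: number of queued jobs with deadline at most x."""
        return bisect.bisect_right(self._deadlines, x)

    def total(self):
        """xi_t[0, infinity): total number of queued jobs."""
        return len(self._deadlines)
```

With this representation, ties in deadlines can be broken by arrival time simply by storing (deadline, arrival time) pairs instead of bare deadlines.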
Next, define the cumulative effort and cumulative lost effort processes, respectively, as $$\displaystyle T^{i,N}\left(t\right)=\int_{[0,t]}B_{s}^{i,N}\left[0,\infty\right)d\mu^{i,N}\left(s\right),$$ (10) $$\displaystyle\iota^{i,N}\left(t\right)=\mu^{i,N}\left(t\right)-T^{i,N}\left(t\right).$$ (11) To model the stochasticity of the service times, a counting process $S^{i}\left(t\right)$ is associated with each server $i$, assumed to be a renewal process whose interarrival times have unit mean, with the convention $S^{i}\left(0\right)=1$. Then the departure process is given by $$\displaystyle D^{i,N}\left(t\right)=\gamma^{i,N}_{t}\left[0,\infty\right)=S^{i}\left(T^{i,N}\left(t\right)\right)-1.$$ (12) The various relations between these processes are described in what follows. The relation between $\gamma^{N}$, $\beta^{N}$ and $B^{N}$ is given by $$\displaystyle B_{0-}^{i,N}\left[0,x\right]+\beta_{t}^{i,N}\left[0,x\right]=\gamma_{t}^{i,N}\left[0,x\right]+B_{t}^{i,N}\left[0,x\right].$$ (13) The balance equation for the queue content is $$\displaystyle\xi_{t}^{i,N}\left[0,x\right]$$ $$\displaystyle=\alpha_{t}^{i,N}\left[0,x\right]+\sum_{j=1}^{K}\gamma_{t}^{ji,N}\left[0,x\right]-\beta_{t}^{i,N}\left[0,x\right].$$ (14) In addition, the work conservation condition and EDF policy are expressed through $$\displaystyle\int_{[0,\infty)}\xi_{t}^{i,N}\left[0,x\right]d\iota^{i,N}\left(t\right)=0,\qquad 1\leq i\leq K,$$ (15) $$\displaystyle\int_{[0,\infty)}\xi_{t}^{i,N}\left[0,x\right]d\beta_{t}^{i,N}\left(x,\infty\right)=0,\qquad 1\leq i\leq K.$$ (16) For the routing process of server $i$, consider i.i.d.
$\{0,1,\ldots,K\}$-valued random variables $\pi^{i,N}\left(n\right)$ with distribution given by $$\displaystyle\mathbb{P}\left(\pi^{i,N}\left(n\right)=j\right)=\begin{cases}1-\sum_{k=1}^{K}P_{ik}&\text{if }j=0,\\ P_{ij}&\text{if }j\in\{1,...,K\}.\end{cases}$$ The stochastic primitives $\{\pi^{i,N}\left(n\right)\}$, $\{S^{i}\}$ and $\{\alpha^{N}\}$ are assumed to be mutually independent. Moreover, $\pi^{i,N}\left(n\right)$ are assumed to be mutually independent across $i$, and so are $S^{i}$ and $\alpha^{i,N}$. Define for $j\in\{1,...,K\}:$ $\theta^{ij,N}\left(n\right)=\mathbbm{1}_{\left\{\pi^{i,N}\left(n\right)=j\right\}}$. The $n$-th job to depart server $i$ is routed to queue $\pi^{i,N}\left(n\right)$, or, if this random variable is zero, leaves the system. Thus, the $n$-th job is routed to queue $j$ if and only if $\theta^{ij,N}\left(n\right)=1$. Let the jump times of $D^{i,N}$ be denoted by $$\displaystyle\tau_{n}^{i,N}=\inf\{t\geq 0:D^{i,N}\left(t\right)\geq n\},$$ (17) and let $$\displaystyle\hat{\theta}^{ij,N}\left(t\right)=\sum_{n=1}^{\infty}\mathbbm{1}_{\left[\tau_{n}^{i,N},\tau_{n+1}^{i,N}\right)}\left(t\right)\theta^{ij,N}\left(n\right).$$ (18) Then $\gamma^{ij,N}$ can be obtained from $\gamma^{i,N}$ by $$\displaystyle\gamma_{t}^{ij,N}\left[0,x\right]=\int_{\left[0,t\right]}\hat{\theta}^{ij,N}\left(s\right)d\gamma_{s}^{i,N}\left[0,x\right].$$ (19) This completes the definition of the model. Our main result concerning soft EDF networks is the following. Theorem 2. Assume that the routing matrix $P$ is a convergent substochastic matrix, and let $\left(\alpha,\mu\right)\in\mathbb{C}_{{\cal M}}^{\uparrow K}\times\mathbb{C}^{\uparrow K}$ be such that $\mu\left(0\right)=0$. Assume, moreover, that $\alpha_{t}\in{\cal M}_{\sim}$ for all $t$, and that there exists a constant $C$ such that $\mathbb{E}\left[{\sum_{i=1}^{K}\alpha_{t}^{i,N}\left[0,\infty\right)}\right]\leq CNt$. Finally, assume $(\bar{\alpha}^{N},\bar{\mu}^{N})\Rightarrow(\alpha,\mu)$.
Then $\left(\bar{\xi}^{N},\bar{\beta}^{N},\bar{\iota}^{N}\right)\Rightarrow\left(\xi,\beta,\iota\right)$, where $\left(\xi,\beta,\iota\right)$ is the unique solution of the VMVSP with primitives $\left(\alpha,\mu\right)$. The proof is given in the next section. 2.2 Proof Recall the ORMT. To use this theorem, we wish to bring equations (14), (15) and (16) to a form compatible with the conditions (5) and (6). To this end, first define the error processes $$\displaystyle e^{i,N}\left(t\right)=\beta_{t}^{i,N}\left[0,\infty\right)+\iota^{i,N}\left(t\right)-\mu^{i,N}\left(t\right),\quad 1\leq i\leq K,$$ $$\displaystyle E^{ij,N}\left(t,x\right)=\gamma_{t}^{ij,N}\left[0,x\right]-P_{ij}\gamma_{t}^{i,N}\left[0,x\right],\quad 1\leq i,j\leq K,x\in\mathbb{R}_{+}.$$ (20) Using (13), (14) and normalizing, $$\displaystyle\bar{\xi}_{t}^{i,N}\left[0,x\right]$$ $$\displaystyle=\bar{\alpha}_{t}^{i,N}\left[0,x\right]+\sum_{j=1}^{K}\bar{E}^{ji,N}\left(t,x\right)+\sum_{j=1}^{K}P_{ji}\left(\bar{\mu}^{j,N}\left(t\right)+\bar{e}^{j,N}\left(t\right)\right)-\bar{\mu}^{i,N}\left(t\right)-\bar{e}^{i,N}\left(t\right)$$ $$\displaystyle\quad-\sum_{j=1}^{K}P_{ji}\left(\bar{B}_{t}^{j,N}\left[0,x\right]-\bar{B}_{0-}^{j,N}\left[0,x\right]\right)+\bar{\beta}_{t}^{i,N}\left(x,\infty\right)+\bar{\iota}^{i,N}\left(t\right)-\sum_{j=1}^{K}P_{ji}\left(\bar{\beta}_{t}^{j,N}\left(x,\infty\right)+\bar{\iota}^{j,N}\left(t\right)\right).$$ In vector notation, with $\textbf{1}=(1,\ldots,1)^{T}\in{\mathbb{R}}^{K}$, $$\displaystyle\bar{\xi}_{t}^{N}\left[0,x\right]=$$ $$\displaystyle\bar{\alpha}_{t}^{N}\left[0,x\right]+\bar{E}^{N,T}\left(t,x\right)\textbf{1}-R\left(\bar{\mu}^{N}\left(t\right)+\bar{e}^{N}\left(t\right)\right)$$ (21) $$\displaystyle-P^{T}\left(\bar{B}_{t}^{N}\left[0,x\right]-\bar{B}_{0-}^{N}\left[0,x\right]\right)+R\left(\bar{\beta}_{t}^{N}\left(x,\infty\right)+\bar{\iota}^{N}\left(t\right)\right).$$ The matrix $R$ is an M-matrix as already argued, $\bar{\xi}^{N}$ is non-negative,
and $\bar{\beta}_{t}^{N}\left(x,\infty\right)+\bar{\iota}^{N}$ is non-decreasing and non-negative. We can now invoke the ORMT, yielding for every $x$, $$\displaystyle\bar{\xi}^{N}\left[0,x\right]=\Gamma_{1}\left(\bar{\alpha}^{N}\left[0,x\right]+\bar{E}^{N,T}\left(\cdot,x\right)\textbf{1}-R\left(\bar{\mu}^{N}+\bar{e}^{N}\right)-P^{T}\bar{B}^{N}\left[0,x\right]+P^{T}\bar{B}_{0-}^{N}\left[0,x\right]\right),$$ $$\displaystyle\bar{\beta}^{N}\left(x,\infty\right)+\bar{\iota}^{N}=\Gamma_{2}\left(\bar{\alpha}^{N}\left[0,x\right]+\bar{E}^{N,T}\left(\cdot,x\right)\textbf{1}-R\left(\bar{\mu}^{N}+\bar{e}^{N}\right)-P^{T}\bar{B}^{N}\left[0,x\right]+P^{T}\bar{B}_{0-}^{N}\left[0,x\right]\right).$$ (22) Lemma 1. Let $E^{ij,N}\left(t,x\right)$ and $D^{i,N}\left(t\right)$ be as in (20) and (12). Then, for all $i,j,N,t,x$, $$\displaystyle\mathbb{E}\left[{E^{ij,N}\left(t,x\right)^{2}}\right]\leq 41\mathbb{E}\left[{D^{i,N}\left(t\right)}\right].$$ Proof. We have $$\displaystyle E^{ij,N}\left(t,x\right)$$ $$\displaystyle=\gamma_{t}^{ij,N}\left[0,x\right]-P_{ij}\gamma_{t}^{i,N}\left[0,x\right]$$ $$\displaystyle=\int_{\left[0,t\right]}\left(\hat{\theta}^{ij,N}\left(s\right)-P_{ij}\right)d\gamma_{s}^{i,N}\left[0,x\right]$$ $$\displaystyle=\sum_{n=1}^{D^{i,N}\left(t\right)}\left(\theta^{ij,N}\left(n\right)-P_{ij}\right)\left(\gamma_{\tau_{n}^{i,N}}^{i,N}\left[0,x\right]-\gamma_{\tau_{n-1}^{i,N}}^{i,N}\left[0,x\right]\right).$$ To simplify notation in this proof, we omit in what follows the dependence on $i,j$ and $N$.
Denote $$\displaystyle M_{k}\left(x\right)$$ $$\displaystyle=\sum_{n=1}^{k}\left(\theta\left(n\right)-P\right)\left(\gamma_{\tau_{n}}\left[0,x\right]-\gamma_{\tau_{n-1}}\left[0,x\right]\right).$$ We show that $M_{k}(x)$ is a martingale with respect to the filtration $$\displaystyle{\cal F}_{k}=\sigma\left(\theta\left(n\right):1\leq n\leq k,\ \gamma_{\tau_{n}}\left[0,y\right]:1\leq n\leq k+1,y\geq 0,\ \tau_{n}:1\leq n\leq k+1\right).$$ We first argue that $\theta\left(k+1\right)$ is independent of ${\cal F}_{k}$. By assumption, $\theta\left(k+1\right)$ is independent of $${\cal G}_{k}:=\sigma\left(\theta\left(n\right),S\left(t\right),\alpha_{t}\left[0,y\right]:\quad t,y\geq 0,1\leq n\leq k\right).$$ By construction, ${\cal G}_{k}\supset{\cal F}_{k}$. Therefore, $\theta\left(k+1\right)$ is independent of ${\cal F}_{k}$. The process $M_{k}\left(x\right)$ is adapted to $\{{\cal F}_{k}\}_{k\geq 1}$, it satisfies $\mathbb{E}\left[{\left|M_{k}\left(x\right)\right|}\right]\leq k<\infty$, and $$\displaystyle{\mathbb{E}}\left[\sum_{n=1}^{k+1}\left(\theta\left(n\right)-P\right)\left(\gamma_{\tau_{n}}\left[0,x\right]-\gamma_{\tau_{n-1}}\left[0,x\right]\right)\middle|{\cal F}_{k}\right]$$ $$\displaystyle=M_{k}\left(x\right)+\left(\gamma_{\tau_{k+1}}\left[0,x\right]-\gamma_{\tau_{k}}\left[0,x\right]\right){\mathbb{E}}\left[\theta\left(k+1\right)-P\middle|{\cal F}_{k}\right]$$ $$\displaystyle=M_{k}\left(x\right).$$ Consequently, it is a martingale. Next, $D\left(t\right)$ is a stopping time with respect to $\{{\cal F}_{k}\}_{k\geq 1}$, because $\{D\left(t\right)\leq k\}=\{\tau_{k+1}>t\}$ and $\tau_{k+1}$ is ${\cal F}_{k}$-measurable.
Therefore, $M_{k\land D\left(t\right)}$ is also a martingale, and by Fatou’s lemma and Burkholder’s inequality (see [17], Theorem 2.10), $$\displaystyle\mathbb{E}\left[{E^{2}\left(t,x\right)}\right]$$ $$\displaystyle=\mathbb{E}\left[{M_{D\left(t\right)}^{2}\left(x\right)}\right]$$ $$\displaystyle\leq\liminf_{k\to\infty}\mathbb{E}\left[{M_{k\land D\left(t\right)}^{2}\left(x\right)}\right]$$ $$\displaystyle\leq 41\liminf_{k\to\infty}\mathbb{E}\left[{\sum_{n=1}^{k\land D\left(t\right)}\left(\theta\left(n\right)-P\right)^{2}\left(\gamma_{\tau_{n}}\left[0,x\right]-\gamma_{\tau_{n-1}}\left[0,x\right]\right)^{2}}\right]$$ $$\displaystyle\leq 41\mathbb{E}\left[{D\left(t\right)}\right].$$ ∎ Proof of Theorem 2. We write $c$ for a generic positive constant whose value may change from line to line. We first show that $\bar{\xi}^{N}\to\xi$ in probability. If $\alpha_{1},\alpha_{2}\in{\cal M}$ then $d_{\cal L}(\alpha_{1},\alpha_{2})\leq\sup_{x}|\alpha_{1}[0,x]-\alpha_{2}[0,x]|$. Consequently, it suffices to show, for every $T$, $$\displaystyle\sup_{x}\left\lVert{\bar{\xi}^{N}\left[0,x\right]-\xi\left[0,x\right]}\right\rVert_{T}\to 0\text{ in probability}.$$ (23) By (7), (9) and (22), $$\displaystyle\sup_{x}\left\lVert{\bar{\xi}^{N}\left[0,x\right]-\xi\left[0,x\right]}\right\rVert_{T}$$ $$\displaystyle\leq L\sup_{x}\left\lVert{\bar{\alpha}^{N}\left[0,x\right]-\alpha\left[0,x\right]}\right\rVert_{T}+L\sup_{x}\left\lVert{\bar{E}^{N,T}\left(\cdot,x\right)\textbf{1}}\right\rVert_{T}+L\sup_{x}\left\lVert{P^{T}\bar{B}^{N}\left[0,x\right]+P^{T}\bar{B}_{0-}^{N}\left[0,x\right]}\right\rVert_{T}$$ $$\displaystyle\quad+L\left\lVert{R\left(\mu-\bar{\mu}^{N}\right)}\right\rVert_{T}+L\left\lVert{R\bar{e}^{N}}\right\rVert_{T}.$$ We now show that each of the terms converges in probability to 0.
• It follows directly from the definition of the Lévy metric that for $a_{1},a_{2}\in{\cal M}$, $$\displaystyle\sup_{x\in\mathbb{R_{+}}}\left|a_{1}\left[0,x\right]-a_{2}\left[0,x\right]\right|\leq d_{{\cal L}}\left(a_{1},a_{2}\right)+w\left(a_{2}\left[0,\cdot\right],2d_{\cal L}\left(a_{1},a_{2}\right)\right).$$ (24) The fact that $\alpha^{i}\in\mathbb{D}_{\cal M}^{\uparrow}$ implies that for any $t\leq T$, $w\left(\alpha_{t}\left[0,\cdot\right],\delta\right)\leq w\left(\alpha_{T}\left[0,\cdot\right],\delta\right)$. Since $\alpha_{T}\in{\cal M}$ does not have atoms by assumption, we have $w\left(\alpha_{T}\left[0,\cdot\right],\epsilon\right)\to 0$ as $\epsilon\to 0$. Denote $$\displaystyle d_{{\cal L},T}\left(\zeta_{1},\zeta_{2}\right)=\sup_{t\in\left[0,T\right]}\max_{1\leq i\leq K}d_{\cal L}\left(\zeta_{1,t}^{i},\zeta_{2,t}^{i}\right),$$ for $\zeta_{1},\zeta_{2}\in\mathbb{D}_{{\cal M}}^{K}$. By the convergence $\bar{\alpha}^{N}\Rightarrow\alpha$ and the continuity of $t\mapsto\alpha_{t}$, it follows that $d_{{\cal L},T}\left(\bar{\alpha}^{N},\alpha\right)\rightarrow{0}$ in probability. Hence by (24), $$\displaystyle\sup_{x}\left\lVert{\bar{\alpha}^{N}\left[0,x\right]-\alpha\left[0,x\right]}\right\rVert_{T}\to 0\text{ in probability}.$$ • For the second term, it suffices to show $$\displaystyle\sup_{x}\sup_{t\in\left[0,T\right]}\max_{1\leq i,j\leq K}\left|\bar{E}^{ji,N}\left(t,x\right)\right|\to 0\text{ in probability}.$$ To this end, note that Lemma 1 implies $$\displaystyle\mathbb{E}\left[{\left|\bar{E}^{ij,N}\left(t,x\right)\right|^{2}}\right]\leq 41N^{-2}\mathbb{E}\left[{D^{i,N}\left(t\right)}\right]\leq 41N^{-2}\mathbb{E}\left[{\sum_{i=1}^{K}\alpha^{i,N}_{T}\left[0,\infty\right)}\right]\leq 41N^{-1}CT\to 0.$$ • The third term on the RHS is bounded by $2LK/N\to 0$.
• Next, $\left\lVert{R\left(\mu-\bar{\mu}^{N}\right)}\right\rVert_{T}\leq c\left\lVert{\mu-\bar{\mu}^{N}}\right\rVert_{T}\to 0$ in probability by the assumed convergence of $\bar{\mu}^{N}$ to $\mu$ and the continuity of $\mu$. • Finally, by the definition of $\bar{e}^{N}$, $$\displaystyle\bar{e}^{i,N}\left(t\right)$$ $$\displaystyle=\bar{\beta}_{t}^{i,N}\left[0,\infty\right)+\bar{\iota}^{i,N}\left(t\right)-\bar{\mu}^{i,N}\left(t\right)$$ $$\displaystyle=-\frac{1}{N}+\frac{B^{i,N}\left[0,\infty\right)}{N}+\frac{S^{i}\left(N\bar{T}^{i,N}\left(t\right)\right)-N\bar{T}^{i,N}\left(t\right)}{N}.$$ Now, by the law of large numbers for renewal processes, for any constant $c$, $\sup_{t\in\left[0,c\right]}\left|\frac{S^{i}\left(Nt\right)-Nt}{N}\right|\to 0$ in probability. Moreover, for fixed $T$, the random variables $\bar{T}^{i,N}(T)$ are tight. It follows that $$\sup_{t\in[0,T]}\Big{|}\frac{S^{i}\left(N\bar{T}^{i,N}\left(t\right)\right)-N\bar{T}^{i,N}\left(t\right)}{N}\Big{|}\to 0$$ in probability. Consequently, $\|\bar{e}^{N}\|_{T}\to 0$ in probability. This completes the proof of (23), and we conclude that $\bar{\xi}^{N}\to\xi$. The convergence of $\bar{\iota}^{N}$ and $\bar{\beta}^{N}$ to $\iota$ and $\beta$ is shown similarly, by virtue of the Lipschitz property of $\Gamma_{2}$; we omit the details. This shows $(\bar{\xi}^{N},\bar{\beta}^{N},\bar{\iota}^{N})\to(\xi,\beta,\iota)$ in probability, and completes the proof. ∎ 3 Hard EDF networks In §3.1 the fluid model equations and the queueing model are presented, as well as the main results, asserting uniqueness and convergence. In §3.2 auxiliary lemmas are established. The proofs of the main results are presented in §3.3. 3.1 Model and main results 3.1.1 Fluid model The hard version of the policy differs from the soft version in that, when the deadline expires, fluid in the queue reneges. 
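The renewal-process law of large numbers invoked above can be checked numerically. The sketch below is our own illustration (the unit-mean exponential inter-renewal times, the seed, and the evaluation grid are choices made here, not taken from the paper); it simulates a renewal counting process $S$ and evaluates the fluid-scaled error $\sup_{t\in[0,1]}|S(Nt)-Nt|/N$.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# i.i.d. unit-mean inter-renewal ("service") times; partial sums are renewal epochs.
gaps = rng.exponential(1.0, size=2 * N)
epochs = np.cumsum(gaps)

def S(t):
    """Renewal counting process: number of renewals in [0, t]."""
    return np.searchsorted(epochs, t, side="right")

# Fluid-scaled centered process (S(Nt) - Nt)/N on a grid of [0, 1];
# its sup is of order N**(-1/2), consistent with the LLN statement.
grid = np.linspace(0.0, 1.0, 1001)
sup_err = max(abs(S(N * t) - N * t) / N for t in grid)
print(sup_err)
```

For $N=10^{5}$ the printed supremum is a few multiples of $N^{-1/2}\approx 0.003$, illustrating the convergence to $0$.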
Therefore, we distinguish between the reneging process and the departure process: $$\displaystyle\beta^{is}\in\mathbb{D}_{{\cal M}}^{\uparrow}:$$ $$\displaystyle\beta_{t}^{is}\left[0,x\right]\text{ is the mass with deadlines in $\left[0,x\right]$ that has left the $i$-th queue}$$ $$\displaystyle\text{by time $t$ due to service},$$ $$\displaystyle\beta^{ir}\in\mathbb{D}_{{\cal M}}^{\uparrow}:$$ $$\displaystyle\beta_{t}^{ir}\left[0,x\right]\text{ is the mass with deadlines in $\left[0,x\right]$ that has left the $i$-th queue}$$ $$\displaystyle\text{by time $t$ due to reneging}.$$ The deadline is not fixed but is postponed by a constant $\epsilon>0$ whenever mass migrates to a new service station. We introduce $\gamma=\left(\gamma^{i}\right)$ defined by $$\gamma_{t}^{i}\left[0,x\right]=\beta_{t}^{i,s}\left[0,x-\epsilon\right],$$ (25) (recalling the convention $[a,b]=\emptyset$ if $a>b$), which expresses the mass that has left server $i$ by time $t$ with its updated deadline in $\left[0,x\right]$. The processes $\alpha$, $\mu$, $\xi$, $\iota$ are defined as before. We now describe the equations governing the fluid model. 
The fluid that is served and the fluid that reneges from the system sum up to the total fluid that leaves the queue, and the idle effort is the difference between the potential effort and the processed mass, as described by $$\displaystyle\beta_{t}^{i}\left[0,x\right]=\beta_{t}^{i,s}\left[0,x\right]+\beta_{t}^{i,r}\left[0,x\right],$$ (26) $$\displaystyle\iota^{i}\left(t\right)=\mu^{i}\left(t\right)-\beta_{t}^{i,s}\left[0,\infty\right).$$ (27) Reneging does not occur prior to the deadline, as expressed by $$\beta_{t}^{i,r}\left(t,\infty\right)=0.$$ (28) The content of the $i$-th queue is given by $$\displaystyle\xi_{t}^{i}\left[0,x\right]$$ $$\displaystyle=\alpha_{t}^{i}\left[0,x\right]+\sum_{j=1}^{K}P_{ji}\gamma_{t}^{j}\left[0,x\right]-\beta_{t}^{i}\left[0,x\right].$$ (29) The work conservation and EDF conditions are expressed through $$\displaystyle\int\xi_{t}^{i}\left[0,x\right]d\iota^{i}\left(t\right)=0,$$ $$\displaystyle\quad 1\leq i\leq K,$$ (30) $$\displaystyle\int\xi_{t}^{i}\left[0,x\right]d\beta_{t}^{i}\left(x,\infty\right)=0,$$ $$\displaystyle\quad 1\leq i\leq K.$$ (31) To complete the description we define the processes $$\displaystyle\rho^{i}\left(t\right)=\beta_{t}^{i,r}\left[0,t\right]=\beta_{t}^{i,r}\left[0,\infty\right),$$ (32) $$\displaystyle\sigma^{i}\left(t\right)=\inf\text{supp}\left[\xi_{t}^{i}\right].$$ (33) Note that $\beta_{t}^{ir}\left[0,x\right]=\rho^{i}\left(t\land x\right)$. The quantity $\rho^{i}(t)$ expresses the cumulative reneging from buffer $i$. The significance of $\sigma^{i}(t)$, the left edge of the support of $\xi_{t}^{i}$, is that reneging from buffer $i$ occurs only at times $t$ when $\sigma^{i}(t)=t$. Thus $$\displaystyle\xi_{t}^{i}\left[0,t\right)=0,$$ $$\displaystyle\quad 1\leq i\leq K,$$ (34) $$\displaystyle\int\mathbbm{1}_{\left\{\sigma^{i}\left(t\right)>t\right\}}d\rho^{i}\left(t\right)=0,$$ $$\displaystyle\quad 1\leq i\leq K.$$ (35) We refer to (25)–(35) as the fluid model equations. 
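The identity $\beta_{t}^{ir}[0,x]=\rho^{i}(t\land x)$ noted above can be verified directly from (28), (32) and (34); this is a routine check under our reading of those equations, not an additional assumption.

```latex
% Verification of beta_t^{i,r}[0,x] = rho^i(t AND x).
% Case x >= t: by (28), the reneged mass with deadline in (t, x] is zero, so
\[
\beta_{t}^{i,r}[0,x]=\beta_{t}^{i,r}[0,t]=\rho^{i}(t)=\rho^{i}(t\land x).
\]
% Case x < t: by (34), no mass with deadline in [0,x] remains in the queue at any
% time s > x, so none of it can renege during (x, t]; hence
\[
\beta_{t}^{i,r}[0,x]=\beta_{x}^{i,r}[0,x]=\rho^{i}(x)=\rho^{i}(t\land x).
\]
```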
A solution to these equations for data $(\alpha,\mu)$ is a tuple $(\xi,\beta,\beta^{s},\beta^{r},\rho,\iota)$ for which these equations hold. Note that $\beta$ and $\rho$ can be recovered from $\beta^{s}$ and $\beta^{r}$, and vice versa. Therefore, we sometimes refer to a tuple $(\xi,\beta,\rho,\iota)$ or $(\xi,\beta^{s},\beta^{r},\iota)$ as a solution. Assumption 1. 1. $\alpha$ takes the form $\alpha_{t}(B)=\xi_{0-}(B)+\hat{\alpha}_{t}(B)$, where $\xi_{0-}\in{\cal M}_{\sim}^{K}$, and $\hat{\alpha}\in\mathbb{C}_{{\cal M}_{\sim}}^{\uparrow K}$ satisfies for every $1\leq i\leq K$ $$\displaystyle\hat{\alpha}_{t}^{i}(B)=\int_{0}^{t}a^{i}_{s}(B)ds,\quad t\geq 0,\ B\in{\cal B}({\mathbb{R}}_{+}),$$ where for each $t$ and $i$, $a^{i}_{t}$ is a finite measure on $[t,\infty)$, and $t\mapsto a^{i}_{t}(B)$ is measurable. Moreover, $\lim_{\delta\downarrow 0}\sup_{s\in[0,t]}a^{i}_{s}[s,s+\delta]=0$. 2. $\mu$ takes the form $$\displaystyle\mu\left(t\right)=\int_{0}^{t}m\left(s\right)ds,\quad t\geq 0,$$ for a Borel measurable function $m:\left[0,\infty\right)\to\mathbb{R}_{+}^{K}$ satisfying $\min_{i}\inf_{s\in\left[0,t\right]}m^{i}\left(s\right)>0$ for every $t$. Theorem 3. Suppose $\left(\alpha,\mu\right)$ satisfies Assumption 1. Then there exists a unique solution $\left(\xi,\beta,\beta^{s},\beta^{r},\rho,\iota\right)$ in $\mathbb{D}_{{\cal M}}^{K}\times(\mathbb{D}_{{\cal M}}^{\uparrow K})^{3}\times(\mathbb{D}_{+}^{\uparrow K})^{2}$ to the fluid model equations. Moreover, $\beta^{s}$ satisfies Assumption 1.1. We prove this result in §3.3. 3.1.2 Queueing model, scaling Many ingredients of the model are the same as in §2, and we will not repeat them. The processes $\alpha_{t}^{i,N}$, $\mu^{i,N}\left(t\right)$, $\xi_{t}^{i,N}$, $\iota^{i,N}(t)$ have the same meaning as in the soft EDF model. 
$\beta_{t}^{i,s,N}\left[0,x\right]$ is the number of jobs with deadlines in $\left[0,x\right]$ that have left the $i$-th queue by time $t$ due to service, and $\beta_{t}^{i,r,N}\left[0,x\right]$ is the number of jobs with deadlines in $\left[0,x\right]$ that have reneged from the $i$-th queue by time $t$ upon expiration of their deadlines. Also, the cumulative reneging count is denoted by $\rho^{i,N}\left(t\right)$ and satisfies $\rho^{i,N}\left(t\right)=\beta_{t}^{i,r,N}\left[0,\infty\right)$. As already mentioned, the deadline of a job is postponed by $\varepsilon$ every time it migrates from one station to another. It is possible that the deadline of a job at a station expires while the job is in service at that station; in this case, it is assumed that the job does not renege (in agreement with the model from [4]). A more complicated scenario is when the work associated with this job is so large that it is still incomplete $\varepsilon$ units of time after this deadline expires, at which time the deadline at the next station also expires. We prefer not to deal with this possibility because it complicates the notation. Because $\varepsilon$ is fixed whereas the service times are downscaled, such an event becomes less and less probable as $N$ increases. Our assumption, which allows us to avoid this scenario altogether, is that the service time distributions have bounded supports. This ensures that for $N$ large enough the scenario does not occur, and the description of the model equations is given for such sufficiently large $N$. Next, the processes $S^{i}(t)$, $D^{i,N}(t)$, $\gamma_{t}^{ij,N}$, $B_{t}^{i,N}$ and $T^{i,N}(t)$, as well as the relations between them, are as in the soft EDF model. Note that the assumption regarding service time distribution supports is really an assumption about the processes $S^{i}$. We turn to a mathematical description of the relations between the processes. 
We have $$\displaystyle T^{i,N}\left(t\right)=\int_{0}^{t}B_{s}^{i,N}\left[0,\infty\right)d\mu^{i,N}\left(s\right),$$ $$\displaystyle\iota^{i,N}\left(t\right)=\mu^{i,N}\left(t\right)-T^{i,N}\left(t\right).$$ $$\displaystyle\beta_{t}^{i,N}\left[0,x\right]=\beta_{t}^{i,s,N}\left[0,x\right]+\beta_{t}^{i,r,N}\left[0,x\right],$$ (36) $$\displaystyle D^{i,N}\left(t\right)=\gamma^{i,N}_{t}\left[0,\infty\right)=S^{i}\left(T^{i,N}\left(t\right)\right)-1,$$ (37) $$\displaystyle\gamma_{t}^{i,N}\left[0,x\right]=\beta_{t}^{i,s,N}\left[0,x-\epsilon\right]-B_{t}^{i,N}\left[0,x-\epsilon\right]+B_{0-}^{i,N}\left[0,x-\epsilon\right].$$ (38) $$\displaystyle\xi_{t}^{i,N}\left[0,x\right]$$ $$\displaystyle=\alpha_{t}^{i,N}\left[0,x\right]+\sum_{j=1}^{K}\gamma_{t}^{ji,N}\left[0,x\right]-\beta_{t}^{i,N}\left[0,x\right].$$ (39) The work conservation condition, the EDF policy, the hard EDF condition and the latest reneging condition are expressed through $$\displaystyle\int\xi_{t}^{i,N}\left[0,x\right]d\iota^{i,N}\left(t\right)=0,$$ $$\displaystyle\quad 1\leq i\leq K,$$ (40) $$\displaystyle\int\xi_{t}^{i,N}\left[0,x\right]d\beta_{t}^{i,N}\left(x,\infty\right)=0,$$ $$\displaystyle\quad 1\leq i\leq K,$$ (41) $$\displaystyle\xi_{t}^{i,N}\left[0,t\right]=0,$$ $$\displaystyle\quad 1\leq i\leq K,$$ (42) $$\displaystyle\int\mathbbm{1}_{\{\sigma^{i,N}\left(t\right)>t\}}d\rho^{i,N}\left(t\right)=0,$$ $$\displaystyle\quad 1\leq i\leq K,$$ (43) where $\sigma^{i,N}(t)=\inf\text{supp}\,\xi_{t}^{i,N}$. Finally, the assumptions on $\pi^{i,N}\left(n\right)$, the definition of $\theta^{ij,N}\left(n\right)$, $\tau_{n}^{i,N}$, $\hat{\theta}^{ij,N}\left(t\right)$ and the relations between these processes are as in the soft EDF model. 
Let us define the error processes as $$\displaystyle e^{i,N}\left(t\right)=\beta_{t}^{i,s,N}\left[0,\infty\right)+\iota^{i,N}\left(t\right)-\mu^{i,N}\left(t\right),\quad 1\leq i\leq K,$$ $$\displaystyle E^{ij,N}\left(t,x\right)=\gamma_{t}^{ij,N}\left[0,x\right]-P_{ij}\gamma_{t}^{i,N}\left[0,x\right],\quad 1\leq i,j\leq K,\ x\in\mathbb{R}_{+}.$$ (44) Theorem 4. Assume $(\bar{\alpha}^{N},\bar{\mu}^{N})\Rightarrow(\alpha,\mu)$, where $(\alpha,\mu)$ satisfies Assumption 1. Then $$\displaystyle\left(\bar{\xi}^{N},\bar{\beta}^{N},\bar{\beta}^{s,N},\bar{\beta}^{r,N},\bar{\iota}^{N},\bar{\rho}^{N},\bar{e}^{N},\bar{E}^{N}\right)\Rightarrow\left(\xi,\beta,\beta^{s},\beta^{r},\iota,\rho,0,0\right),$$ (45) where $\left(\xi,\beta,\beta^{s},\beta^{r},\iota,\rho\right)$ is the unique solution of the fluid model equations corresponding to $(\alpha,\mu)$. 3.2 Auxiliary lemmas When setting $K=1$ and $P=0$, the model coincides with the single server model described in [4], where uniqueness of the solution of the fluid model equations and convergence have been established. Lemma 2. Let $K=1$ and $P=0$. Suppose $\left(\alpha,\mu\right)$ satisfies Assumption 1. Then there exists a unique solution $(\xi,\beta,\iota,\rho)$ to the fluid model equations, and the solution lies in $\mathbb{C}_{{\cal M}_{\sim}}\times\mathbb{C}_{{\cal M}_{\sim}}^{\uparrow}\times\mathbb{C}^{\uparrow}\times\mathbb{C}^{\uparrow}$. Assume, moreover, that $(\bar{\alpha}^{N},\bar{\mu}^{N})\Rightarrow(\alpha,\mu)$. Then $\left(\bar{\xi}^{N},\bar{\beta}^{N},\bar{\iota}^{N},\bar{\rho}^{N}\right)\Rightarrow\left(\xi,\beta,\iota,\rho\right)$. Proof. This is the content of Theorems 4.10 and 5.4 in [4]. ∎ Lemma 3. Suppose $\left(\alpha,\mu\right)$ satisfies Assumption 1 and fix $\tau>0$. 
Let $\alpha^{\circ}\in\mathbb{D}_{\cal M}^{\uparrow}$ be defined by $\alpha_{t}^{\circ}\left(A\right)=\alpha_{t}\left(A\cap\left[0,\tau\right]\right)$ for all $A\in{\cal B}({\mathbb{R}}_{+})$ and $t\in{\mathbb{R}}_{+}$. Let $\left(\xi^{\circ},\beta^{\circ},\iota^{\circ},\rho^{\circ}\right)$ and $\left(\xi,\beta,\iota,\rho\right)$ be the unique solutions of the fluid model equations (for $K=1$) corresponding to $\left(\alpha^{\circ},\mu\right)$ and $\left(\alpha,\mu\right)$, respectively. Then for all $x\in[0,\tau]$, $t\in[0,\tau]$ one has $\rho\left(t\right)=\rho^{\circ}\left(t\right)$, $\beta_{t}^{r}\left[0,x\right]=\beta_{t}^{\circ,r}\left[0,x\right]$, $\beta_{t}^{s}\left[0,x\right]=\beta_{t}^{\circ,s}\left[0,x\right]$, and $\xi_{t}\left[0,x\right]=\xi_{t}^{\circ}\left[0,x\right]$. Proof. Recall from the proof of Theorem 1 the notation $\Gamma=(\Gamma_{1},\Gamma_{2})$ for the SM in the finite dimensional orthant. We denote by $\Gamma^{(1)}=(\Gamma_{1}^{(1)},\Gamma_{2}^{(1)})$ the special case where the dimension is 1; that is, $\Gamma^{(1)}$ is merely the SM on the half line. The proof crucially uses the non-anticipation property of this SM [11, p.165], stating that if $\varphi_{i}=\Gamma^{(1)}(\psi_{i})$ for $i=1,2$, and for some $T>0$, $\psi_{1}=\psi_{2}$ on $[0,T]$, then also $\varphi_{1}=\varphi_{2}$ on $[0,T]$. Next, recall the notation $\Theta$ for the VMVSM; in the special case $K=1$ we denote this map by $\Theta^{(1)}$. In this proof, $t$ is always assumed to be in $\left[0,\tau\right]$. 
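For intuition, the first component of the one-dimensional SM admits the explicit formula $\Gamma_{1}^{(1)}(\psi)(t)=\psi(t)+\sup_{s\leq t}(-\psi(s))^{+}$, and non-anticipation is visible directly: the correction at time $t$ depends only on the path up to $t$. A minimal numerical sketch follows (our own illustration with discretized paths; the particular test paths are chosen here, not taken from the paper):

```python
import numpy as np

def skorokhod_1d(psi):
    """One-dimensional Skorokhod (reflection) map on a discretized path.

    Returns (phi, eta) with phi = psi + eta >= 0, where
    eta(t) = sup_{s <= t} max(-psi(s), 0) is the minimal pushing term.
    """
    eta = np.maximum.accumulate(np.maximum(-psi, 0.0))
    return psi + eta, eta

# Two "netput" paths that agree on [0, 1] and differ afterwards.
t = np.linspace(0.0, 2.0, 401)
psi1 = np.sin(3 * t) - 0.5 * t
psi2 = psi1.copy()
psi2[t > 1.0] += 1.0  # perturb only after time T = 1

phi1, _ = skorokhod_1d(psi1)
phi2, _ = skorokhod_1d(psi2)

# Non-anticipation: the reflected paths coincide up to time T = 1.
print(np.allclose(phi1[t <= 1.0], phi2[t <= 1.0]))  # True
```

The running-maximum form of the pushing term makes the non-anticipation property immediate: `eta` at index $k$ is a function of `psi[:k+1]` only.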
From Lemma 2, using equations (26)–(33) to reach conditions 1–4 in Definition 1 with the data $\left(\alpha,\mu+\rho\right)$, it follows that $\left(\xi^{\circ},\beta^{\circ},\iota^{\circ},\rho^{\circ}\right)$ and $\left(\xi,\beta,\iota,\rho\right)$ are respectively the unique solutions of the following two problems (P0) and (P1): $$\displaystyle\begin{cases}(i)&\left(\xi^{\circ},\beta^{\circ},\iota^{\circ}\right)=\Theta^{(1)}\left(\alpha^{\circ},\mu+\rho^{\circ}\right),\\ (ii)&\xi^{\circ}_{t}\left[0,t\right)=0,\\ (iii)&\int\mathbbm{1}_{\{\sigma^{\circ}\left(t\right)>t\}}d\rho^{\circ}\left(t\right)=0,\quad\sigma^{\circ}\left(t\right)=\inf\text{supp}\left[\xi^{\circ}_{t}\right].\end{cases}$$ (P0) $$\displaystyle\begin{cases}(i)&\left(\xi,\beta,\iota\right)=\Theta^{(1)}\left(\alpha,\mu+\rho\right),\\ (ii)&\xi_{t}\left[0,t\right)=0,\\ (iii)&\int\mathbbm{1}_{\{\sigma\left(t\right)>t\}}d\rho\left(t\right)=0,\quad\sigma\left(t\right)=\inf\text{supp}\left[\xi_{t}\right].\end{cases}$$ (P1) Consider now the data $\left(\alpha,\mu+\rho^{\circ}\right)\in\mathbb{D}_{\cal M}^{\uparrow}\times\mathbb{D}^{\uparrow}$, and denote the associated unique solution $\left(\hat{\xi},\hat{\beta},\hat{\iota}\right)=\Theta^{(1)}\left(\alpha,\mu+\rho^{\circ}\right)$, and $\hat{\sigma}\left(t\right)=\inf\text{supp}\left[\hat{\xi}_{t}\right]$. We will show that the tuple $\left(\hat{\xi},\hat{\beta},\hat{\iota},\rho^{\circ}\right)$ satisfies $(i)$–$(iii)$ in (P1); uniqueness then implies $\left(\xi,\beta,\iota,\rho\right)=\left(\hat{\xi},\hat{\beta},\hat{\iota},\rho^{\circ}\right)$. The tuple $\left(\hat{\xi},\hat{\beta},\hat{\iota},\rho^{\circ}\right)$ satisfies condition (P1.i) by the definition of $\left(\hat{\xi},\hat{\beta},\hat{\iota}\right)$. 
To show that $\left(\hat{\xi},\hat{\beta},\hat{\iota},\rho^{\circ}\right)$ satisfies (P1.ii), use [4, Lemma 2.7] for $x\leq\tau$: $$\displaystyle\hat{\xi}\left[0,x\right]=\Gamma^{(1)}_{1}\left(\alpha\left[0,x\right]-\mu-\rho^{\circ}\right)=\Gamma^{(1)}_{1}\left(\alpha^{\circ}\left[0,x\right]-\mu-\rho^{\circ}\right)=\xi^{\circ}\left[0,x\right].$$ This immediately implies $\hat{\xi}_{t}\left[0,t\right)=\xi_{t}^{\circ}\left[0,t\right)=0$ by (P0.ii). It remains to show that $\int\mathbbm{1}_{\{\hat{\sigma}\left(t\right)>t\}}d\rho^{\circ}\left(t\right)=0$. This will follow from (P0.iii) once we show that $$\displaystyle\{t\leq\tau:\ \sigma^{\circ}\left(t\right)=t\}=\{t\leq\tau:\ \hat{\sigma}\left(t\right)=t\}.$$ We already showed that $\hat{\xi}_{t}\left[0,x\right]=\xi_{t}^{\circ}\left[0,x\right]$ for $x$ and $t$ in $\left[0,\tau\right]$. Consider any $t^{\prime}\in\{t\leq\tau:\sigma^{\circ}\left(t\right)=t\}$. Then $t^{\prime}=\inf\text{supp}\left[\xi^{\circ}_{t^{\prime}}\right]=\inf\text{supp}\left[\hat{\xi}_{t^{\prime}}\right]$. So $t^{\prime}$ is in $\{t\leq\tau:\hat{\sigma}\left(t\right)=t\}$ and therefore $$\displaystyle\{t\leq\tau:\ \sigma^{\circ}\left(t\right)=t\}\subseteq\{t\leq\tau:\ \hat{\sigma}\left(t\right)=t\}.$$ The other direction is proved similarly. To conclude, when we consider only times in the interval $\left[0,\tau\right]$, the tuple $\left(\hat{\xi},\hat{\beta},\hat{\iota},\rho^{\circ}\right)$ satisfies all the conditions in (P1). So, by uniqueness, $\rho\left(t\right)=\rho^{\circ}\left(t\right)$ for $t\leq\tau$. This implies all the other equalities: $\beta_{t}^{r}\left[0,x\right]=\rho\left(t\land x\right)=\rho^{\circ}\left(t\land x\right)=\beta_{t}^{\circ,r}\left[0,x\right]$ for $x,t\in\left[0,\tau\right]$. 
Also, for $x,t\in\left[0,\tau\right]$, by the non-anticipation property, $$\displaystyle\xi_{t}\left[0,x\right]=\Gamma^{(1)}_{1}\left(\alpha\left[0,x\right]-\mu-\rho\right)\left(t\right)=\Gamma^{(1)}_{1}\left(\alpha^{\circ}\left[0,x\right]-\mu-\rho^{\circ}\right)\left(t\right)=\xi_{t}^{\circ}\left[0,x\right].$$ Finally, for $x,t\in\left[0,\tau\right]$: $\beta_{t}\left[0,x\right]=\alpha_{t}\left[0,x\right]-\xi_{t}\left[0,x\right]=\alpha_{t}^{\circ}\left[0,x\right]-\xi_{t}^{\circ}\left[0,x\right]=\beta_{t}^{\circ}\left[0,x\right]$, and $\beta_{t}^{s}\left[0,x\right]=\beta_{t}\left[0,x\right]-\beta_{t}^{r}\left[0,x\right]=\beta_{t}^{\circ}\left[0,x\right]-\beta_{t}^{\circ,r}\left[0,x\right]=\beta_{t}^{\circ,s}\left[0,x\right]$. ∎ The next lemma states that the fluid model equations for $K=1$ ‘preserve’ Assumption 1. Lemma 4. Let $K=1$ and $P=0$ and suppose $\left(\alpha,\mu\right)$ satisfies Assumption 1. Let $\left(\xi,\beta,\iota,\rho\right)$ be the unique solution of the fluid model equations. Let $\gamma_{t}$ be defined via the relation $\gamma_{t}\left[0,x\right]=\beta_{t}^{s}\left[0,x-\epsilon\right]$. Then $\gamma$ also satisfies Assumption 1.1 with zero initial condition, i.e., $\gamma\in\mathbb{C}_{{\cal M}_{\sim}}^{\uparrow}$ and $$\displaystyle\gamma_{t}(B)=\int_{0}^{t}g_{s}(B)ds,\quad t\geq 0,\ B\in{\cal B}({\mathbb{R}}_{+}),$$ (46) where for each $t$, $g_{t}$ is a finite measure on ${\mathbb{R}}_{+}$ with $g_{t}[0,t)=0$, and the mapping $t\mapsto g_{t}(B)$ is measurable. Moreover, $\lim_{\delta\downarrow 0}\sup_{s\in[0,t]}g_{s}[s,s+\delta]=0$. Proof. The proof is based mainly on the following two facts, $$\beta^{s}_{t}(B)\leq\alpha_{t}(B),\qquad B\in{\cal B}({\mathbb{R}}_{+}),$$ (47) $$\beta^{s}_{t}({\mathbb{R}}_{+})-\beta^{s}_{\tau}({\mathbb{R}}_{+})\leq\int_{\tau}^{t}m(s)ds,\qquad 0\leq\tau<t,$$ (48) and uses disintegration. 
By the fluid model equations for $K=1$, using (29) (recalling $P=0$) and (26), for a Borel set $B$ we have $\alpha_{t}\left(B\right)=\xi_{t}\left(B\right)+\beta_{t}\left(B\right)$ as well as $\beta_{t}(B)=\beta^{s}_{t}(B)+\beta^{r}_{t}(B)$, proving (47). Next, by (27), the monotonicity of $\iota$ and Assumption 1.2, (48) holds. We first show that $\gamma$ belongs to $\mathbb{C}_{{\cal M}_{\sim}}^{\uparrow}$. From the axioms of our model, $\beta^{s}\in\mathbb{D}_{{\cal M}}^{\uparrow}$. By (47) and the assumption $\alpha_{t}\in{\cal M}_{\sim}$ it follows that $\beta^{s}_{t}\in{\cal M}_{\sim}$ for every $t$. The continuity of $t\mapsto\beta^{s}_{t}$ follows from (48). This shows that $\beta^{s}$, and in turn $\gamma$, belongs to $\mathbb{C}_{{\cal M}_{\sim}}^{\uparrow}$. Consider now the space $\mathbb{R}_{+}^{2}$ with its Borel $\sigma$-algebra. Let $\lambda$ be the measure on this space determined by $\gamma$ via the relation $$\lambda\left(\left[x,y\right]\times\left[s,t\right]\right)=\gamma_{t}\left[x,y\right]-\gamma_{s}\left[x,y\right],\qquad x<y,\ s<t.$$ By (48), the measure $\lambda({\mathbb{R}}_{+}\times dt)$ is dominated by the measure $m(t)dt$. Let $\mathbb{T}:\left(\mathbb{R}_{+}^{2},{\cal B}({\mathbb{R}}_{+}^{2})\right)\to\left({\mathbb{R}}_{+},{\cal B}\left({\mathbb{R}}_{+}\right)\right)$ be defined by $\mathbb{T}(x,t)=t$. We use the disintegration theorem [10, Theorem 1], with $\mathbb{T}$ as the measurable map. According to this theorem there exists a family of finite measures $\tilde{g}_{t}$ on ${\mathbb{R}}_{+}$ such that $t\mapsto\tilde{g}_{t}(B)$ is measurable for every $B\in{\cal B}({\mathbb{R}}_{+})$, and for each nonnegative measurable $f$, $$\int f(x,t)\lambda(dx,dt)=\int f(x,t)\tilde{g}_{t}(dx)m(t)dt.$$ (49) Denote $\hat{g}_{t}(B)=\tilde{g}_{t}(B)m(t)$. It will be shown that $$\hat{g}_{t}[0,t+\varepsilon)=0\quad\text{ for a.e. }t.$$ (50) Once the above is established, we can set $g_{t}(B)=\hat{g}_{t}(B\cap[t+\varepsilon,\infty))$, $B\in{\cal B}({\mathbb{R}}_{+})$, by which we achieve $g_{t}(B)=\hat{g}_{t}(B)$ for a.e. $t$. Hence in view of (49) one has (46). As for the two final assertions made in the lemma, we certainly have $g_{t}[0,t)\leq g_{t}[0,t+\varepsilon)=0$, and for all small $\delta$, $\sup_{s}g_{s}[s,s+\delta]=0$, by which these assertions are true. We thus turn to showing (50). Fix $x\leq t\leq t_{0}$. From the hard EDF equations and the assumption that jobs arrive before their deadlines: $$\displaystyle 0=\xi_{t}\left[0,x\right]=\alpha_{t}\left[0,x\right]-\beta_{t}\left[0,x\right],$$ $$\displaystyle\alpha_{t_{0}}\left[0,x\right]-\alpha_{t}\left[0,x\right]=0,$$ $$\displaystyle\beta_{t_{0}}^{r}\left[0,x\right]-\beta_{t}^{r}\left[0,x\right]=\rho\left(x\land t_{0}\right)-\rho\left(x\land t\right)=\rho\left(x\right)-\rho\left(x\right)=0,$$ (51) $$\displaystyle\Rightarrow\beta_{t_{0}}^{s}\left[0,x\right]-\beta_{t}^{s}\left[0,x\right]=\alpha_{t_{0}}\left[0,x\right]-\alpha_{t}\left[0,x\right]-\beta_{t_{0}}^{r}\left[0,x\right]+\beta_{t}^{r}\left[0,x\right]=0.$$ This, together with (25), which expresses the fact that deadlines are postponed by $\varepsilon$ after service, implies that every rectangular subset $\left[0,t+\varepsilon\right]\times\left[t,t_{0}\right]$ of the set $\{\left(x,t\right)\in\mathbb{R}_{+}^{2}:x<t+\varepsilon\}$ satisfies $$\displaystyle\lambda\left(\left[0,t+\varepsilon\right]\times\left[t,t_{0}\right]\right)$$ $$\displaystyle=\gamma_{t_{0}}\left[0,t+\varepsilon\right]-\gamma_{t}\left[0,t+\varepsilon\right]$$ (52) $$\displaystyle=\beta_{t_{0}}^{s}\left[0,t\right]-\beta_{t}^{s}\left[0,t\right]$$ $$\displaystyle=0.$$ And, since $\{\left(x,t\right)\in\mathbb{R}_{+}^{2}:x<t+\varepsilon\}$ is contained in a countable union of rectangles of this form, $$\displaystyle 0=\lambda\left(\{\left(x,t\right)\in\mathbb{R}_{+}^{2}:x<t+\varepsilon\}\right)=\int_{0}^{\infty}\hat{g}_{s}\left[0,s+\varepsilon\right)ds,$$ and (50) follows. ∎ Remark. Note that for any finite collection $\{\gamma^{i}\}_{i=1}^{K}$ and a set of positive coefficients $\{P_{i}\}_{i=1}^{K}$, if each element of the collection satisfies Assumption 1, then so does $\sum_{i=1}^{K}P_{i}\gamma^{i}$. Lemma 4 considers a single server, but we will use this conclusion about the sum when we show that the total arrival process, the sum of the exogenous and the endogenous arrival processes, satisfies the assumptions. 3.3 Proof of main results This section opens with the proof of Theorem 3, regarding existence and uniqueness of solutions to the fluid model equations. Proof of Theorem 3. Denote $$\displaystyle\Xi=\left(\xi,\beta,\beta^{s},\beta^{r},\gamma,\iota,\rho\right),$$ $$\displaystyle\mathbb{X}=\mathbb{D}_{{\cal M}}^{K}\times\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}^{\uparrow K}\times\mathbb{D}^{\uparrow K},$$ $$\displaystyle{\cal X}=\left\{\Xi\in\mathbb{X}:\quad\text{The components of $\Xi$ satisfy (25)–(28), (30)–(35)}\right\}.$$ Recall that the fluid model equations are (25)–(35). Equation (29), the only one excluded from the definition of ${\cal X}$, is the equation that couples the different servers. Indeed, the remaining equations are fluid model equations for $K$ separate single server systems. This exclusion makes it convenient to use the single server results in our proof. The existence proof, provided first, will be complete once we construct a tuple $\Xi\in{\cal X}$ that satisfies (29). It is then shown that this tuple is unique. Both existence and uniqueness are proved by induction. 
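For completeness, one choice of the countable family of rectangles covering $\{(x,t):x<t+\varepsilon\}$, a routine detail not spelled out in the text, is the following.

```latex
% A countable cover of {(x,t) in R_+^2 : x < t + eps} by rectangles of the form
% [0, q + eps] x [q, m], each of lambda-measure zero by (52): given (x,t) with
% x < t + eps, pick a rational q in [0, t] with q > x - eps (possible since
% x - eps < t, taking q = 0 when t = 0) and an integer m >= t; then x <= q + eps
% and t is in [q, m], so
\[
\{(x,t)\in\mathbb{R}_{+}^{2}:x<t+\varepsilon\}
\;\subseteq\;\bigcup_{q\in\mathbb{Q}\cap\mathbb{R}_{+}}\bigcup_{m\in\mathbb{N}}
\left[0,q+\varepsilon\right]\times\left[q,m\right].
\]
```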
First consider, for each $i$, the fluid model solution of a single server with primitives $\left(\alpha^{i}\left(\cdot\cap\left[0,\varepsilon\right)\right),\mu^{i}\right)$, denoted $\left(\xi^{i,(1)},\beta^{i,(1)},\iota^{i,(1)},\rho^{i,(1)}\right)$. Denote also $\gamma_{t}^{i,(1)}\left[0,x\right]=\beta_{t}^{is,(1)}\left[0,x-\varepsilon\right]$. Then, for $n>1$, denote the fluid model solution of a single server with primitives $\left(\alpha^{i}\left(\cdot\cap\left[0,n\varepsilon\right)\right)+\sum_{j=1}^{K}P_{ji}\gamma^{j,(n-1)}\left(\cdot\cap\left[0,n\varepsilon\right)\right),\mu^{i}\right)$ by $\left(\xi^{i,(n)},\beta^{i,(n)},\iota^{i,(n)},\rho^{i,(n)}\right)$ and $\gamma_{t}^{i,(n)}\left[0,x\right]=\beta_{t}^{is,(n)}\left[0,x-\varepsilon\right]$. This inductively defines the tuples $\Xi^{(n)}$. Note that these tuples belong to ${\cal X}$ as solutions to the fluid model of hard EDF servers. Note also that all $\beta^{is,(n)}$ satisfy Assumption 1. We now show that these tuples are consistent with each other in the sense that for $\left(x,t\right)\in\left[0,n\varepsilon\right)^{2}$ and $m>n$, we have $\beta_{t}^{is,(n)}\left[0,x\right]=\beta_{t}^{is,(m)}\left[0,x\right]$, $\rho^{i,(n)}\left(t\right)=\rho^{i,(m)}\left(t\right)$ and $\xi_{t}^{i,(n)}\left[0,x\right]=\xi_{t}^{i,(m)}\left[0,x\right]$. It is proved by induction over $n$ that these identities hold for all $m>n$. If $\left(x,t\right)\in\left[0,\varepsilon\right)^{2}$, then $\gamma_{t}^{j,(m)}\left(\left[0,x\right]\cap\left[0,m\varepsilon\right)\right)=\gamma_{t}^{j,(m)}\left[0,x\right]=0$ and $\alpha_{t}^{i}\left(\left[0,x\right]\cap\left[0,m\varepsilon\right)\right)=\alpha_{t}^{i}\left(\left[0,x\right]\cap\left[0,\varepsilon\right)\right)$, so the data generating $\rho^{(1)}$, $\rho^{(m)}$, $\xi^{(1)}$, $\xi^{(m)}$, $\beta^{s,(1)}$ and $\beta^{s,(m)}$ coincide on $\left[0,\varepsilon\right)^{2}$ and Lemma 3 yields the desired conclusion for $n=1$. 
Assuming the claim is true for $n$, consider $m\geq n+1$ and $\left(x,t\right)\in\left[0,\left(n+1\right)\varepsilon\right)^{2}$. Then $$\displaystyle\alpha^{i}_{t}\left(\left[0,x\right]\cap\left[0,\left(n+1\right)\varepsilon\right)\right)=\alpha^{i}_{t}\left(\left[0,x\right]\cap\left[0,m\varepsilon\right)\right)=\alpha^{i}_{t}\left[0,x\right].$$ If $t<n\varepsilon$, then the induction assumption implies $$\displaystyle\gamma_{t}^{j,(n)}\left(\left[0,x\right]\cap\left[0,\left(n+1\right)\varepsilon\right)\right)=\beta_{t}^{js,(n)}\left[0,x-\varepsilon\right]=\beta_{t}^{js,(m)}\left[0,x-\varepsilon\right]=\gamma_{t}^{j,(m)}\left(\left[0,x\right]\cap\left[0,\left(m+1\right)\varepsilon\right)\right).$$ If $t\in\left[n\varepsilon,\left(n+1\right)\varepsilon\right)$, by (51), we have that whenever $t>x$: $\beta_{t}^{i,s}\left[0,x\right]=\beta_{x}^{i,s}\left[0,x\right]$. It follows that if $t\in\left[n\varepsilon,\left(n+1\right)\varepsilon\right)$, $$\displaystyle\gamma_{t}^{j,(n)}\left(\left[0,x\right]\cap\left[0,\left(n+1\right)\varepsilon\right)\right)=\beta_{t}^{js,(n)}\left[0,x-\varepsilon\right]=\beta_{\left[x-\varepsilon\right]^{+}}^{js,(n)}\left[0,x-\varepsilon\right]\\ \displaystyle=\beta_{\left[x-\varepsilon\right]^{+}}^{js,(m)}\left[0,x-\varepsilon\right]=\gamma_{t}^{j,(m)}\left[0,x\right]=\gamma_{t}^{j,(m)}\left(\left[0,x\right]\cap\left[0,\left(m+1\right)\varepsilon\right)\right).$$ So, the data generating $\rho^{(n)}$, $\rho^{(m)}$, $\xi^{(n)}$, $\xi^{(m)}$, $\beta^{s,(n)}$ and $\beta^{s,(m)}$ coincide on $\left[0,\left(n+1\right)\varepsilon\right)^{2}$ and Lemma 3 yields the desired conclusion for $n+1$. The above shows that, for example, $\beta_{t}^{i,(n)}\left[0,x\right]$ becomes constant as $n$ increases. Hence we may extract from the sequence a tuple in the following way. 
Given $x$ and $t$, let $n>\left(x\lor t\right)/\varepsilon$ and let $\beta_{t}^{i}\left[0,x\right]=\beta_{t}^{i,(n)}\left[0,x\right]$ and $\xi_{t}^{i}\left[0,x\right]=\xi_{t}^{i,(n)}\left[0,x\right]$. For every fixed $t$, this uniquely defines measures by determining their values on all sets $[0,x]$, a collection generating the $\sigma$-algebra ${\cal B}({\mathbb{R}}_{+})$. Similarly, for each $t$, $\rho^{(n)}_{t}$ becomes constant as $n$ gets large, hence we let $\rho=\lim_{n}\rho^{(n)}$. We also define $\beta_{t}^{i,r}\left[0,x\right]=\rho^{i}\left(x\land t\right)$, $\beta^{i,s}=\beta^{i}-\beta^{i,r}$, $\sigma^{i}\left(t\right)=\inf{\rm supp}\left[\xi_{t}^{i}\right]$, $\iota^{i}=\mu^{i}-\beta^{i,s}\left[0,\infty\right)$ and $\gamma_{t}^{i}\left[0,x\right]=\beta_{t}^{i,s}\left[0,x-\varepsilon\right]$. The tuple $\left(\xi,\beta,\iota,\rho\right)$ thus constructed is our candidate, and we now show that it is indeed a solution to the fluid model equations. First, by construction, $\Xi$ satisfies (29). Equations (25), (26), (27), (32) and (33) hold by definition, and (34) is immediate by construction. (28) is also immediate, by $\beta_{t}^{i,r}\left(t,\infty\right)=\rho^{i}\left(t\right)-\rho^{i}\left(t\land t\right)=0$. Equations (30) and (31) are satisfied because, as will soon be shown, $\psi_{x}^{i,(n)}:=\beta^{is,(n)}\left(x,\infty\right)+\iota^{i,(n)}-\beta^{is}\left(x,\infty\right)-\iota^{i}\in\mathbb{D}^{\uparrow}$ and $\xi_{t}^{i,(n)}\left[0,x\right]=\xi_{t}^{i}\left[0,x\right]$ for large enough $n$. Recall also that $\Xi^{(n)}\in{\cal X}$, so that (30) and (31) hold for each $\Xi^{(n)}$. 
Therefore, $$\displaystyle\int\xi_{s}^{i}\left[0,x\right]d\beta_{s}^{i,s}\left(x,\infty\right)+\int\xi_{s}^{i}\left[0,x\right]d\iota^{i}\left(s\right)$$ $$\displaystyle=\lim_{n\to\infty}\left|\int\xi_{s}^{i,(n)}\left[0,x\right]d\beta_{s}^{is,(n)}\left(x,\infty\right)+\int\xi_{s}^{i,(n)}\left[0,x\right]d\iota^{i,(n)}\left(s\right)-\int\xi_{s}^{i}\left[0,x\right]d\beta_{s}^{is}\left(x,\infty\right)-\int\xi_{s}^{i}\left[0,x\right]d\iota^{i}\left(s\right)\right|$$ $$\displaystyle=\lim_{n\to\infty}\left|\int\xi_{s}^{i}\left[0,x\right]d\psi_{x}^{i,(n)}\left(s\right)\right|$$ $$\displaystyle\leq\left\lVert{\xi^{i}\left[0,x\right]}\right\rVert_{T}\lim_{n\to\infty}\left(\beta_{T}^{is,(n)}\left(x,\infty\right)+\iota^{i,(n)}\left(T\right)-\beta_{T}^{is}\left(x,\infty\right)-\iota^{i}\left(T\right)\right)=0.$$ (53) For the equality in (53), note that $\beta_{T}^{i,s}\left[0,\infty\right)=\lim_{m}\lim_{n}\beta_{T}^{is,(n)}\left[0,m\right]$ while $\beta_{T}^{is,(n)}\left[0,m\right]$ is monotone in both $n$ and $m$, hence interchanging the limits is justified. Thus we have $\beta_{T}^{is,(n)}\left[0,\infty\right)\to\beta_{T}^{i,s}\left[0,\infty\right)$ and $\iota^{i,(n)}\left(T\right)\to\iota^{i}\left(T\right)$, and (53) follows. Lemma 2.2 in [4] implies $\psi_{x}^{i,(n)}\in\mathbb{D}^{\uparrow}$ and the monotonicity in $n$ of $\beta_{t}^{is,(n)}\left[0,x\right]$, provided one shows that for all $n$ $$\displaystyle\alpha^{i}\left[x\land n\varepsilon,x\land\left(n+1\right)\varepsilon\right)+\sum_{j=1}^{K}P_{ji}\left(\gamma^{j,(n)}\left[0,x\land\left(n+1\right)\varepsilon\right)-\gamma^{j,(n-1)}\left[0,x\land n\varepsilon\right)\right)\in\mathbb{D}^{\uparrow}.$$ The first term is in $\mathbb{D}^{\uparrow}$ by assumption, and the second term is now shown to belong to $\mathbb{D}^{\uparrow}$ by induction, where each step invokes Lemma 2.2 from [4]. 
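The limit interchange invoked for (53) is an instance of the following elementary fact, stated here for the reader's convenience (it is standard and not specific to this paper).

```latex
% If (a_{n,m}) is nondecreasing in n and in m, then both iterated limits exist
% in [0, infinity] and
\[
\lim_{m\to\infty}\lim_{n\to\infty}a_{n,m}
=\sup_{n,m}a_{n,m}
=\lim_{n\to\infty}\lim_{m\to\infty}a_{n,m},
\]
% since each iterated limit equals the supremum over the product index set.
% Applied with a_{n,m} = beta_T^{is,(n)}[0,m], this justifies the interchange.
```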
Indeed, the lemma yields $$\displaystyle\beta^{is,(2)}\left[0,x\land 2\varepsilon\right)-\beta^{is,(1)}% \left[0,x\land\varepsilon\right)=\mu^{i}-\iota^{i,(2)}-\beta^{is,(2)}\left[x% \land 2\varepsilon,\infty\right)-\mu^{i}+\iota^{i,(1)}+\beta^{is,(1)}\left[x% \land\varepsilon,\infty\right)\in\mathbb{D}^{\uparrow}$$ because $\alpha^{i}\left[x\land\varepsilon,x\land 2\varepsilon\right)+\sum_{j=1}^{K}P_{% ji}\gamma^{j,(1)}\left[0,x\land 2\varepsilon\right)\in\mathbb{D}^{\uparrow}$. Also, $$\displaystyle\beta^{is,(n)}\left[0,x\land n\varepsilon\right)-\beta^{is,(n-1)}% \left[0,x\land\left(n-1\right)\varepsilon\right)\\ \displaystyle=\mu^{i}-\iota^{i,(n)}-\beta^{is,(n)}\left[x\land n\varepsilon,% \infty\right)-\mu^{i}+\iota^{i,(n-1)}+\beta^{is,(n-1)}\left[x\land\left(n-1% \right)\varepsilon,\infty\right)\in\mathbb{D}^{\uparrow}$$ if $\alpha^{i}\left[x\land\left(n-1\right)\varepsilon,x\land n\varepsilon\right)+% \sum_{j=1}^{K}P_{ji}\left(\gamma^{j,(n-1)}\left[0,x\land n\varepsilon\right)-% \gamma^{j,(n-2)}\left[0,x\land\left(n-1\right)\varepsilon\right)\right)\in% \mathbb{D}^{\uparrow}$. For (35), note that $$\displaystyle\{t:\sigma^{i}\left(t\right)>t\}\subset\bigcup_{n\geq m}\{t:% \sigma^{i,(n)}>t\}\quad\forall m.$$ This is due to the fact that if $\xi_{t}^{i}\left[0,x\right]>0$ then also $\xi_{t}^{i,(n)}\left[0,x\right]>0$ for all $n>\left(x\lor t\right)/\varepsilon$. Consider $t\in[0,T]$ for $T$ fixed. Recall that $\rho^{i}_{t}=\rho^{i,(n)}_{t}$ for all $t\in[0,T]$, $n>T/\varepsilon$. Using the above display and the union bound, we obtain for all large $n$, $$\displaystyle\int_{[0,T]}\mathbbm{1}_{\left\{\sigma^{i}\left(s\right)>s\right% \}}d\rho^{i}\left(s\right)\leq\sum_{n>T/\varepsilon}\int\mathbbm{1}_{\left\{% \sigma^{i,(n)}\left(s\right)>s\right\}}d\rho^{i}\left(s\right)=\sum_{n>T/% \varepsilon}\int\mathbbm{1}_{\left\{\sigma^{i,(n)}\left(s\right)>s\right\}}d% \rho^{i,(n)}\left(s\right)=0.$$ This shows $\Xi\in{\cal X}$ and completes the existence proof. 
The argument for uniqueness is as follows. We show that for every $x,t\in\mathbb{R_{+}}$, the quantities $\xi_{t}\left[0,x\right]$, $\beta_{t}\left[0,x\right]$ $,\rho\left(t\right)$ and $\iota\left(t\right)$ are uniquely determined by the primitives $\left(\alpha,\mu\right)\in\mathbb{D}_{{\cal M}}^{\uparrow K}\times\mathbb{D}^{% \uparrow K}$. This we do by arguing that the claim holds for every $\left(x,t\right)\in\left[0,n\varepsilon\right)^{2}$, by induction in $n$. First, directly from (29), for $x\in\left[0,\varepsilon\right)$, for the $i$-th server: $$\displaystyle\xi_{t}^{i}\left[0,x\right]$$ $$\displaystyle=\alpha_{t}^{i}\left[0,x\right]-\beta_{t}^{i}\left[0,x\right].$$ In addition, $\left(\xi^{i},\beta^{i},\iota^{i},\rho^{i}\right)\in{\cal X}$. By Lemma 3, $\xi_{t}^{i}\left[0,x\right]$, $\beta_{t}^{is}\left[0,x\right]$ and $\rho^{i}\left(t\right)$ coincide with the unique solution to the fluid model equations of a single server with primitives $\left(\alpha\left(\cdot\cap\left[0,\varepsilon\right)\right),\mu\right)$ on $\left[0,\varepsilon\right)^{2}$. In particular, for $t,x\in\left[0,\varepsilon\right)$, $\beta_{t}^{is}\left[0,x\right]$ can be written as required in Assumption 1.1. Next, assume that the uniqueness statement holds for $(x,t)\in\left[0,n\varepsilon\right)^{2}$. Let $\left(\xi^{i},\beta^{i},\iota^{i},\rho^{i}\right)$ denote the unique tuple satisfying for all $1\leq i\leq K$ and $(x,t)\in\left[0,n\varepsilon\right)^{2}$, $$\displaystyle\xi_{t}^{i}\left[0,x\right]=\alpha_{t}^{i}\left[0,x\right]+\sum_{% j=1}^{K}P_{ji}\beta_{t}^{j,s}\left[0,x-\varepsilon\right]-\beta_{t}^{i}\left[0% ,x\right],$$ $$\displaystyle\left(\xi^{i},\beta^{i},\iota^{i},\rho^{i}\right)\in{\cal X}.$$ Assume in addition that for those $x$ and $t$, $\beta_{t}^{is}\left[0,x\right]$ can be written as required in Assumption 1.1. Consider now $(x,t)\in\left[0,\left(n+1\right)\varepsilon\right)^{2}$. 
First, directly from Equation (29), for the $i$-th server: $$\displaystyle\xi_{t}^{i}\left[0,x\right]=\alpha_{t}^{i}\left[0,x\right]+\sum_{% j=1}^{K}P_{ji}\beta_{t}^{j,s}\left[0,x-\varepsilon\right]-\beta_{t}^{i}\left[0% ,x\right],$$ $$\displaystyle\left(\xi^{i},\beta^{i},\iota^{i},\rho^{i}\right)\in{\cal X}.$$ If $x$ is in $\left[0,\left(n+1\right)\varepsilon\right)$ then $x-\varepsilon<n\varepsilon$. Therefore, $\beta_{t}^{i,s}\left[0,x-\varepsilon\right]$ is uniquely determined by $(\alpha,\mu)$ for $t\in\left[0,n\varepsilon\right)$ by our induction assumption. Now for $t\in\left[n\varepsilon,\left(n+1\right)\varepsilon\right)$, by (51), we have that whenever $t>x$: $\beta_{t}^{i,s}\left[0,x\right]=\beta_{x}^{i,s}\left[0,x\right]$. It follows that if $t\in\left[n\epsilon,\left(n+1\right)\epsilon\right)$ then for $1\leq j\leq K$: $\beta_{t}^{j,s}\left[0,x-\epsilon\right]=\beta_{\left[x-\epsilon\right]^{+}}^{% j,s}\left[0,x-\epsilon\right]$, which is uniquely determined, in view of the induction assumption. By Lemma 3, $\left(\xi^{i},\beta^{i},\rho^{i}\right)$ coincides on $\left[0,\left(n+1\right)\epsilon\right)^{2}$ with the unique solution of the fluid model equations for a single server with primitives $\left(\alpha^{i}\left(\cdot\cap\left[0,\left(n+1\right)\varepsilon\right)% \right)+\sum_{j=1}^{K}P_{ji}\gamma^{j}\left(\cdot\cap\left[0,\left(n+1\right)% \varepsilon\right)\right),\mu\right)$. In particular, $\beta_{t}^{is}\left[0,x\right]$ can be written as required in Assumption 1.1 for $x$ and $t$ in $\left[0,\left(n+1\right)\varepsilon\right)$ as well. Finally, $\iota$ is uniquely determined for all $t$ via $\iota=\mu+\rho-\beta^{s}\left[0,\infty\right)$. ∎ Our final goal is to prove convergence to the fluid model equations. We first show that the sequence $\{\bar{\rho}^{N}\}$ is tight, and deduce from that tightness of the entire tuple appearing on the LHS of (45). 
Later we complete the proof by showing that any subsequential limit satisfies the fluid model equations. Lemma 5. The sequence $\{\bar{\rho}^{N}\}$ is $C$-tight. Consequently, the tuple $\left(\bar{\xi}^{N},\bar{\beta}^{s,N},\bar{\beta}^{r,N},\bar{\beta}^{N},\bar{\gamma}^{N},\bar{\iota}^{N}\right)$ is $C$-tight. Proof. The second statement follows from the first by a simple inductive argument over the squares $\left[0,n\varepsilon\right)^{2}$, based on the continuous mapping theorem and Remark 5.1 of [4]. We omit the details. To prove the first claim, fix $T$; $C$-tightness is shown for $\{\bar{\rho}^{N}|_{[0,T]}\}$. Define $F_{\bar{\alpha}_{T}^{i,N}}\left(x\right)=\bar{\alpha}_{T}^{i,N}\left[0,x\right]$. To show $C$-tightness, note first that $\|\bar{\rho}^{N}\|_{T}$ is dominated by $\|\bar{\alpha}^{N}\|_{T}$, which is a tight sequence of random variables. Hence it remains to show that for every $\delta>0$, $w_{T}(\bar{\rho}^{N},\delta)\to 0$ in probability as $N\to\infty$. To this end, we bound the aforementioned modulus of continuity in terms of $w_{\infty}\left(F_{\bar{\alpha}_{T}^{i,N}},\delta\right)$ using the following chain of inequalities. The justification of each step in this chain is given below.
For $0\leq t\leq t+\delta\leq T$, $$\displaystyle\bar{\rho}^{N}\left(t+\delta\right)-\bar{\rho}^{N}\left(t\right)$$ $$\displaystyle\leq\bar{\alpha}^{N}_{t+\delta}\left[0,t+\delta\right]-\bar{\alpha}^{N}_{t}\left[0,t\right]+\sum_{j=1}^{K}\bar{\gamma}_{t+\delta}^{ji,N}\left[0,t+\delta\right]-\sum_{j=1}^{K}\bar{\gamma}_{t}^{ji,N}\left[0,t\right]$$ (54) $$\displaystyle\leq\bar{\alpha}_{T}^{i,N}\left[t,t+\delta\right]+\sum_{j=1}^{K}\bar{\gamma}_{t+\delta}^{j,N}\left[t,t+\delta\right]$$ (55) $$\displaystyle\leq\bar{\alpha}_{T}^{i,N}\left[t,t+\delta\right]+\sum_{j=1}^{K}\bar{\beta}_{t+\delta}^{js,N}\left[t-\varepsilon,t+\delta-\varepsilon\right]+N^{-1}$$ (56) $$\displaystyle\leq\bar{\alpha}_{T}^{i,N}\left[t,t+\delta\right]+\sum_{n=1}^{\left\lfloor\frac{t+\delta}{\varepsilon}\right\rfloor}\sum_{j=1}^{K}\left(\bar{\alpha}_{T}^{j,N}\left[t-n\varepsilon,t+\delta-n\varepsilon\right]+\frac{1}{N}\right)+N^{-1}$$ (57) $$\displaystyle\leq\left(\frac{KT}{\varepsilon}+1\right)\left(\max_{i}w_{\infty}\left(F_{\bar{\alpha}_{T}^{i,N}},\delta\right)+\frac{1}{N}\right).$$ By the convergence $\bar{\alpha}^{N}\Rightarrow\alpha$ and the continuity of the path $t\mapsto\alpha_{t}$, we have $\bar{\alpha}^{N}_{T}\Rightarrow\alpha_{T}$. Since, by Assumption 1, $\alpha_{T}$ has no atoms, the continuous, monotone, bounded function $x\mapsto F_{\alpha_{T}}(x)$ is uniformly continuous, and we have $\lim_{\delta\downarrow 0}\limsup_{N}P(w_{\infty}(F_{\bar{\alpha}_{T}^{i,N}},\delta)>\eta)=0$ for every $\eta>0$. This shows that $\{\bar{\rho}^{N}\}$ are $C$-tight. It remains to prove the chain of inequalities. Inequality (54) follows from (39) with $x=t$, $\bar{\xi}_{t}^{i,N}\left[0,t\right]=0$, and $\bar{\beta}_{t}^{is,N}\left[0,t\right]\leq\bar{\beta}_{t+\delta}^{is,N}\left[0,t+\delta\right]$. For inequality (55), first, $\bar{\alpha}_{t+\delta}^{i,N}[0,t]=\bar{\alpha}_{t}^{i,N}\left[0,t\right]$ because, by assumption, no job enters the system with an overdue deadline (see Figure 1).
Hence $$\displaystyle\bar{\alpha}^{N}_{t+\delta}\left[0,t+\delta\right]-\bar{\alpha}^{% N}_{t}\left[0,t\right]$$ $$\displaystyle=\bar{\alpha}^{N}_{t+\delta}\left[0,t+\delta\right]-\bar{\alpha}^% {N}_{t+\delta}\left[0,t\right]+\bar{\alpha}_{t+\delta}^{i,N}[0,t]-\bar{\alpha}% _{t}^{i,N}\left[0,t\right]$$ $$\displaystyle\leq\bar{\alpha}^{N}_{t+\delta}\left[t,t+\delta\right]$$ $$\displaystyle\leq\bar{\alpha}^{N}_{T}\left[t,t+\delta\right].$$ Then, use a similar property for $\bar{\gamma}^{is,N}$: $\bar{\gamma}_{t+\delta}^{is,N}\left[0,t\right]=\bar{\gamma}_{t}^{is,N}\left[0,% t\right]$. This is due to the fact that no job is served after its deadline has expired. To see this, fix any $x\leq t\leq t_{0}$, and recall that by (19) $$\displaystyle\bar{\gamma}_{t_{0}}^{ji,N}\left[0,x\right]-\bar{\gamma}_{t}^{ji,% N}\left[0,x\right]=\int_{\left(t,t_{0}\right]}\theta^{ji,N}\left(s\right)d\bar% {\gamma}_{s}^{i,N}\left[0,x\right]=0.$$ (58) It follows that $\bar{\gamma}_{t+\delta}^{ji,N}\left[0,t+\delta\right]-\bar{\gamma}_{t}^{ji,N}% \left[0,t+\delta\right]\leq\bar{\gamma}_{t+\delta}^{j,N}\left[t,t+\delta\right]$. Inequality (56) follows from Equation (38). For (57), we prove by induction that $$\displaystyle\sum_{j=1}^{K}\bar{\beta}_{T}^{js,N}\left[t,t+\delta\right]\leq% \sum_{n=0}^{\left\lfloor\frac{t+\delta}{\varepsilon}\right\rfloor}\sum_{j=1}^{% K}\left(\bar{\alpha}_{T}^{j,N}\left[t-n\varepsilon,t+\delta-n\varepsilon\right% ]+\frac{1}{N}\right).$$ If $t$ is such that $t+\delta<\varepsilon$, then the statement follows from Equation (39) and $\bar{\beta}_{T}^{jr,N}\left[t,t+\delta\right]\geq 0$. Now, assume that the desired inequality holds for $t$ such that $\left\lfloor\frac{t+\delta}{\varepsilon}\right\rfloor=m-1$. 
Then for $t$ such that $\left\lfloor\frac{t+\delta}{\varepsilon}\right\rfloor=m$, we have $$\displaystyle\sum_{j=1}^{K}\bar{\beta}_{T}^{js,N}\left[t-\varepsilon,t+\delta-% \varepsilon\right]\leq\sum_{n=1}^{m}\sum_{j=1}^{K}\left(\bar{\alpha}_{T}^{j,N}% \left[t-n\varepsilon,t+\delta-n\varepsilon\right]+\frac{1}{N}\right).$$ For these values of $t$, from (39), $$\displaystyle\bar{\beta}_{T}^{is,N}\left[t,t+\delta\right]\leq\bar{\alpha}_{T}% ^{i,N}\left[t,t+\delta\right]+\sum_{j=1}^{K}\bar{\gamma}_{T}^{ji,N}\left[t,t+% \delta\right].$$ Summing over all $j$, using (38) and the assumption, $$\displaystyle\sum_{i=1}^{K}\bar{\beta}_{T}^{is,N}\left[t,t+\delta\right]$$ $$\displaystyle=\sum_{i=1}^{K}\bar{\alpha}_{T}^{i,N}\left[t,t+\delta\right]+\sum% _{j=1}^{K}\bar{\gamma}_{T}^{j,N}\left[t,t+\delta\right]$$ $$\displaystyle\leq\sum_{i=1}^{K}\bar{\alpha}_{T}^{i,N}\left[t,t+\delta\right]+% \sum_{j=1}^{K}\bar{\beta}_{T}^{js,N}\left[t-\varepsilon,t+\delta-\varepsilon% \right]+\sum_{j=1}^{K}\frac{1}{N}$$ $$\displaystyle\leq\sum_{i=1}^{K}\bar{\alpha}_{T}^{i,N}\left[t,t+\delta\right]+% \sum_{n=1}^{m}\sum_{j=1}^{K}\left(\bar{\alpha}_{T}^{j,N}\left[t-n\varepsilon,t% +\delta-n\varepsilon\right]+\frac{1}{N}\right)+\sum_{j=1}^{K}\frac{1}{N}$$ $$\displaystyle=\sum_{n=0}^{m}\sum_{j=1}^{K}\left(\bar{\alpha}_{T}^{j,N}\left[t-% n\varepsilon,t+\delta-n\varepsilon\right]+\frac{1}{N}\right).$$ This completes the proof of the chain of inequalities and the result follows. ∎ This means that the tuple is also sequentially compact. We will use that fact soon to prove Theorem 4. Proof of Theorem 4. First, the convergence $\left(\bar{e}^{N},\bar{E}^{N}\right)\Rightarrow\left(0,0\right)$ follows by the same reasons as for the soft version. Next, from Lemma 5 and Prohorov’s theorem, every subsequence of the tuple has a convergent subsequence. We will show that any subsequential limit must be a solution to the fluid model; uniqueness implies then convergence of the entire sequence. 
Consider a convergent subsequence and denote by $(\rho,\alpha,\mu,\xi,\beta^{s},\beta^{r},\beta,\gamma,\iota)$ its limit. We appeal to the Skorokhod representation theorem ([7, Thm. 6.7]) and assume, without loss of generality, that the convergence holds a.s. It is now argued that the tuple $(\rho,\alpha,\mu,\xi,\beta^{s},\beta^{r},\beta,\gamma,\iota)$ satisfies the fluid model equations. Equation (27) follows from $\bar{e}^{i,N}\Rightarrow 0$. Identities (25), (26), (29), and (32) will follow from the convergence of the tuple once it is shown that for every $t$, the measures $\xi_{t}$, $\beta_{t}$, etc., have no atoms. To show that these measures have no atoms, consider relation (39) with $x<\varepsilon$, in which case the term involving $\gamma^{N}$ is absent. Using the assumption that $\alpha\in\mathbb{C}_{{\cal M}_{\sim}}^{\uparrow K}$ and the Portmanteau theorem, for every $0\leq a<b<\varepsilon$, a.s., $$\displaystyle\beta_{t}^{is}\left(a,b\right)\leq\liminf\bar{\beta}_{t}^{is,N}\left(a,b\right)\leq\liminf\bar{\alpha}_{t}^{i,N}\left(a,b\right)=\alpha^{i}_{t}(a,b).$$ Hence the fact that $\alpha^{i}_{t}$ has no atoms implies that the same is true for $\beta^{is}_{t}$ on the interval $[0,\varepsilon)$. The same then holds for $\gamma^{i}_{t}$, and a similar argument applies to $\xi^{i}_{t}$. An inductive argument over the intervals $[0,n\varepsilon)$, using (39), shows that these measures are all atomless on all of ${\mathbb{R}}_{+}$ (the details of the induction are omitted).
Moving now to show (30) and (31), note that by (38), (39), (40), (41), and (3.1.2), $$\displaystyle\left(\bar{\xi}^{i,N}\left[0,x\right],\bar{\beta}^{is,N}\left(x,% \infty\right)+\bar{\iota}^{i,N}\right)=$$ $$\displaystyle\Gamma^{(1)}\Bigg{(}\bar{\alpha}^{i,N}\left[0,x\right]+\sum_{j=1}% ^{K}P_{ji}\left(\bar{B}^{i,N}\left[0,x-\varepsilon\right]-\bar{B}^{i,N}_{0-}% \left[0,x-\varepsilon\right]\right)$$ $$\displaystyle\quad+\sum_{j=1}^{K}P_{ji}\bar{\beta}^{js,N}\left[0,x-\varepsilon% \right]-\bar{\rho}^{i,N}\left(\cdot\land x\right)-\bar{\mu}^{i,N}-\bar{e}^{i,N% }+\sum_{j=1}^{K}\bar{E}^{ji,N}\left(\cdot,x\right)\Bigg{)}.$$ Recalling that $\Gamma^{(1)}$ is continuous, using (24) and the fact that there are no atoms, one obtains (30) and (31) by the definition of the Skorokhod map. It remains to show that the limit satisfies the condition $\int\mathbbm{1}_{\{\sigma^{i}\left(t\right)>t\}}d\rho\left(t\right)=0$. The idea is similar to the proof in Section 5.1.4 in [4]; however, there are many details that are different. By Fatou’s lemma, it is enough to prove that the event $$\displaystyle E_{0}^{i}=\left\{\int_{0}^{T}\mathbbm{1}_{\left\{\sigma^{i}\left% (t\right)>t+\delta\right\}}d\rho^{i}\left(t\right)>0\right\}$$ occurs with probability zero for all $1\leq i\leq K$. We refer to Lemma 5.9 in [4] and note that there exists a $\left[0,T\right)\cup\{\infty\}$-valued random variable $\tau$ such that $\mathbb{P}\left(E_{0}^{i}\right)=\mathbb{P}\left(E_{1}^{i}\cap E_{2}^{i}\right)$ where $$\displaystyle E_{1}^{i}=\left\{\tau<T,\sigma^{i}\left(\tau\right)>\tau+\delta% \right\},$$ $$\displaystyle E_{2}^{i}=\left\{\rho^{i}\left(\tau+\delta\right)>\rho^{i}\left(% \tau\right),\forall\delta>0\right\}.$$ Define $$\displaystyle E_{3}^{i}=\left\{\exists\delta\left(\omega\right)>0:\rho^{i}% \left(\tau+\delta\right)=\rho^{i}\left(\tau\right)\right\}$$ and note that $\mathbb{P}\left(E_{1}^{i}\cap E_{2}^{i}\right)=\mathbb{P}\left(E_{1}^{i}\cap E% _{3}^{ic}\right)$. 
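For orientation, the map $\Gamma^{(1)}$ referred to above can be sketched in discrete time. The following is a minimal illustration of the standard one-dimensional Skorokhod (reflection) problem on the half-line, under our reading of $\Gamma^{(1)}$ as returning the pair (constrained path, regulator); the helper name is ours, and the sketch is illustrative rather than the construction of [4]:

```python
def skorokhod_map(psi):
    """Discrete-time one-dimensional Skorokhod (reflection) map on [0, inf).

    Given a path psi (a list of reals), returns (phi, eta) with
      phi[k] = psi[k] + eta[k] >= 0,
      eta nondecreasing with eta[k] = max(0, max_{j <= k} -psi[j]),
      and eta increasing only at times k where phi[k] = 0.
    """
    phi, eta, running = [], [], 0.0
    for p in psi:
        running = max(running, -p, 0.0)  # sup of (-psi)^+ up to now
        eta.append(running)
        phi.append(p + running)
    return phi, eta
```

For instance, psi = [1, -1, 0, -2, 1] gives phi = [1, 0, 1, 0, 3] and eta = [0, 1, 1, 2, 2]: the regulator eta increases only while phi sits at zero, which is the discrete analogue of the complementarity conditions (30) and (31).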
We wish to show that for any $\delta>0$, $\mathbb{P}\left(E_{0}^{i}\right)=0$, which is equivalent to showing $\mathbb{P}\left(E_{1}^{i}\cap E_{3}^{ic}\right)=0$ for all $i$. Note that it is enough to show this for any $\varepsilon>\delta>0$. In fact, we shall take $\delta<\varepsilon\land\delta_{0}$, where $\delta_{0}\in\left(0,1\right)$ satisfies $$\displaystyle a_{s}^{i}\left[s,s+2\delta_{0}\right]<m^{i}\left(s\right)\text{ % for all }s\in\left[0,T+1\right]\text{ and }i\in\{1,...,K\}.$$ (59) The existence of such $\delta_{0}$ is guaranteed by Assumption 1. By (39) and (42), for $b>a$, $$\displaystyle\bar{\rho}^{i,N}\left(b\right)-\bar{\rho}^{i,N}\left(a\right)+% \bar{\beta}_{b}^{is,N}\left(a,b\right]-\bar{\beta}_{a}^{is,N}\left(a,b\right]=% \bar{\alpha}_{b}^{i,N}\left(a,b\right]-\bar{\alpha}_{a}^{i,N}\left(a,b\right]+% \bar{\xi}_{a}^{i,N}\left(a,b\right]+\sum_{j=1}^{K}\left(\bar{\gamma}_{b}^{ji,N% }\left(a,b\right]-\bar{\gamma}_{a}^{ji,N}\left(a,b\right]\right).$$ Partition $\left(\tau,\tau+\delta\right]$ into $M\in{\mathbb{N}}$ subintervals $I_{m}=\left(t_{m-1},t_{m}\right]$, with $\delta_{M}=M^{-1}\delta$ and $t_{m}=\tau+m\delta_{M}$, $m=1,...,M$, and bound the increment of $\bar{\rho}^{i,N}$ by $$\displaystyle\bar{\rho}^{i,N}\left(\tau+\delta\right)-\bar{\rho}^{i,N}\left(% \tau\right)=\sum_{m=1}^{M}\left(\bar{\rho}^{i,N}\left(t_{m}\right)-\bar{\rho}^% {i,N}\left(t_{m-1}\right)\right)\leq C_{N,M}^{i}+D_{N,M}^{i}+G_{N,M}^{i},$$ where $$\displaystyle C_{N,M}^{i}=\sum_{m=1}^{M}\bar{\xi}_{t_{m-1}}^{i,N}\left(I_{m}% \right),$$ $$\displaystyle D_{N,M}^{i}=\sum_{m=1}^{M}\left(\bar{\alpha}_{t_{m}}^{i,N}\left(% I_{m}\right)-\bar{\alpha}_{t_{m-1}}^{i,N}\left(I_{m}\right)\right),$$ $$\displaystyle G_{N,M}^{i}=\sum_{m=1}^{M}\sum_{j=1}^{K}\left(\bar{\gamma}_{t_{m% }}^{ji,N}\left(I_{m}\right)-\bar{\gamma}_{t_{m-1}}^{ji,N}\left(I_{m}\right)% \right).$$ We first fix $M$ and let $N\to\infty$, and then let $M\to\infty$ to obtain 
$\mathbbm{1}_{E_{1}^{i}}\left(C_{N,M}^{i}+D_{N,M}^{i}+G_{N,M}^{i}\right)\to 0$ a.s., which implies $\mathbbm{1}_{E_{1}^{i}}\left(\rho^{i}\left(\tau+\delta\right)-\rho^{i}\left(\tau\right)\right)=0$ a.s. This yields $\mathbb{P}\left(E_{1}^{i}\cap E_{3}^{ic}\right)=0$ for all $i$, completing the proof. By assumption, $D_{N,M}^{i}$ converges as $N\to\infty$ to $$\displaystyle D_{M}^{i}=\sum_{m=1}^{M}\int_{t_{m-1}}^{t_{m}}a_{s}^{i}\left(I_{m}\right)ds\leq\left(T+\delta\right)\sup_{s\in\left[0,T\right]}a_{s}^{i}\left[s,s+\delta_{M}\right],$$ and, as $M\to\infty$, $\sup_{s\in\left[0,T\right]}a_{s}^{i}\left[s,s+\delta_{M}\right]\to 0$ by Assumption 1. Next, note that $C_{N,M}^{i}\leq M\max_{s\in\left[\tau,\tau+\delta\right]}\bar{\xi}_{s}^{i,N}\left(\tau,\tau+\delta\right]$, and recall that $\bar{\xi}^{i,N}\to\xi^{i}\in\mathbb{C}_{{\cal M}_{\sim}}$ a.s. Hence $$\displaystyle\sup_{s\in\left[0,T\right]}\sup_{x}\left|\bar{\xi}_{s}^{i,N}\left[0,x\right]-\xi_{s}^{i}\left[0,x\right]\right|\to 0\text{ a.s.}$$ This, together with $\mathbbm{1}_{E_{1}^{i}}\xi_{\tau}\left[\tau,\tau+\delta\right]=0$ and $\xi_{\tau}\left[0,\tau\right)=0$, implies $\mathbbm{1}_{E_{1}^{i}}\bar{\xi}_{\tau}^{i,N}\left[\tau,\tau+\delta\right]\to 0$ a.s.
By virtue of the shift property of the Skorokhod mapping, $\xi_{\tau+\cdot}^{i}\left[0,\tau+\delta\right]=\Gamma_{1}^{(1)}\left(\psi^{i,\tau,\delta}\right)$, where $$\displaystyle\psi^{i,\tau,\delta}\left(t\right)=$$ $$\displaystyle\xi_{\tau}^{i}\left[0,\tau+\delta\right]+\alpha_{\tau+t}^{i}\left[0,\tau+\delta\right]-\alpha_{\tau}^{i}\left[0,\tau+\delta\right]$$ $$\displaystyle+\sum_{j=1}^{K}P_{ji}\left(\beta_{\tau+t}^{js}\left[0,\tau+\delta-\varepsilon\right]-\beta_{\tau}^{js}\left[0,\tau+\delta-\varepsilon\right]\right)$$ $$\displaystyle-\beta_{\tau+t}^{ir}\left[0,\tau+\delta\right]+\beta_{\tau}^{ir}\left[0,\tau+\delta\right]-\mu^{i}\left(\tau+t\right)+\mu^{i}\left(\tau\right).$$ Notice that if $\delta$ is smaller than $\varepsilon$ then the sum over $j$ is just $0$, as in (52). Moreover, $$\displaystyle\alpha_{\tau+t}^{i}\left[0,\tau+\delta\right]-\alpha_{\tau}^{i}\left[0,\tau+\delta\right]-\mu^{i}\left(\tau+t\right)+\mu^{i}\left(\tau\right)=\int_{\tau}^{\tau+t}a_{s}^{i}\left[0,\tau+\delta\right]ds-\int_{\tau}^{\tau+t}m^{i}\left(s\right)ds$$ is non-increasing for $t\in\left[0,\delta_{0}\right]$ by (59). Therefore, $\xi_{t}^{i}\left[0,\tau+\delta\right]=0$ for all $t\in\left[\tau,\tau+\delta\right]$, resulting in $\mathbbm{1}_{E_{1}^{i}}C_{N,M}^{i}\to 0$ a.s. As for $G_{N,M}^{i}$, we start with the bound $$\displaystyle G_{N,M}^{i}\leq\sum_{m=1}^{M}\sum_{j=1}^{K}\left(\bar{\gamma}_{t_{m}}^{j,N}\left(I_{m}\right)-\bar{\gamma}_{t_{m-1}}^{j,N}\left(I_{m}\right)\right)\leq\sum_{m=1}^{M}\sum_{j=1}^{K}\left(\bar{\beta}_{t_{m}}^{js,N}\left(I_{m}\right)-\bar{\beta}_{t_{m-1}}^{js,N}\left(I_{m}\right)\right)+\frac{MK}{N}.$$ To bound it further, we prove the following.
Fix $t_{2}>t_{1}$, then for any interval $A=\left[t_{3},t_{4}\right)\subset\left[0,T\right]$ such that $t_{2}\geq t_{4}$: $$\displaystyle\sum_{i=1}^{K}\left(\bar{\beta}_{t_{2}}^{is,N}\left(A\right)-\bar% {\beta}_{t_{1}}^{is,N}\left(A\right)\right)\leq\sum_{n=0}^{\left\lfloor t_{4}/% \epsilon\right\rfloor}\left(\sum_{i=1}^{K}\left(\bar{\alpha}_{t_{2}}^{i,N}% \left(A-n\epsilon\right)-\bar{\alpha}_{t_{1}}^{i,N}\left(A-n\epsilon\right)% \right)+\sum_{i=1}^{K}\bar{\xi}_{t_{1}}^{i,N}\left(A-n\epsilon\right)+\frac{K}% {N}\right)$$ This will be shown by induction over $t_{4}<n\epsilon$. From (39) and the monotonicity of $t\mapsto\bar{\beta}_{t}^{ir,N}\left(A\right)$, and $\bar{\xi}_{t_{2}}^{i,N}\left[0,t_{4}\right)=0$: $$\displaystyle\bar{\beta}_{t_{2}}^{is,N}\left(A\right)-\bar{\beta}_{t_{1}}^{is,% N}\left(A\right)$$ $$\displaystyle\leq\bar{\alpha}_{t_{2}}^{i,N}\left(A\right)-\bar{\alpha}_{t_{1}}% ^{i,N}\left(A\right)+\sum_{j=1}^{K}\left(\bar{\gamma}_{t_{2}}^{ji,N}\left(A% \right)-\bar{\gamma}_{t_{1}}^{ji,N}\left(A\right)\right)+\bar{\xi}_{t_{1}}^{i,% N}\left(A\right),$$ and by summing over all servers $$\displaystyle\sum_{i=1}^{K}\left(\bar{\beta}_{t_{2}}^{is,N}\left(A\right)-\bar% {\beta}_{t_{1}}^{is,N}\left(A\right)\right)$$ $$\displaystyle\leq\sum_{i=1}^{K}\left(\bar{\alpha}_{t_{2}}^{i,N}\left(A\right)-% \bar{\alpha}_{t_{1}}^{i,N}\left(A\right)+\bar{\xi}_{t_{1}}^{i,N}\left(A\right)\right)$$ $$\displaystyle\quad+\sum_{j=1}^{K}\left(\bar{\beta}_{t_{2}}^{js,N}\left(A-% \epsilon\right)-\bar{\beta}_{t_{1}}^{js,N}\left(A-\epsilon\right)\right)+\frac% {K}{N}.$$ If $t_{4}<\epsilon$ then $$\displaystyle\bar{\beta}_{t_{2}}^{is,N}\left(A\right)-\bar{\beta}_{t_{1}}^{is,% N}\left(A\right)\leq\bar{\alpha}_{t_{2}}^{i,N}\left(A\right)-\bar{\alpha}_{t_{% 1}}^{i,N}\left(A\right)+\bar{\xi}_{t_{1}}^{i,N}\left(A\right),$$ and the statement is true by summing over all servers. Assume that the statement is true for $t_{4}\leq n\epsilon$. 
If we take some $t_{4}\leq\left(n+1\right)\epsilon$ and use our induction assumption we get: $$\displaystyle\sum_{i=1}^{K}\left(\bar{\beta}_{t_{2}}^{is,N}\left(A\right)-\bar% {\beta}_{t_{1}}^{is,N}\left(A\right)\right)$$ $$\displaystyle\leq\sum_{i=1}^{K}\left(\bar{\alpha}_{t_{2}}^{i,N}\left(A\right)-% \bar{\alpha}_{t_{1}}^{i,N}\left(A\right)+\bar{\xi}_{t_{1}}^{i,N}\left(A\right)\right)$$ $$\displaystyle\quad+\sum_{n=1}^{\left\lfloor t_{4}/\epsilon\right\rfloor}\sum_{% i=1}^{K}\left(\bar{\alpha}_{t_{2}}^{i,N}\left(A-n\epsilon\right)-\bar{\alpha}_% {t_{1}}^{i,N}\left(A-n\epsilon\right)+\bar{\xi}_{t_{1}}^{i,N}\left(A-n\epsilon% \right)+\frac{1}{N}\right)+\frac{K}{N}$$ $$\displaystyle=\sum_{n=0}^{\left\lfloor t_{4}/\epsilon\right\rfloor}\sum_{i=1}^% {K}\left(\bar{\alpha}_{t_{2}}^{i,N}\left(A-n\epsilon\right)-\bar{\alpha}_{t_{1% }}^{i,N}\left(A-n\epsilon\right)+\bar{\xi}_{t_{1}}^{i,N}\left(A-n\epsilon% \right)+\frac{1}{N}\right)$$ We need this result for $t_{1}=t_{3}=t_{m-1}$ and $t_{2}=t_{4}=t_{m}$; if $M$ is big enough then $\bar{\xi}_{t_{m-1}}^{i,N}\left(A-n\epsilon\right)=0$ for $n\geq 1$, and then: $$\displaystyle\sum_{i=1}^{K}\left(\bar{\beta}_{t_{2}}^{is,N}\left(A\right)-\bar% {\beta}_{t_{1}}^{is,N}\left(A\right)\right)\leq\sum_{n=0}^{\left\lfloor t_{4}/% \epsilon\right\rfloor}\sum_{i=1}^{K}\left(\bar{\alpha}_{t_{2}}^{i,N}\left(A-n% \epsilon\right)-\bar{\alpha}_{t_{1}}^{i,N}\left(A-n\epsilon\right)+\frac{1}{N}% \right)+\sum_{i=1}^{K}\bar{\xi}_{t_{1}}^{i,N}\left(A\right).$$ It follows that $$\displaystyle G_{N,M}^{i}\leq\sum_{m=1}^{M}\frac{KT}{\epsilon}\max_{i,n}\left(% \bar{\alpha}_{t_{m}}^{i,N}\left(I_{m}-n\epsilon\right)-\bar{\alpha}_{t_{m-1}}^% {i,N}\left(I_{m}-n\epsilon\right)\right)+K\sum_{m=1}^{M}\bar{\xi}_{t_{m-1}}^{i% ,N}\left(I_{m}\right)+\left(\frac{T}{\epsilon}+1\right)\frac{MK}{N}.$$ The terms on the RHS vanish as $C_{N,M}^{i}$ and $D_{N,M}^{i}$ above. ∎ Acknowledgement. This research was supported in part by the ISF (grants 1184/16 and 1035/20). 
References
[1] C. M. Aras, J. F. Kurose, D. S. Reeves, and H. Schulzrinne. Real-time communication in packet-switched networks. Proceedings of the IEEE, 82(1):122–139, 1994.
[2] R. Atar, A. Biswas, and H. Kaspi. Fluid limits of G/G/1+G queues under the nonpreemptive earliest-deadline-first discipline. Mathematics of Operations Research, 40(3):683–702, 2015.
[3] R. Atar, A. Biswas, and H. Kaspi. Law of large numbers for the many-server earliest-deadline-first queue. Stochastic Processes and their Applications, 128(7):2270–2296, 2018.
[4] R. Atar, A. Biswas, H. Kaspi, and K. Ramanan. A Skorokhod map on measure-valued paths with applications to priority queues. The Annals of Applied Probability, 28(1):418–481, 2018.
[5] R. Atar and P. Dupuis. Large deviations and queueing networks: methods for rate function identification. Stochastic Processes and their Applications, 84(2):255–296, 1999.
[6] R. Atar, H. Kaspi, and N. Shimkin. Fluid limits for many-server systems with reneging under a priority policy. Mathematics of Operations Research, 39(3):672–696, 2014.
[7] P. Billingsley. Convergence of Probability Measures. Wiley Series in Probability and Statistics. John Wiley & Sons Inc., New York, second edition, 1999.
[8] M. Bramson. Stability of earliest-due-date, first-served queueing networks. Queueing Systems, 39(1):79–102, 2001.
[9] G. C. Buttazzo. Hard Real-Time Computing Systems: Predictable Scheduling Algorithms and Applications, volume 24. Springer Science & Business Media, 2011.
[10] J. T. Chang and D. Pollard. Conditioning as disintegration. Statistica Neerlandica, 51(3):287–317, 1997.
[11] H. Chen and D. D. Yao. Fundamentals of Queueing Networks. Springer-Verlag, New York, 2001.
[12] J. G. Dai. On positive Harris recurrence of multiclass queueing networks: a unified approach via fluid limit models. The Annals of Applied Probability, pages 49–77, 1995.
[13] L. Decreusefond and P. Moyal. Fluid limit of a heavily loaded EDF queue with impatient customers. Markov Processes and Related Fields, 14:131–157, 2008.
[14] B. Doytchinov, J. Lehoczky, and S. Shreve. Real-time queues in heavy traffic with earliest-deadline-first queue discipline. The Annals of Applied Probability, 11(2):332–378, 2001.
[15] H. C. Gromoll, A. L. Puha, and R. J. Williams. The fluid limit of a heavily loaded processor sharing queue. The Annals of Applied Probability, 12(3):797–859, 2002.
[16] J. M. Harrison and M. I. Reiman. Reflected Brownian motion on an orthant. The Annals of Probability, 9(2):302–308, 1981.
[17] C. Heyde and P. Hall. Martingale Limit Theory and its Application. Probability and Mathematical Statistics. Academic Press, 1980.
[18] X. Hu, S. Barnes, and B. Golden. Applying queueing theory to the study of emergency department operations: a survey and a discussion of comparable simulation studies. International Transactions in Operational Research, 25, 2017.
[19] L. Kruk. Stability of two families of real-time queueing networks. Probability and Mathematical Statistics, 28, 2008.
[20] L. Kruk. Invariant states for fluid models of EDF networks: nonlinear lifting map. Probability and Mathematical Statistics, 30(2):289–315, 2010.
[21] L. Kruk, J. Lehoczky, K. Ramanan, and S. Shreve. Double Skorokhod map and reneging real-time queues. Markov Processes and Related Topics: A Festschrift for Thomas G. Kurtz, 4:169–193, 2008.
[22] L. Kruk, J. Lehoczky, K. Ramanan, and S. Shreve. Heavy traffic analysis for EDF queues with reneging. The Annals of Applied Probability, 21(2):484–545, 2011.
[23] L. Kruk, J. Lehoczky, S. Shreve, and S.-N. Yeung. Earliest-deadline-first service in heavy-traffic acyclic networks. The Annals of Applied Probability, 14(3):1306–1352, 2004.
[24] J. P. Lehoczky. Real-time queueing theory. In Proceedings of the 17th IEEE Real-Time Systems Symposium (RTSS '96), pages 186–, Washington, DC, USA, 1996. IEEE Computer Society.
[25] J. P. Lehoczky. Real-time queueing network theory. In Proceedings of the Real-Time Systems Symposium, pages 58–67, 1997.
[26] P. Moyal. On queues with impatience: stability, and the optimality of earliest deadline first. Queueing Systems, 75(2):211–242, 2013.
[27] S. Panwar, D. Towsley, and J. Wolf. Optimal scheduling policies for a class of queues with customer deadlines to the beginning of service. Journal of the ACM, 35(4):832–844, 1988.
[28] S. Ramasubramanian. A subsidy-surplus model and the Skorokhod problem in an orthant. Mathematics of Operations Research, 25(3):509–538, 2000.
[29] M. I. Reiman. Open queueing networks in heavy traffic. Mathematics of Operations Research, 9(3):441–458, 1984.
Knitting distributed cluster state ladders with spin chains R. Ronke${}^{1}$ [email protected]    I. D’Amico${}^{1}$ [email protected]    T. P. Spiller${}^{2}$ [email protected] ${}^{1}$ Department of Physics, University of York, York YO10 5DD, United Kingdom. ${}^{2}$ School of Physics and Astronomy, E C Stoner Building, University of Leeds, Leeds, LS2 9JT. (November 19, 2020) Abstract There has been much recent study on the application of spin chains to quantum state transfer and communication. Here we discuss the utilisation of spin chains (set up for perfect quantum state transfer) for the knitting of distributed cluster state structures, between spin qubits repeatedly injected and extracted at the ends of the chain. The cluster states emerge from the natural evolution of the system across different excitation number sectors. We discuss the decohering effects of errors in the injection and extraction process as well as the effects of fabrication and random errors. pacs: 03.67.Ac, 75.10.Pq, 81.07.Vb I Introduction With conventional information processing and communications, optics provides the bandwidth and robustness for long distance communication. However, for communication over short distances, for example within and between adjacent silicon chips, signals remain electrical, to avoid the energy and cost overhead of conversion of information between different physical embodiments. Similar thinking exists in the quantum arena. Whilst quantum states of light are widely regarded as the vehicle of choice for quantum communication over large distances, there has been much recent interest in the potential use of spin chains for quantum communication over much shorter distances. 
When the task at hand is communication within a quantum processor, or communication between adjacent processors or registers, it may well be that a chain of spins—the same hardware from which the processors and registers are constructed—can play an effective and useful role bose2007 ; kay2009 . In its simplest guise the term “spin chain” applies to any set of two-state quantum systems coupled to their nearest neighbours. Clearly qudits or even continuous variable oscillators could replace the qubits, but as most quantum information studies generally focus on qubits, most spin chain studies do likewise. A chain could literally comprise spins or magnetic moments, such as with a string of fullerenes twamley2003 or magnetic particles tejada2001 or nuclear spins in a molecule zhang2007 . But it could also describe a system of electrons or excitons damico2007 ; damico2006 ; niko2004 in a chain of interacting quantum dots, or other devices. If the ground or prepared state of a spin-$1/2$ chain is all spins down ($|0\rangle$), then a complete single-qubit excitation ($|1\rangle$) is made by flipping one spin up. An arbitrary qubit state can thus be injected into a spin chain by preparing the “injection site” qubit in the appropriate superposition of up and down. For quantum communication, the question is how well this state transfers—in terms of the fidelity of the initial state against that which emerges at the “extraction site” at a later time, resulting from the dynamics of the chain. Usually the injection and extraction sites are the opposing ends of a chain. State transfer has been studied for unmodulated chains bose2003 ; burgarth2009_2 , systems with unequal couplings chiara2005 , systems with controlled coupling at the ends wojcik2005 and parallel chains burgarth2005 .
Of specific interest to our work here is the case of linear chains where the nearest neighbour couplings $J_{i,i+1}$ between spin sites $i$ and $i+1$ are engineered to effect perfect state transfer (PST) between the injection ($i=1$) and extraction ($i=N$) sites niko2004 ; chris2005 . For a chain of $N$ spins the PST couplings are given by chris2005 $$J_{i,i+1}=J_{0}\sqrt{i(N-i)}$$ (1) where $J_{0}$ is a coupling that characterises the whole system and sets the timescale for PST, or mirroring, as $t_{M}=\pi\hbar/2J_{0}$. In our work here we utilise such PST spin chains for a different purpose—the construction of distributed cluster state structures. One potential application of short range quantum communication is to build up distributed entangled resources that can be used, or consumed, to subsequently enable distributed quantum processing through the concept of one-way computation raus2001 . In this approach, the cluster state entangled resource brie2001 is consumed by a sequence of measurements to effect the computation. Here we focus on the construction of entangled resources, which basically emerge from a suitable qubit injection and extraction protocol and the ability of PST spin chains to produce two-qubit entangling gates when operated across different excitation sectors yung2005 ; clark2005 ; clark2007 . For such an application, there is clearly merit in being able to generate the entangled resource as rapidly and effectively as possible. We shall demonstrate that an element of a cluster state ladder can be knit in time $t_{M}$, independent of the length $N$ of the chain footnote , by suitable injection and extraction of qubits at the ends of the chain. The construction of cluster state resources using PST spin chains as an entangling bus, with assumed access to all spins in the chain, has been discussed in clark2005 ; clark2007 .
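The coupling profile of Eq. (1) can be written down and sanity-checked in a few lines; the following minimal sketch (in Python, with $J_{0}=1$; the function name is our own choice) verifies the mirror symmetry $J_{i,i+1}=J_{N-i,N-i+1}$ of the profile about the chain midpoint.

```python
import numpy as np

def pst_couplings(N, J0=1.0):
    """J_{i,i+1} = J0 * sqrt(i(N-i)) for i = 1, ..., N-1 (Eq. (1))."""
    i = np.arange(1, N)
    return J0 * np.sqrt(i * (N - i))

J = pst_couplings(9)
print(J.round(3))
# The profile is mirror-symmetric about the chain midpoint:
print(np.allclose(J, J[::-1]))
```

The mirroring time associated with this profile is $t_{M}=\pi\hbar/2J_{0}$, as quoted above.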
Here we adopt the original concept of a spin chain, where access is restricted to the ends of the chain, and build the resource with a suitable injection and extraction protocol. We also consider the effects of realistic forms of decoherence acting in the spin chain and errors in the injection and extraction protocol. II Spin chain dynamics We start with the time-independent Hamiltonian that describes the natural dynamics of a length $N$ nearest-neighbour-coupled spin chain $${\cal{H}}=\sum_{i=1}^{N}E_{i}|1\rangle\langle 1|_{i}+\sum_{i=1}^{N-1}J_{i,i+1}\left[|1\rangle\langle 0|_{i}\otimes|0\rangle\langle 1|_{i+1}+|0\rangle\langle 1|_{i}\otimes|1\rangle\langle 0|_{i+1}\right].$$ (2) We assume that the single site excitation (from down, $|0\rangle$, to up, $|1\rangle$) energies $E_{i}$ are independent of the site $i$, or are tuned to be so. The couplings $J_{i,i+1}$ are given by (1). Tuning of the energies and couplings, such as via local fields or manufacturing control, is required, as per the PST scenario (1). The total number of excitations $T$ in the chain is given by the expectation value of the operator $${\cal{T}}=\sum_{i=1}^{N}|1\rangle\langle 1|_{i}.$$ (3) We also define the mirror operator $M$ as that which reflects the state of the chain about its midpoint (which is spin $(N+1)/2$ for odd $N$ and the “gap” between spins $N/2$ and $(N/2)+1$ for even $N$). So operationally $M$ effects the following to each term in any arbitrary superposition state of the chain: $$M|a\rangle_{1}|b\rangle_{2}...|y\rangle_{N-1}|z\rangle_{N}=|z\rangle_{1}|y\rangle_{2}...|b\rangle_{N-1}|a\rangle_{N}.$$ (4) Clearly, for parameters restricted to achieve PST, the Hamiltonian (II) commutes with both ${\cal{T}}$ and $M$ and so the system energy eigenstates $|\varepsilon_{k}\rangle$ are also eigenstates of both ${\cal{T}}$ and $M$.
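In the single-excitation ($T=1$) sector, the Hamiltonian above with $E_{i}=0$ reduces to an $N\times N$ tridiagonal matrix of the couplings (1), whose spectrum is equally spaced with spacing $2J_{0}$—the spectral condition behind mirroring. A minimal numerical check (with $\hbar=J_{0}=1$; function names are ours) confirms this and the perfect transfer, with phase $(-i)^{N-1}$, at $t_{M}$:

```python
import numpy as np
from scipy.linalg import expm

def pst_couplings(N, J0=1.0):
    i = np.arange(1, N)
    return J0 * np.sqrt(i * (N - i))

def single_excitation_mirror(N, J0=1.0, hbar=1.0):
    """Evolve the T=1 sector (E_i = 0) for t_M and return the
    site-1 -> site-N transfer amplitude."""
    J = pst_couplings(N, J0)
    H1 = np.diag(J, 1) + np.diag(J, -1)
    tM = np.pi * hbar / (2 * J0)
    U = expm(-1j * H1 * tM / hbar)
    return U[N - 1, 0]

# Equally spaced single-excitation spectrum, level spacing 2*J0:
J = pst_couplings(9)
H1 = np.diag(J, 1) + np.diag(J, -1)
print(np.allclose(np.diff(np.linalg.eigvalsh(H1)), 2.0))

# Unit-modulus transfer amplitude with the phase (-i)^(N-1):
for N in (4, 5, 8, 9):
    amp = single_excitation_mirror(N)
    print(N, np.isclose(abs(amp), 1.0),
          np.isclose(amp / (-1j) ** (N - 1), 1.0))
```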
For a chain of size $N$ there are $2^{N}$ eigenstates in total, with $T$ ranging from zero to $N$ and each sector $T$ containing $N!/(N-T)!T!$ eigenstates. It is also helpful to define the total chain “spin flip” operator ${\cal{F}}$ by: $${\cal{F}}=\prod_{i=1}^{N}\left(|1\rangle\langle 0|_{i}+|0\rangle\langle 1|_{i}\right).$$ (5) The eigenstates for excitation number $N-T$ then follow from those for $T$ by application of ${\cal{F}}$. Within this framework it is straightforward to understand how PST, or more generally state mirroring albanese2004 ; karbach2005 , occurs. Any initial state of a spin chain $|\Psi(0)\rangle$ can be decomposed into its even and odd (under $M$) parts, so $$|\Psi(0)\rangle=\frac{1}{\sqrt{2}}\left(|\Psi_{+}(0)\rangle+|\Psi_{-}(0)\rangle\right)$$ (6) with $|\Psi_{\pm}(0)\rangle\equiv\frac{1}{\sqrt{2}}\left(|\Psi(0)\rangle\pm M|\Psi(0)\rangle\right)$. Clearly the $M$ eigenstates can be decomposed as superpositions of even and odd energy eigenstates $|\Psi_{\pm}(0)\rangle\equiv\sum_{\pm k}c_{\pm k}|\varepsilon_{\pm k}\rangle$ and then for the evolved state at time $t_{M}$ to have unit fidelity against the mirrored initial state $M|\Psi(0)\rangle$, it must be of the form $$|\Psi(t_{M})\rangle=\frac{\exp(-i\theta)}{\sqrt{2}}\left(\sum_{+k}c_{+k}|\varepsilon_{+k}\rangle-\sum_{-k}c_{-k}|\varepsilon_{-k}\rangle\right).$$ (7) It is therefore clear albanese2004 ; karbach2005 that quantum state mirroring places a requirement on the chain energy level spectrum so that the phases in the evolved state conspire to give the form (7) at the mirror time $t_{M}$, with the coupling choices given in (1) forming an example chris2005 that produces a suitable energy level spectrum. The overall phase $\theta$ in the evolved state (7) is potentially a hindrance when it comes to PST. In addition to its time dependence, this phase depends on both the chain length $N$ and the excitation number $T$.
For the case of a single excitation ($T=1$) the phase factor at the mirror time is $\exp(-i\theta(t_{M}))=(-i)^{N-1}$ chris2005 and so it is recognised that $(N-1)$ needs to be a multiple of $4$ to eliminate it, although clearly in practice as $N$ will be known for a system where the couplings have been engineered to, for example, satisfy (1), this phase is a known correction, rather than unknown decoherence. III Two-qubit gates When it comes to remote quantum gates, which form the basis for construction of distributed entangled resources, this phase is the enabling effect yung2005 ; clark2005 ; clark2007 . For a given spin chain of length $N$, PST does not in general work for superpositions that include states from different $T$ sectors, due to the $T$-dependence of $\theta$, although it works for arbitrary superpositions within a fixed $T$ sector. However, this effect can be turned to advantage. Consider PST spin chains where qubit states are injected onto the two extremal spins of the chain ($i=1$ and $i=N$) at time $t=0$. The initial state is thus in general a superposition of the $T=0$, $1$ and $2$ sectors. For the simplest case of the diagonal energies in (II) equal to zero ($E_{i}=0$) and expressed in the basis $\{|0\rangle_{1}|0\rangle_{N},|0\rangle_{1}|1\rangle_{N},|1\rangle_{1}|0\rangle_{N},|1\rangle_{1}|1\rangle_{N}\}$, at time $t=t_{M}$ the natural evolution of the chain effects a gate $G$ given by $$G=\left(\begin{array}{cccc}1&0&0&0\\ 0&0&(-i)^{N-1}&0\\ 0&(-i)^{N-1}&0&0\\ 0&0&0&(-1)^{N}\end{array}\right)$$ (8) on the initial two-qubit state. We note that for the specific case where $(N-1)$ is a multiple of $4$ the $T=1$ sector phase factors are unity chris2005 , but for all values of $N$ the gate $G$ is a maximally entangling gate, producing a concurrence of unity from an initial two-qubit product state given by an equal weight superposition of all four basis states.
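The claim that $G$ is maximally entangling for every $N$ is easy to check directly: for a pure two-qubit state $(a,b,c,d)$ the concurrence is $C=2|ad-bc|$, and applying $G$ to the equal-weight product state $|+\rangle|+\rangle$ gives $C=1$ regardless of the phases. A short sketch (function names are ours):

```python
import numpy as np

def gate_G(N):
    """Remote two-qubit gate of Eq. (8), basis {|00>,|01>,|10>,|11>}."""
    p = (-1j) ** (N - 1)
    return np.array([[1, 0, 0, 0],
                     [0, 0, p, 0],
                     [0, p, 0, 0],
                     [0, 0, 0, (-1) ** N]], dtype=complex)

def concurrence_pure(psi):
    """C = 2|a d - b c| for a pure two-qubit state (a, b, c, d)."""
    a, b, c, d = psi
    return 2 * abs(a * d - b * c)

plus_plus = 0.5 * np.ones(4)          # |+>|+>, equal-weight product state
for N in range(4, 10):
    out = gate_G(N) @ plus_plus
    print(N, np.isclose(concurrence_pure(out), 1.0))
```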
Thus the natural evolution of the PST spin chain enables a remote maximally-entangling two-qubit gate between the ends of the chain yung2005 ; clark2005 ; clark2007 , which cycles between four different variants with increasing $N$. Underlying all these gates is an effective phase flip that ensues in the doubly-excited sector, which can be understood as arising from the anticommutation of two non-interacting fermions as they pass through each other clark2005 . This picture is helpful in the multiple-excitation sector, where a phase factor of $e^{-i\pi}$ results from every fermionic crossing. Experimentally, this can also be achieved via crossing beams of Rydberg atoms meeting in cavities lovett2011 , which therefore represent a potential alternative medium for this procedure. The dynamical two-qubit gate underpins the construction of the cluster state resource, so we first examine this basic gate, acting upon qubits each prepared in the state $|+\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)$ and injected simultaneously at the ends of the chain. The entangling action of the two-qubit gate is illustrated in Fig. 1, where the time evolution of the Entanglement of Formation (EoF) wootters1998 of the two end qubits is shown under the action of the PST spin chain dynamics. The EoF reaches unity at $t_{M}=\pi\hbar/2J_{0}$ and every $\pi\hbar/J_{0}$ thereafter. We see that with an increasing number of spins in the chain, the width of the EoF peak decreases. However, this decrease is not linear; instead, the dependence of the width of the peaks in Fig. 1 on $N$ is well approximated by $1/\sqrt{N}$ within the range of chain lengths explored, demonstrating that even for very long chains recovery of the entangled qubits should be feasible. A single entangling gate essentially works for any length $N$ of a PST chain.
However, in order to utilise additional qubit injections at the end of the chain, the length needs to be such that subsequent injections and extractions can be made independently. To understand this, it is helpful to consider the entropy of the end two qubits as a function of time. To calculate the EoF of the end two qubits, the spins comprising the rest of the chain ($i=2$ to $N-1$) are traced out, to leave the density matrix $\rho_{1,N}$ of the two end spins. If this is not pure, it provides a signature of entanglement between the two end spins and the rest of the chain. This is illustrated in Fig. 2, which shows the evolution of the entropy $S=-$Tr$(\rho_{1,N}\log_{2}\rho_{1,N})$ as a function of time. Clearly the entropy is zero at the initial injection time and all integer multiples of $t_{M}$. Of interest to us here is what happens in between. It can be seen from Fig. 2 that provided that $N$ is sufficiently large, the end two qubits disentangle from the rest of the chain, whilst the excitations are propagating and localised entirely in the middle region of the chain. The latter can be appreciated by looking at the site occupation probabilities, see Fig. 9(a). This results in $S\rightarrow 0$ for windows (that widen for increasing $N$) centred on odd half integer multiples of $t_{M}$. Similar to Fig. 1, the width of the dip at $t=t_{M}$ decreases approximately as $1/\sqrt{N}$ within the range of chain lengths explored. For chains of length $N\geq 9$, it is possible to independently inject further qubits. This will form the basis of the cluster ladder construction. Before considering the injection and extraction protocol, though, we first consider the effects of various errors on the basic entangling gate and the evolving two-end-qubit entropy. IV Errors and decoherence For non-zero but site-independent excitation energies for the qubits in the chain (the diagonal terms in Eq.
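The statement that $\rho_{1,N}$ is pure (and maximally entangled) at $t=t_{M}$ can be verified by a small full-chain simulation. The following sketch, under our own choices ($N=5$, $\hbar=J_{0}=1$, $E_{i}=0$), evolves $|+\rangle\otimes|0\cdots 0\rangle\otimes|+\rangle$ to $t_{M}$, traces out the interior spins, and checks that $S\approx 0$ and the end-pair concurrence is unity:

```python
import numpy as np
from scipy.linalg import expm

def full_hamiltonian(N, J0=1.0):
    """Hamiltonian (2) with E_i = 0 on the full 2^N-dimensional space."""
    lower = np.array([[0.0, 1.0], [0.0, 0.0]])     # |0><1|
    H = np.zeros((2 ** N, 2 ** N))
    for i in range(N - 1):
        J = J0 * np.sqrt((i + 1) * (N - i - 1))
        # |0><1|_i (x) |1><0|_{i+1}: hop an excitation from site i to i+1
        hop = np.kron(np.eye(2 ** i),
                      np.kron(np.kron(lower, lower.T),
                              np.eye(2 ** (N - i - 2))))
        H += J * (hop + hop.T)
    return H

N = 5
plus = np.array([1.0, 1.0]) / np.sqrt(2)
middle = np.zeros(2 ** (N - 2))
middle[0] = 1.0                                    # interior in |0...0>
psi0 = np.kron(np.kron(plus, middle), plus)

tM = np.pi / 2                                     # hbar = J0 = 1
psi = expm(-1j * full_hamiltonian(N) * tM) @ psi0

# Reduced density matrix of the end pair (trace out sites 2..N-1)
A = psi.reshape(2, 2 ** (N - 2), 2)                # (site 1, middle, site N)
rho = np.einsum('amb,cmd->abcd', A, A.conj()).reshape(4, 4)

evals = np.linalg.eigvalsh(rho)
S = -sum(l * np.log2(l) for l in evals if l > 1e-12)
vec = np.linalg.eigh(rho)[1][:, -1]                # rho is (numerically) pure
a, b, c, d = vec
C = 2 * abs(a * d - b * c)                         # pure-state concurrence
print("S ~ 0:", S < 1e-8, " C ~ 1:", np.isclose(C, 1.0))
```

For $N=5$ the end-pair state at $t_{M}$ is $G|{+}{+}\rangle$ with the phases of Eq. (8), so both checks pass; for larger $N$ the same construction applies but the matrices grow as $2^{N}$.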
(II)), additional phases arise in the chain dynamics, but these do not affect the entangling capacity of the remote two-qubit gate. Of more interest is the effect of variations, or errors, in the site energies. A simple estimate of the effect of errors in the energy level spectrum chris2005 suggests an overall error, or loss in fidelity for PST, that scales linearly with $N$. Figure 3 illustrates the EoF evolution as a function of the spin chain length $N$, including a random modulation of the on-site energies $E_{i}$. The on-site energies are now given by $E_{i}=\varepsilon r_{i}$, where $0\leq r_{i}\leq 1$ is a random number from a uniform distribution. Each point in Fig. 3 corresponds to the EoF at $t=t_{M}$ averaged over 100 random realizations. Figure 3 confirms that the loss in EoF scales linearly with the number of spins in the chain. Moreover, it shows that the EoF decay is not linear for increasing $\varepsilon/J_{max}$, with $J_{max}=\max_{i}\{J_{i,i+1}\}=1$. For small and medium perturbations of the on-site energies, EoF close to unity can still be achieved for all chain lengths considered. As we are dealing with multiple excitations in the chain, we also consider the effect of interaction between excitations in nearby sites by adding the following term to Eq. (II): $${\cal{H}}^{\prime}=\sum_{i=1}^{N-1}\gamma J_{0}|1\rangle\langle 1|_{i}\otimes|1\rangle\langle 1|_{i+1}.$$ (9) For example, ${\cal{H}}^{\prime}$ may correspond to a biexcitonic interaction in quantum dot-based chains damico2001 ; rinaldis2002 . Figure 4 shows the dependence of the EoF at $t=t_{M}$ on the magnitude of the maximum on-site energy perturbation $\varepsilon$ and of the site-interaction perturbation $\gamma$ for $N=9$. Even with perturbations as big as 20% of the characteristic coupling strength $J_{0}$, the EoF hardly suffers and remains above 90%, indicating that the system is extremely robust against these sorts of errors. The influence of $\gamma$ in Fig.
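The character of this disorder study can be reproduced cheaply in the single-excitation sector: add random on-site energies $E_{i}=\varepsilon r_{i}$ to the $T=1$ Hamiltonian and average the transfer fidelity $|\langle N|U(t_{M})|1\rangle|^{2}$ over realisations. This is a simplified analogue of the EoF average in Fig. 3, not a reproduction of it; the seed, sample size and function names below are our own choices ($\hbar=J_{0}=1$):

```python
import numpy as np
from scipy.linalg import expm

def transfer_fidelity(N, eps, rng, J0=1.0):
    """|<N| U(t_M) |1>|^2 in the T=1 sector, with random on-site
    energies E_i = eps * r_i, r_i uniform in [0, 1]."""
    i = np.arange(1, N)
    J = J0 * np.sqrt(i * (N - i))
    H = np.diag(J, 1) + np.diag(J, -1) + np.diag(eps * rng.random(N))
    U = expm(-1j * H * np.pi / (2 * J0))
    return abs(U[N - 1, 0]) ** 2

rng = np.random.default_rng(1)
for eps in (0.0, 0.01, 0.1):
    avg = np.mean([transfer_fidelity(9, eps, rng) for _ in range(100)])
    print(f"eps = {eps}: mean fidelity {avg:.6f}")
```

As expected from perturbation theory, the averaged fidelity degrades smoothly from unity as $\varepsilon$ grows.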
4 is limited, as three quarters of the initial input state contain no more than one excitation and are thus not subject to the effects of interactions between excitations. Next we explore the effect on the remote gate of unwanted longer range interactions avellino ; kay2006 , which could be an issue when considering pseudospins based on charge degrees of freedom. We add to Eq. (II) the perturbative term $${\cal{H}}^{\prime\prime}=\sum_{i=1}^{N-2}J_{i,i+2}\left[|1\rangle\langle 0|_{i}\otimes|0\rangle\langle 1|_{i+2}+|0\rangle\langle 1|_{i}\otimes|1\rangle\langle 0|_{i+2}\right],$$ (10) with $J_{i,i+2}=\Delta(J_{i,i+1}+J_{i+1,i+2})/2$ to simulate the original coupling modulation. A discussion of the experimental relevance of this term has been given in me1 using the examples of graphene and self-assembled quantum dots. ${\cal{H}}^{\prime\prime}$ commutes with both ${\cal{T}}$ and $M$, so that a set of common even and odd eigenstates still exists. The dependence of the EoF at $t=t_{M}$ as a function of $N$ is presented in Fig. 5, for three different values of $\Delta$. The EoF displays a decay linear in $N$. For small values of $\Delta$, relevant e.g. to graphene quantum dots me1 , the EoF is well conserved, although the effect is much more pronounced in very long chains as the increase in number of spins leads to more perturbation terms in Eq. (IV). For example, with a value of $\Delta$ as large as 0.1, a short chain ($N=9$) achieves an EoF of 0.79, with this dropping to 0.59 for a long chain ($N=30$), as shown in Fig. 5. We also note that for $\Delta=0.1$, while PST can be qualitatively measured at $t=t_{M}$, its subsequent periodicity is lost (not shown). This implies that in this case the gate should be put into effect at $t=t_{M}$. Another potential source of gate error for the remote two-qubit gate is a timing error in the “extraction” of the qubits that have undergone the gate.
Clearly from the parabolic expansion of the concurrence around its maximum at $t=t_{M}$, this error is second order in any timing error $\delta t$, consistent with the PST timing error chris2005 . V Distributed cluster ladder construction Having discussed the building block gate in detail, we now turn our attention to the repeated use of this operation to generate a cluster ladder. A cluster state resource brie2001 between multiple qubits is realised by placing all qubits in state $|+\rangle$ and then applying a controlled-phase entangling gate between all pairs of qubits that are to be connected in the layout of the cluster state. The following protocol of injection and extraction for a PST spin chain generates a cluster ladder:
1. $t=0$: Inject $|+\rangle$ at each end qubit ($i=1$ and $i=N$) into a ground state chain (all zeros). This could be performed by a SWAP operation with register qubits adjacent to each end site prepared in state $|+\rangle$. (Clearly the SWAP operation has to be fast on the timescale set by $t_{M}$.)
2. $t=t_{M}/2$: Inject $|+\rangle$ at each end qubit, which at this time will be disentangled from the rest of the chain (see Fig. 2).
3. $t=t_{M}$: Extract the end qubits and inject $|+\rangle$ at each end qubit.
4. Repeat the last step as many times as desired, at time intervals of $t_{M}/2$.
5. Extract the last two qubits in the ladder when they reach the ends of the chain.
It is assumed that each time a pair of qubits in the cluster ladder is extracted, each qubit is shuttled along a register at each end, with the next pair of $|+\rangle$ states moved into positions ready to be swapped in. This is illustrated in Fig. 6. Step 2 of the protocol, the second injection of $|+\rangle$ states, relies on the fact that the end qubits are initially empty. As this cannot be guaranteed, especially for short chains, we would then have some unwanted entanglement between the end qubits and the auxiliary qubits which carried the $|+\rangle$ states.
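Read literally, steps 1–5 define a simple timetable of injection and extraction events at intervals of $t_{M}/2$. The sketch below is one possible bookkeeping of that timetable (the function name, and our choice to represent step 5 as two final pair extractions once injections stop, are our own):

```python
def ladder_events(extra_rounds, tM=1.0):
    """Event timetable for the ladder protocol, times in units of tM.
    extra_rounds = number of repetitions of step 3 (step 4)."""
    ev = [(0.0, "inject"), (0.5 * tM, "inject")]        # steps 1, 2
    t = tM
    for _ in range(1 + extra_rounds):                   # steps 3, 4
        ev.append((t, "extract + inject"))
        t += 0.5 * tM
    ev.append((t, "extract"))                           # step 5: the two
    ev.append((t + 0.5 * tM, "extract"))                # remaining pairs
    return ev

for event in ladder_events(2):
    print(event)
```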
If we assume that the injection is done by a SWAP operation, we can however refocus the system by measuring the auxiliary qubits. Finding no excitations corresponds to a successful injection and also collapses the unwanted entanglement. Finding two excitations in the auxiliary qubits means that we have extracted all the excitations which the chain contained previously and are now back to the state of the system after step 1 of the protocol, allowing us to proceed and attempt another set of injections a time $t_{M}/2$ later. In the event that we measure a single excitation in only one of the auxiliary qubits, the system cannot easily be recovered to a form which would allow us to continue with our protocol and needs to be re-initialised. The probability of successfully injecting a pair of $|+\rangle$ states at $t_{M}/2$ is however extremely high: Fig. 7 shows the probability of a successful injection with increasing $N$, and we see that even for $N=9$ the success probability is already close to 99%, with the failure probability (the deviation from unity) further decreasing as $1/N$. These results allow us to be very confident about the efficiency of our proposed protocol. It must be noted that other injection methods, such as tunnelling for example, might not allow for a refocussing mechanism as described above. Injection methods which can result in one or both $|+\rangle$ states remaining in the auxiliary qubits will lead to retention of the unwanted entanglement between the chain and the auxiliary qubits, as measurement of the auxiliary qubits (which, in the case of a failed injection, are in a $|+\rangle$ state) does not lead to conclusive results. In this work we will therefore only consider injection of the SWAP type, with refocussing taking place immediately after injection. An equivalent refocussing for the potentially imperfect extraction of $|+\rangle$ states at $t_{M}$ (step 3 of the protocol) is however not possible.
If the generated $|+\rangle$ states are extracted into empty storage qubits via a SWAP operation, while there is no unwanted entanglement between the chain and the storage qubits, there is no measurement we can perform to check whether the extracted states are indeed $|+\rangle$ states without destroying them. This might lead to some inaccuracies for very long cluster state ladders, but the results below suggest that this error does not limit practical use of the protocol. In order to now demonstrate our protocol, we consider the generation of a ‘crossed’ square cluster state, which simply involves injection of two pairs of qubits at $t=0$ and $t=t_{M}/2$ with subsequent refocussing and their extraction at $t=t_{M}$ and $t=3t_{M}/2$ (see Fig. 8). First of all, we are dealing with a chain of length $N$ such that $(N-1)$ is a multiple of 4, so the gate in action in this case is, from Eq. (8), $$G^{\prime}=\left(\begin{array}{cccc}1&0&0&0\\ 0&0&1&0\\ 0&1&0&0\\ 0&0&0&-1\end{array}\right).$$ (11) It is worth noting that $G^{\prime}$ is the product of a CZ gate, which leads to the desired entanglement, and a SWAP gate. In Fig. 8 (a), this is illustrated by the labels on the lines representing excitations swapping as they cross, whereas Fig. 8 (b) shows how the entanglement between the individual qubits is built up. Let us now look at the generation of a crossed square cluster state that this figure illustrates. To construct a crossed square cluster state, we inject excitations 1 and 2 at sites $i=1$ and $i=N$ respectively and then wait for $t_{M}/2$ until we know that these sites are (nearly) completely disentangled from the rest of the chain. We then inject excitations 3 and 4 at sites $i=1$ and $i=N$ respectively and refocus. The change in occupation probability of the individual spins can be seen in Fig. 9(a) on the example of a 9-spin chain, for which the refocussing success probability is 0.9885.
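The decomposition of $G^{\prime}$ into a CZ and a SWAP is a one-line matrix identity (and since CZ is diagonal and symmetric under exchange, the two factors commute). A quick check:

```python
import numpy as np

# Gate G' of Eq. (11), basis {|00>,|01>,|10>,|11>}
Gp = np.array([[1, 0, 0, 0],
               [0, 0, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, -1]])

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]])
CZ = np.diag([1, 1, 1, -1])

# G' = SWAP . CZ = CZ . SWAP: the CZ supplies the entanglement,
# the SWAP accounts for the excitation labels crossing in Fig. 8(a).
print(np.array_equal(Gp, SWAP @ CZ), np.array_equal(Gp, CZ @ SWAP))
```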
The occupation probability is defined as follows: let us assume that the set $\{|\phi_{i}\rangle\}$ of $k$ basis vectors forms the basis for our spin chain, such that any spin chain state $|\psi\rangle$ can be written as $|\psi\rangle=\sum^{k}_{i=1}c_{i}|\phi_{i}\rangle$, with $\sum^{k}_{i=1}|c_{i}|^{2}=1$. The basis vectors can be represented as $|\phi_{i}\rangle=|j_{1}j_{2}\cdots j_{N}\rangle$, with $j=\{0,1\}$. Those basis vectors which contribute to the occupation of a site $s$ are then $|\phi_{i,s}\rangle=|j_{1}j_{2}\cdots 1_{s}\cdots j_{N}\rangle$, where $1\leq s\leq N$. Each $|\phi_{i,s}\rangle$ is weighted by its coefficient $c_{i,s}$ and so the occupation probability of a site $s$ is given by $\sum_{i=1}^{k}|c_{i,s}|^{2}$. As a result, it is to be expected that the total area of the histogram representing the occupation probability is equal to the number of excitations in the system, so $\sum^{N}_{s=1}\sum_{i=1}^{k}|c_{i,s}|^{2}=T$ for a specific excitation sector. In Fig. 9(a) we see how inaccuracies in manipulation of the spin chain lead to deviations from the ideal scenario. As the entropy of the depicted 9-spin chain does not reach zero at $t_{M}/2$ (see Fig. 2), the extremal spins $1$ and $N$ are not entirely decoupled from the rest of the chain. Despite refocussing after injection, which guarantees an occupation probability of exactly $0.5$ of spins $1$ and $N$ after injection at $t_{M}/2$ (frame (a)), we see in frame (b) that the occupation probability of spins $1$ and $N$ before extraction at $t=t_{M}$ is bigger than $0.5$. Consequently, in frame (c) the occupation probability of spins $1$ and $N$ before extraction at $3t_{M}/2$ is smaller than $0.5$. As we will see below, this will lead to a small loss in quality of the produced crossed square cluster state.
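This definition of the occupation probability translates directly into code. A minimal sketch (the function name and bit convention, site 1 as the most significant bit, are our own), including the sum rule $\sum_{s}P_{s}=T$ for a state of definite excitation number:

```python
import numpy as np

def occupation_probabilities(psi, N):
    """P_s = sum of |c|^2 over basis states with site s excited.
    psi is a state vector over the 2^N computational basis, with
    site 1 taken as the most significant bit."""
    probs = np.abs(psi) ** 2
    P = np.zeros(N)
    for idx, p in enumerate(probs):
        for s in range(N):
            if (idx >> (N - 1 - s)) & 1:
                P[s] += p
    return P

# For a state with definite excitation number T, sum_s P_s = T:
N = 4
psi = np.zeros(2 ** N)
psi[0b1100] = psi[0b0011] = 1 / np.sqrt(2)   # equal superposition, T = 2
P = occupation_probabilities(psi, N)
print(P, np.isclose(P.sum(), 2.0))
```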
After $t=t_{M}/2$, we have injected all the necessary excitations and wait for another $t_{M}/2$ until $t=t_{M}$ before using a SWAP operation to extract excitations 3 and 4 at their injection sites, where the swapping qubits which are not part of the chain are presumed initially empty and now serve as storage. At this stage, the four excitations are already entangled, as can be seen in Fig. 8 (b). Finally, at $3t_{M}/2$ we extract excitations 1 and 2 at their initial injection sites via another SWAP operation, thus completing the cluster state. Again, the auxiliary qubits used to operate this SWAP are presumed initially unexcited and now serve as storage. The dynamics of these qubits is included in the simulation. Figure 9(c) illustrates that following this operation, the chain is virtually empty and thus ready for use again. We note that the residual occupation in the chain is related to the imperfect disentanglement for a 9-spin chain of the end qubits at $t=t_{M}/2+nt_{M}$, with $n=0,1,2,\cdots$. The end qubits disentangle virtually perfectly as longer chains are considered, see Fig. 2. Monitoring the quality of even just this four-qubit cluster state cannot be achieved via the EoF, which is only suitable as a measure of bipartite entanglement, but we instead consider the fidelity $F$ against the ideal cluster state $\psi_{ideal}$ that we are hoping to achieve: $$F=|\langle\psi_{ideal}|e^{-i{\cal{H}}t/\hbar}|\psi_{ini}\rangle|^{2},$$ (12) so perfect cluster state construction is achieved when $F=1$. As in fact we are more interested in the true nature of the achieved state rather than simply in the amount of entanglement produced, this measure is very suitable for our purposes. 
Figure 10 shows the evolution of this fidelity during and after implementation of the construction protocol for a crossed square cluster state, without the final read-out step at $t=3t_{M}/2$, so that the evolution of the achieved cluster state continues periodically beyond this time, achieving the same maximum fidelity every $2t_{M}$ after $t=3t_{M}/2$. Notice the step at $t=t_{M}/2$, which corresponds to the injection of the second pair of excitations. This slight discontinuity is most clearly visible for $N=9$, where we know from Fig. 2 that the dip in entropy is not as low as for longer chains. Nonetheless, $N=9$ achieves a fidelity of 0.9915 at $t=3t_{M}/2$ and the two longer chains pictured both achieve unity at this time. This confirms that a 9-spin chain is long enough to demonstrate the effects we wish to highlight and so we will use this chain as our default device when considering a single value of $N$ only. Refocussing for $N=13$ and $N=17$ is done with success probabilities of 0.9993 and 1.000 respectively, showing that modest increases in $N$ lead to both higher fidelity and higher refocussing success probability. As in Fig. 2, if no excitations are read out at $t=3t_{M}/2$, the evolution in Fig. 10 continues to be periodic over $2t_{M}$. We can confirm this result by also monitoring the entropy $S$, with the difference that instead of using the density matrix $\rho_{1,N}$ of the two end spins, we use the density matrix of the four storage qubits, i.e. the two end qubits plus the auxiliary qubits. This is illustrated in Fig. 11, which clearly shows the entropy $S$ dipping to zero (or virtually zero for $N=9$). Again, the final read-out step at $t=3t_{M}/2$ is omitted, leading to a continuous evolution of the constructed cluster state which displays dips in entropy every $t_{M}$ after the initial dip at $t=3t_{M}/2$. Analogous to the dip width decrease observed in Fig. 2, the variation in width of the dip at $1.5t_{M}$ is approximated by $1/\sqrt{N}$.
$S$ displays an additional dip at $t=0.75t_{M}$, which is almost unnoticeable for short chains such as $N=9$ but tends to zero for very long chains such as $N=25$. This corresponds to a temporary decoupling of the end spins (sites 1 and $N$) as the excitations pass through each other (as indicated by the intersecting lines in Fig. 8). To illustrate this, we record the site occupation probabilities at $t=0.75t_{M}$ in Fig. 12 for a short chain ($N=9$ in panel (a)) and a long chain ($N=21$ in panel (b)). We see that for $N=9$, where we see no dip at $t=0.75t_{M}$ in Fig. 11, there is a high occupation probability for the end spins, whereas for $N=21$, where we have a significant dip in entropy $S$ at $t=0.75t_{M}$, the occupation probability of the end spins is very low, leaving them nearly decoupled from the rest of the chain. We also note that regardless of the chain length, the middle of the chain is empty at this time. VI Errors in the cluster knitting protocol Just like the two-qubit gate in section III, the cluster knitting protocol we presented will be subject to various unpredictable errors and sources of decoherence. Due to its higher complexity, the knitted cluster state is potentially more affected by defects similar to the ones discussed in section IV. Due to computational restrictions, the more involved nature of the numerical simulations presented in this section also puts a limit on the lengths of spin chains we can investigate. The longest spin chain we consider here will therefore have 25 spins and not 29 as in section IV. First of all, we re-consider the effect of random on-site energies $E_{i}$. Again, each point in Figs. 13 and 14 corresponds to an average taken over 100 realisations. Figure 13 shows that the loss in fidelity still scales linearly with the number of spins in the chain, but the effect observed is much more detrimental than it was for the simple two-qubit gate (see Fig. 3) due to the increased number of excitations (now 4 vs.
2 in the two-qubit gate). Again, the decay in fidelity is not linear for increasing $\varepsilon/J_{max}$. While for very small values of $\varepsilon$, fidelity is very well maintained even for very long chains, a perturbation of as little as 5% may already be very noticeable. Short chains up to $N=17$ achieve over 80% of the fidelity, whereas longer chains such as $N=25$ suffer losses of over 30%. Similarly, when $\varepsilon=0.1$, acceptable fidelity is only achievable for very short chains such as $N=9$, while chains of 17 spins and more become unsuitable for the proposed protocol as they achieve less than half of the desired fidelity. In Fig. 14, we see the combined influences of non-uniform on-site energies $E_{i}$, weighted by $\varepsilon$, and ${\cal{H}}^{\prime}$ as given by Eq. (9) added to the Hamiltonian (II). Compared to Fig. 4, the effect of $\varepsilon$ has become extremely detrimental, leading to a loss in fidelity of over 60% for large values of $\varepsilon=0.2$. Even though a direct comparison of Figs. 4 and 14 is not possible due to the different measures of state transfer quality, it is safe to say that the crossed square cluster state is much more affected by perturbations due to unwanted non-uniform on-site energies. However, for smaller values of $\varepsilon$ up to a few percent, the fidelity of the knitted crossed square cluster is still very good at about 90% of the ideal value. As the knitting protocol involves up to four excitations, the effect of ${\cal{H}}^{\prime}$ is visible slightly more clearly but remains a very minor factor in the transfer quality, affecting the fidelity by a few percent only. The accuracy of Figs. 4 and 14 is the same; the perceived roughness of data points in Fig. 4, which is due to the randomisation of the influence of $\varepsilon$, also appears to the same extent in Fig. 14 but is less visible due to the different scale.
Finally, we also re-consider the influence of next-nearest neighbour interaction, as given previously by Eq. (IV). Figure 15 shows that even for very small values of $\Delta$ below 5%, there is a noticeable loss in state fidelity. The decay now scales as $N^{2}$ for the shown values of $\Delta$ and $N$, resulting in fidelities above 90% only for $\Delta=0.01$ for chains of lengths up to 25 spins. For chains of 9 spins a next-nearest neighbour coupling $\Delta$ of up to 5% still gives a fidelity of almost 90% (not shown), while systems with 25 spins and $\Delta=0.03$ lose the vast majority of their fidelity. For larger values of $\Delta$, while the first fidelity peak at $t=3t_{M}/2$ might still be of acceptable value for short and medium length chains, it has to be noted that the periodicity is subsequently lost. Depending on the type of algorithm the obtained cluster state is intended for, this might pose a serious problem. For $\Delta=0.05$, chains of 21 spins or longer no longer form fidelity peaks at $t=3t_{M}/2$, so that the protocol fails. This magnified detrimental effect of next-nearest neighbour interaction is again due to the increase in number of excitations, a phenomenon that can also be confirmed in non-entangling spin chains subject to this perturbation (not shown). This underlines the necessity for strict control of longer range interactions, particularly in long chains. Another potential issue in the fabrication of distributed cluster ladders is that of untimely or non-synchronised injection of excitations into the spin chain. The effect of the delay of a single qubit on a two-qubit gate has been discussed in Ref. me1 , but as the knitting of a distributed cluster state will involve four excitations most of the time, the possible delays are more involved. We will investigate the effect on the formation of a crossed square cluster state, the quality of which shall be measured via the state fidelity, and consider four separate delay scenarios.
Referring to Fig. 8, the system is subjected to a delay $\delta t$ as follows:
• (A): Excitations 3 and 4 are both injected at $t_{M}/2+\delta t$.
• (B): Excitation 4 is injected at $t_{M}/2+\delta t$ (while excitation 3 is on time).
• (C): Excitation 2 is injected at $\delta t$ and excitation 4 is injected at $t_{M}/2+\delta t$ (this corresponds to all injections on one side of the chain being delayed by the same amount).
• (D): Excitation 1 is injected at $\delta t$ and excitation 4 is injected at $t_{M}/2+\delta t$.
Refocussing as discussed in section V is done immediately after each individual injection by measuring the auxiliary qubits. Despite an initial perturbation to the generation of the crossed square cluster state, we see in Fig. 16 (describing scenario A) that the evolution of the system continues to be essentially periodic if no read-out at $3t_{M}/2$ is undertaken. There is a clear kink in the plots of both entropy and state fidelity at $t_{M}/2+0.1t_{M}$, where $0.1t_{M}$ is the delay $\delta t$ of the injection of excitations 3 and 4. Subsequently, the fidelity peak occurs at $3t_{M}/2+\delta t$ and does not reach unity but a value of 0.9580, demonstrating the robustness of the system. Figure 17 shows the amplitude of the first fidelity peak at $3t_{M}/2+\delta t$ for a range of $\delta t$ and all four scenarios considered. First of all, we note that none of the delay scenarios leads to a fidelity of less than 90% for a delay time of 5% of the mirroring time $t_{M}$, while a delay of up to 10% of $t_{M}$ might lead to a loss of over 25% of fidelity. There is a clear discrepancy between scenario A and the other three scenarios, with scenario A performing much better and suffering just over 10% fidelity loss for a delay of 10% of $t_{M}$. Despite scenario B having fewer delayed excitations, it performs significantly worse.
As such, it becomes clear that the system favours symmetrical input, also shown by scenario C, with two excitations on the same end of the chain being delayed, performing slightly better than scenario D, where excitations at opposite ends of the chain are delayed.
VII Conclusions
In conclusion, we have presented a method to knit distributed cluster states, using only a single spin chain set up for perfect state transfer and its natural dynamics. By closely observing the entropy of the chain, we have shown that the two end spins become decoupled from the rest of the chain at regular intervals, allowing us to inject further excitations into the chain without perturbing its existing excitation subspaces. The subsequent natural chain dynamics lead the ensemble of excitations to entangle the end spins, which in turn decouple from the rest of the chain, allowing us to extract their states. As this routine of injections and extractions can be repeated without theoretical restrictions, we are thus producing a knitted cluster state consisting of an arbitrary even number of spins. A further prospect of our technique is the possibility of knitting other topological arrangements of cluster states by varying the injection and extraction timings and rates. As demonstrated in Figs. 6 and 8, each crossing of excitations leads to a new edge in the final produced state. Provided that the chain used to knit an arbitrary arrangement is long enough to allow for the necessary localisation of excitations, the number of entanglement bonds produced is therefore freely controllable. Examples of topologically useful entangled states that could be achieved in this way are presented in Ref. [23]. We would, however, expect more complicated structures which require very long chains, in particular those involving large numbers of excitations, to be more prone to the sources of decoherence we discussed.
We also examined multiple potential causes of error when knitting states, both for the fundamental building block of our protocol, the two-qubit gate, and for the smallest knitted cluster state, the crossed square cluster state. Consistent with existing results on the influence of errors on state transfer in spin chains, the two-qubit gate is very robust against unwanted interactions between different excitations as well as perturbations in the on-site energies, maintaining high levels of over 90% of the desired entanglement even for large errors of 10% and very long spin chains. Longer range interactions, on the other hand, have potentially very detrimental effects, with large perturbations leading to a significant decay in entanglement as well as a loss of periodicity. Consequently, we observed analogous phenomena for the crossed square cluster state, amplified greatly by the increased number of excitations. Here, only non-uniformity in the on-site energies of up to 5% can be tolerated in order to ensure acceptable levels of over 70% of formed entanglement in long chains. Again, it is, however, the next-nearest neighbour interaction that deserves the most attention in the fabrication process of a spin chain, as even small unwanted interactions of just a few percent can lead to significant fidelity losses, as well as a loss of periodicity of the system. Additionally, we have considered the effect of the mistimed injection of one or more excitations on the formation of the desired crossed square cluster state and found that pairwise delay at the second injection time is much less detrimental than the delay of a single excitation or delay of excitations at different injection times, but also that non-symmetric delay leads to a larger fidelity loss than delay restricted to the input on one end of the chain.
Overall, the effect of delayed input as considered here is, however, quite limited and does not lead to a loss of more than 10% of the perfect state fidelity for delays of up to 5% of $t_{M}$. Our studies have analysed and demonstrated a new protocol for the fabrication of distributed cluster states which requires no additional resources or mechanisms beyond the set-up of a spin chain for perfect state transfer and the associated SWAP operations used to inject and extract information. In the context of limited errors deviating less than 10% from the ideal set-up, the knitting method proves to be a sound and stable method for generating distributed cluster states, making the utilisation of spin chains in quantum communication an ever more attractive prospect.
RR was supported by EPSRC-GB and Hewlett-Packard.
References
(1) S. Bose, Contemp. Phys. 48, 1 (2007), and references therein.
(2) A. Kay, Int. J. Quantum Inf. 8, 641 (2010), and references therein.
(3) J. Twamley, Phys. Rev. A 67, 052318 (2003).
(4) J. Tejada, E. M. Chudnovsky, E. del Barco, et al., Nanotechnology 12, 181 (2001).
(5) J. Zhang, N. Rajendran, X. Peng and D. Suter, Phys. Rev. A 76, 012317 (2007).
(6) I. D’Amico, in Semiconductor Research Trends, edited by K. G. Sachs (Nova Science Publishers, 2007).
(7) I. D’Amico, Microelectronics Journal 37 (12), 1440-41 (2006).
(8) G. M. Nikolopoulos, D. Petrosyan and P. Lambropoulos, J. Phys. Condens. Matter 16, 28 (2004).
(9) S. Bose, Phys. Rev. Lett. 91, 207901 (2003).
(10) D. Burgarth, K. Maruyama and F. Nori, Phys. Rev. A 79, 020305 (2009).
(11) G. De Chiara, D. Rossini, S. Montangero and R. Fazio, Phys. Rev. A 72, 012323 (2005).
(12) A. Wojcik, et al., Phys. Rev. A 72, 034303 (2005).
(13) D. Burgarth and S. Bose, Phys. Rev. A 71, 052315 (2005).
(14) M. Christandl, N. Datta, T. C. Dorlas, A. Ekert, A. Kay and A. J. Landahl, Phys. Rev. A 71, 032312 (2005).
(15) R. Raussendorf and H. J. Briegel, Phys. Rev. Lett. 86, 5188 (2001).
(16) H. J. Briegel and R. Raussendorf, Phys. Rev. Lett. 86, 910 (2001).
(17) M.-H. Yung and S. Bose, Phys. Rev. A 71, 032310 (2005).
(18) S. R. Clark, C. Moura Alves and D. Jaksch, New J. Phys. 7, 124 (2005).
(19) S. R. Clark, A. Klein, M. Bruderer and D. Jaksch, New J. Phys. 9, 202 (2007).
(20) Of course it must be appreciated that the chain length $N$ does enter the nearest-neighbour coupling expression (1).
(21) P. Karbach and J. Stolze, Phys. Rev. A 72, 030301 (2005).
(22) C. Albanese, M. Christandl, N. Datta and A. Ekert, Phys. Rev. Lett. 93, 230502 (2004).
(23) N. B. Lovett and B. T. H. Varcoe, arXiv:1101.1220v1 [quant-ph] (to be published in International Journal of Unconventional Computing, special issue: New Worlds of Computation).
(24) W. Wootters, Phys. Rev. Lett. 80, 2245 (1998).
(25) I. D’Amico and F. Rossi, Appl. Phys. Lett. 79, 1676 (2001).
(26) S. De Rinaldis et al., Appl. Phys. Lett. 81, 4236 (2002).
(27) A. Kay, Phys. Rev. A 73, 032306 (2006).
(28) M. Avellino, A. J. Fisher and S. Bose, Phys. Rev. A 74, 012321 (2006).
(29) R. Ronke, T. P. Spiller and I. D’Amico, Phys. Rev. A 83, 012325 (2011).
Kriging-Based 3-D Spectrum Awareness for Radio Dynamic Zones Using Aerial Spectrum Sensors ††thanks: This work is supported in part by the NSF PAWR award CNS-1939334 and its associated supplement for studying National Radio Dynamic Zones (NRDZs). The authors would like to thank the Wireless Research Center for measuring antenna patterns by using an anechoic chamber. The datasets and post-processing scripts for obtaining the results in this manuscript are publicly accessible at [1].††thanks: S. J. Maeng, Ozgur Ozdemir, İ. Güvenç, and Mihail L. Sichitiu are with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 27606 USA (e-mail: [email protected]; [email protected]; [email protected]; [email protected]). Sung Joon Maeng, Ozgur Ozdemir, Member, IEEE, İsmail Güvenç, Fellow, IEEE, and Mihail L. Sichitiu, Member, IEEE
Abstract
Radio dynamic zones (RDZs) are geographical areas within which dedicated spectrum resources are monitored and controlled to enable the development and testing of new spectrum technologies. Real-time spectrum awareness within an RDZ is critical for preventing interference with nearby incumbent users of the spectrum. In this paper, we consider a 3D RDZ scenario and propose to use unmanned aerial vehicles (UAVs) equipped with spectrum sensors to create and maintain a 3D radio map of received signal power from different sources within the RDZ. In particular, we introduce a 3D Kriging interpolation technique that uses realistic 3D correlation models of the signal power extracted from extensive measurements carried out at the NSF AERPAW platform. Using C-Band signal measurements by a UAV at altitudes between 30 m and 110 m, we first develop realistic propagation models on air-to-ground path loss, shadowing, spatial correlation, and semi-variogram, while taking into account the knowledge of antenna radiation patterns and ground reflection.
Subsequently, we generate a 3D radio map of a signal source within the RDZ using the Kriging interpolation and evaluate its sensitivity to the number of measurements used and their spatial distribution. Our results show that the proposed 3D Kriging interpolation technique provides significantly better radio maps when compared with an approach that assumes perfect knowledge of path loss.
Index Terms: 3-D spectrum awareness, AERPAW, antenna radiation pattern, I/Q samples, LTE, Kriging interpolation, propagation modeling, RDZ, RSRP, UAV, USRP.
I Introduction
As the demand for advanced wireless communication services continues to grow, efficient use of spectrum resources is becoming increasingly vital for future wireless technologies. Therefore, the development, testing, and evaluation of effective mechanisms to improve spectrum efficiency and sharing have become imperative. Although there is a considerable body of literature that examines and analyzes spectrum sharing using theoretical models and simulations, there is a clear need to assess these approaches in real-world deployment scenarios, taking into account realistic propagation conditions. In this particular context, radio dynamic zones (RDZs) emerge as a new concept [2, 3, 4]: geographical areas with dedicated spectrum resources that are effectively managed and controlled in real-time to test new wireless innovations. This management is achieved through the sensing of signals entering and leaving the zone [5]. RDZs serve as testing grounds for novel spectrum sharing concepts and emerging technologies aimed at improving spectrum efficiency within specific deployment scenarios. In RDZs, it becomes crucial to ensure minimal or no interference to existing incumbent users of the spectrum. Therefore, monitoring of signal leakage to passive or active receivers outside the RDZ becomes necessary. This requires the installation and deployment of sensors within the RDZ.
Monitoring scope can include both terrestrial areas and airspace, e.g., for coexistence with unmanned aerial vehicles (UAVs) and satellites. By monitoring and modeling the interference levels experienced by passive receivers in these aerial scenarios, more efficient spectrum sharing can be achieved. The use of radio environment maps (REMs) [6] presents an effective approach for constructing dynamic interference maps within an RDZ, which can be generated for each location and frequency of interest. These radio maps are generated by collecting signal power data from deployed sensors and incorporating their corresponding location information. However, it is often impractical to position sensors throughout the entire RDZ area. Instead, signal power at unknown locations can be predicted using signal processing techniques like Kriging [7], based on measurements from nearby sparsely deployed sensors. Kriging takes advantage of the spatial correlation between different locations to optimize the prediction of signal power. By employing Kriging, we can efficiently interpolate and generate a radio map of signal power using sparsely measured datasets from the sensors. In the existing literature, several studies have focused on modeling the spatial correlation of shadowing in received signals [8, 9], with experimental measurements provided in [10, 11]. The application of Kriging for generating radio maps of signal power has been validated using both simulated and real datasets [12]. The potential of Kriging for spectrum monitoring and interference management has been explored in  [13], while [14] extends Kriging interpolation to spectrum interpolation and analyzes it using measurement datasets. For ground-to-UAV communications in suburban environments, path loss and shadowing have been modeled based on measurement datasets [15, 16]. Additionally, the spatial correlation along the linear trajectory of a UAV has been investigated [17]. 
In our recent works, we introduce the RDZ concept and discuss its features and requirements [3]. Furthermore, we propose a leakage sensing algorithm using Kriging in the two-dimensional (2D) plane of the RDZ [18]. Notably, to the best of our knowledge, the literature does not address the use of Kriging to obtain a three-dimensional (3D) aerial radio map based on measurements obtained from UAVs. In this paper, we propose to develop and use a 3D radio map to effectively sense signal leakage from an RDZ to the receivers outside of the RDZ. We employ a UAV as a mobile aerial sensor, collecting signal power measurements from distinct receivers within the RDZ. The 3D interpolation of the collected signal power is performed using the Kriging technique. The proposed method is thoroughly analyzed and validated through a measurement campaign. The main contributions of this paper can be summarized as follows:
• Modeling 3D Radio Propagation: Considering a 3D spectrum sensing scenario, we develop and analyze a path loss model that accounts for spatially correlated shadowing, two-ray wireless propagation, and measured antenna radiation patterns to accurately model 3D radio propagation. We integrate 3D antenna measurements obtained in an anechoic chamber and study improvements in model accuracy when compared to using dipole and omnidirectional antenna patterns.
• Semi-Variogram Based Kriging Interpolation: We introduce a novel method for Kriging interpolation specifically designed for 3D spectrum monitoring. This approach leverages a semi-variogram technique to achieve accurate and efficient interpolation across a 3D volume using a limited set of measurements.
• Comparison with Measurement Data: We evaluate and compare the accuracy of our proposed 3D propagation models with the measurement data collected using software-defined radios (SDRs) at various UAV altitudes.
This analysis provides valuable insights into the performance and reliability of the proposed approach.
The rest of this paper is organized as follows. In Section II, we present the system model for 3-D spectrum sensing, radio propagation, and spatial correlation in an RDZ, while in Section III, we introduce the Kriging-based signal interpolation method for generating a 3D radio map. In Section IV, we describe the details of our measurement campaigns for obtaining I/Q signal samples at a UAV from an LTE-based signal source on the ground, and our measurements in an anechoic chamber for characterizing the antenna radiation patterns. In Section V, we analyze the effectiveness of the proposed 3D path-loss models in predicting the received signal power at different UAV altitudes and locations. We present numerical results on Kriging-based 3D radio map interpolation for various scenarios in Section VI, and the last section concludes the paper.
II System Model
In this section, we present the models utilized for spectrum sensing within an RDZ. Specifically, we consider a scenario where an aerial spectrum sensor traverses the area and captures received signals from a base station (BS). Radio propagation, correlation, and antenna radiation pattern models are also presented.
II-A 3-D Spectrum Sensing with an Aerial Mobile Sensor
An RDZ should protect incumbent users outside of the zone by controlling and managing interference signals radiating from inside the zone. The incumbent users may include smart devices and aerial vehicles, as well as sensitive scientific passive receivers such as satellites and ground-based radio astronomy receivers in radio quiet zones (RQZs) [19]. Our envisioned RDZ concept is illustrated in Fig. 1. The real-time spectrum sensing within the boundary of the RDZs is conducted by deployed fixed/mobile ground and aerial sensor nodes, which is an essential technique to manage dynamic spectrum usage.
The UAV moves across the RDZ space along a multi-altitude trajectory, capturing signal data throughout. This paper primarily focuses on the study of real-time signal sensing in the volume of space to monitor the signal leakage from RDZs. Mobile aerial nodes, in the form of UAVs, collect signal power data as they follow predefined trajectories. Subsequently, the RDZ system leverages the collected dataset from the aerial nodes to generate a radio map depicting the signal power surrounding the RDZ space. The interpolation of this dataset facilitates the construction of a comprehensive representation of signal power distribution.
II-B Radio Propagation Model
The location of a BS and a UAV can be represented by $$\displaystyle\mathbf{l}^{\rm bs}$$ $$\displaystyle=(\psi^{\rm bs},\omega^{\rm bs},h^{\rm bs}),\;\mathbf{l}^{\rm uav}(t)=(\psi^{\rm uav},\omega^{\rm uav},h^{\rm uav}),$$ (1) where $\psi$, $\omega$, and $h$ denote the latitude, longitude, and altitude of the location. Note that although the location can generally be represented by $x$, $y$, $z$ in 3D Cartesian coordinates, we express it by latitude, longitude, and altitude to use the information given by GPS sensors. The time-varying location of a UAV is given by $\mathbf{l}^{\rm uav}(t)$. The horizontal distance and the vertical distance between a BS and a UAV can be expressed as [20] $$\displaystyle d_{\rm h}(\mathbf{l}^{\rm bs},\mathbf{l}^{\rm uav})$$ $$\displaystyle=\arccos\left(\sin\psi^{\rm uav}\sin\psi^{\rm bs}\right.$$ $$\displaystyle\left.+\cos\psi^{\rm uav}\cos\psi^{\rm bs}\cos(\omega^{\rm bs}-\omega^{\rm uav})\right)\times A,$$ (2) $$\displaystyle d_{\rm v}(\mathbf{l}^{\rm bs},\mathbf{l}^{\rm uav})$$ $$\displaystyle=|h^{\rm bs}-h^{\rm uav}|,$$ (3) where $A$ is the radius of the earth ($\approx 6378137$ m).
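The two distance expressions above translate directly into code. The following Python sketch (the helper names and the (latitude, longitude, altitude) tuple layout are our own, mirroring (1)) implements the great-circle horizontal distance of (2) and the vertical distance of (3):

```python
import math

EARTH_RADIUS_M = 6378137.0  # A in Eq. (2)


def horizontal_distance(l_bs, l_uav):
    """Great-circle horizontal distance of Eq. (2).

    Locations are (latitude_deg, longitude_deg, altitude_m) tuples."""
    psi_bs, omega_bs = math.radians(l_bs[0]), math.radians(l_bs[1])
    psi_uav, omega_uav = math.radians(l_uav[0]), math.radians(l_uav[1])
    # Clamp to [-1, 1] to guard against floating-point round-off in acos.
    cos_central = min(1.0, max(-1.0,
        math.sin(psi_uav) * math.sin(psi_bs)
        + math.cos(psi_uav) * math.cos(psi_bs) * math.cos(omega_bs - omega_uav)))
    return math.acos(cos_central) * EARTH_RADIUS_M


def vertical_distance(l_bs, l_uav):
    """Altitude difference of Eq. (3)."""
    return abs(l_bs[2] - l_uav[2])
```

For example, two points on the equator one degree of longitude apart are separated horizontally by about 111.3 km, independent of their altitudes.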
Then, the 3D distance between a BS and a UAV is given by $$\displaystyle d_{\rm 3D}(\mathbf{l}^{\rm bs},\mathbf{l}^{\rm uav})$$ $$\displaystyle=\sqrt{d_{\rm h}(l^{\rm bs},l^{\rm uav})^{2}+d_{\rm v}(l^{\rm bs},l^{\rm uav})^{2}}.$$ (4) Next, the elevation angle between a BS and a UAV can be expressed as $$\displaystyle\theta_{l}$$ $$\displaystyle=\tan^{-1}\left(\frac{d_{\rm v}}{d_{\rm h}}\right).$$ (5) To develop a propagation model, we make use of a first-order approximation and consider the rural environment in which we collect measurements. In this scenario, we employ the two-ray ground reflection model to represent the path loss between a BS and a UAV. This model accounts for a line-of-sight (LoS) path as well as a strong ground reflection path, both contributing to the received signal as the two dominant paths in an open area such as a rural environment. The path loss characterized by the two-ray ground reflection model can be expressed as follows [21, Chapter 2]: $$\displaystyle\mathsf{PL}_{\rm twm}(\mathbf{l}^{\rm bs},\mathbf{l}^{\rm uav})=\left(\frac{\lambda}{4\pi}\right)^{2}\bigg{|}\underbrace{\frac{\sqrt{\mathsf{G}_{\rm bs}(\phi_{l},\theta_{l})\mathsf{G}_{\rm uav}(\phi_{l},\theta_{l})}}{d_{\rm 3D}}}_{\text{LoS signal}}$$ $$\displaystyle+\underbrace{\frac{\Gamma(\theta_{r})\sqrt{\mathsf{G}_{\rm bs}(\phi_{r},\theta_{r})\mathsf{G}_{\rm uav}(\phi_{r},\theta_{r})}e^{-j\Delta\tau}}{r_{1}+r_{2}}}_{\text{ground reflected signal}}\bigg{|}^{2},$$ (6) where $\mathsf{G}_{\rm bs}(\phi,\theta)$, $\mathsf{G}_{\rm uav}(\phi,\theta)$, $\lambda$, $\phi$ denote the antenna gain of the BS, the antenna gain of the UAV, the wavelength, and the azimuth angle, respectively, $\theta_{r}=\tan^{-1}\left(\frac{h^{\rm bs}+h^{\rm uav}}{d_{\rm h}}\right)$ represents the ground reflection angle, and $\Delta\tau=\frac{2\pi(r_{1}+r_{2}-d_{\rm 3D})}{\lambda}$ indicates the phase difference between the two paths. The distance and the angle parameters in the two-ray ground reflection model are illustrated in Fig. 2.
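The two-ray expression in (6) can be sketched numerically as follows. This is a hedged illustration, not the paper's implementation: the unit antenna gains and the relative ground permittivity value are placeholder assumptions (the paper instead uses measured 3D radiation patterns, and $\Gamma(\theta_{r})$ is the vertical-polarization reflection coefficient that appears in (6)):

```python
import cmath
import math


def two_ray_path_loss(d_h, h_bs, h_uav, wavelength, eps_r=15.0,
                      g_bs=1.0, g_uav=1.0):
    """Two-ray ground-reflection model of Eq. (6), returning the linear
    path gain (the quantity PL_twm as written; multiply Tx power by it).

    Unit isotropic gains g_bs, g_uav and eps_r = 15.0 are illustrative
    placeholders, not values from the paper."""
    d_3d = math.hypot(d_h, h_bs - h_uav)      # LoS path length, Eq. (4)
    refl = math.hypot(d_h, h_bs + h_uav)      # r1 + r2, reflected path length
    theta_r = math.atan2(h_bs + h_uav, d_h)   # ground reflection angle
    # Reflection coefficient for a vertically polarized signal.
    root = math.sqrt(eps_r - math.cos(theta_r) ** 2)
    gamma = (eps_r * math.sin(theta_r) - root) / (eps_r * math.sin(theta_r) + root)
    # Phase difference between the LoS and ground-reflected paths.
    delta_tau = 2.0 * math.pi * (refl - d_3d) / wavelength
    field = (math.sqrt(g_bs * g_uav) / d_3d
             + gamma * math.sqrt(g_bs * g_uav) * cmath.exp(-1j * delta_tau) / refl)
    return (wavelength / (4.0 * math.pi)) ** 2 * abs(field) ** 2
```

Dropping the second term inside the magnitude recovers the free-space model of (8), which is useful as a sanity check when the reflected path is weak.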
The ground reflection coefficient with the vertically polarized signal is given by $$\displaystyle\Gamma(\theta_{r})$$ $$\displaystyle=\frac{\varepsilon_{0}\sin\theta_{r}-\sqrt{\varepsilon_{0}-\cos^{2}\theta_{r}}}{\varepsilon_{0}\sin\theta_{r}+\sqrt{\varepsilon_{0}-\cos^{2}\theta_{r}}},$$ (7) where $\varepsilon_{0}$ is the relative permittivity of the ground, whose value depends on the type of ground. The two signal components in (II-B) are received and combined with a phase difference. If we only consider the first LoS term in the path loss, we obtain the free-space path loss model, given as $$\displaystyle\mathsf{PL}_{\rm fs}=\left(\frac{\lambda}{4\pi}\right)^{2}\left|\frac{\sqrt{\mathsf{G}_{\rm bs}(\theta_{l})\mathsf{G}_{\rm uav}(\theta_{l})}}{d_{\rm 3D}}\right|^{2}.$$ (8) Using (II-B), the received signal power of a UAV in dB scale can be expressed as $$\displaystyle r$$ $$\displaystyle=\mathsf{P}_{\rm Tx}-\mathsf{PL}_{\rm twm}^{(\rm dB)}+w,$$ (9) where $\mathsf{P}_{\rm Tx}$, $w$ denote the transmit power and the shadowing component, respectively. Note that the path loss term in (9) is converted to dB scale. The shadowing term generally follows a lognormal distribution and is modeled by a zero-mean Gaussian process with a spatial covariance [8]. The correlation between received signals at two different locations is generally characterized by a function of the distance between those locations. Note that we do not take into account small-scale fading in the received signal since we assume that the effect is eliminated by averaging the samples within the proper time interval [10].
II-C Spatial Correlation Model of Received Signal
In this section, we focus on describing the correlation function between the received signals at different locations of a UAV.
Since the spatial correlation primarily depends on the shadowing component ($w$) in the received signal in (9), we can capture the correlation between received signals ($r$) using the correlation between the shadowing components without loss of generality. It is well known that the correlation between two different locations is characterized by a function of their physical distance. Typically, this correlation attenuates exponentially as the physical distance between the locations increases [10]. However, most existing works in the literature primarily focus on terrestrial networks and do not fully consider 3D topologies. Due to this limitation, the spatial correlation between two locations with different vertical positions (heights) has not, to the best of our knowledge, been extensively studied. Considering the unique characteristics of UAV-based scenarios, where altitude plays a crucial role, it becomes essential to investigate and understand the spatial correlation between locations at different vertical positions. This exploration will allow for a more comprehensive modeling of the correlation in 3D scenarios, considering the impact of vertical distance in addition to horizontal distance. In our work, we first model the spatial correlation as a function of the vertical distance ($d_{\rm v}$) as well as the horizontal distance ($d_{\rm h}$). Then, we define the correlation function between 3D locations as a function of both the vertical distance and the horizontal distance. The spatial correlation between two different locations of a UAV, i.e., between $l^{\rm uav}_{i}$ and $l^{\rm uav}_{j}$, can be expressed as $$\displaystyle R(l^{\rm uav}_{i},l^{\rm uav}_{j})=R(d_{\rm v},d_{\rm h})=\frac{\mathbb{E}\left[w(l^{\rm uav}_{i})w(l^{\rm uav}_{j})\right]}{\sigma_{w}^{2}},$$ (10) where $\sigma_{w}^{2}$ is the variance of shadowing. Once again, the proposed correlation is a function of both the vertical distance and the horizontal distance.
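For concreteness, one simple candidate form for $R(d_{\rm v},d_{\rm h})$ is a separable exponential decay in the two distances. Both the functional form and the decorrelation distances below are purely illustrative assumptions on our part; the paper extracts the actual correlation behaviour from flight measurements:

```python
import math


def spatial_correlation(d_v, d_h, dc_h=100.0, dc_v=30.0):
    """Illustrative separable-exponential sketch of R(d_v, d_h) in Eq. (10).

    dc_h and dc_v are assumed horizontal/vertical decorrelation distances
    in metres (placeholder values, not fitted from the paper's data)."""
    return math.exp(-d_h / dc_h) * math.exp(-d_v / dc_v)
```

A model of this shape reproduces the two qualitative properties the text relies on: perfect correlation at zero separation, and faster decorrelation in whichever direction has the shorter decorrelation distance.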
II-D Antenna Radiation Model
The antenna gain effect of a transmitter and a receiver in the received signal is captured in the path loss model in (II-B), using $\mathsf{G}_{\rm bs}(\phi,\theta)$, $\mathsf{G}_{\rm uav}(\phi,\theta)$. In typical terrestrial communications, the antenna gain is simply modeled by a constant gain. This is due to the fact that a dipole antenna is usually characterized by an omni-directional radiation pattern in the azimuth angle domain, or sectored directional antennas make the antenna pattern mostly uniform in the azimuth angle domain. However, air-to-ground communications require considering the variation of the antenna gain in the elevation angle domain. The antenna pattern in the elevation domain is typically far from uniform, and therefore we should consider the elevation angle-dependent radiation pattern in modeling the antenna gain.
III 3D Radio Map Interpolation using Kriging
In this section, we introduce an efficient radio map interpolation technique using Kriging [13]. This method utilizes measurement data obtained from sparsely deployed spectrum sensors within an RDZ. The interpolation process allows us to estimate signal values at unsampled locations based on the available measurements. We first introduce how to calculate a semi-variogram, and subsequently, introduce our Kriging based interpolation approach for 3D RDZ scenarios. Different from the existing Kriging techniques in the literature, we consider the 3D geometry in spatial correlation with a portable aerial sensor, which enables us to interpolate the radio map in a 3D volume.
III-A Semi-variogram
In geostatistics, the semi-variogram represents the degree of spatial dependency between different locations, and it is utilized in Kriging interpolation.
The semi-variogram between a UAV’s locations $l^{\rm uav}_{i}$, $l^{\rm uav}_{j}$ is defined as $$\displaystyle\gamma(l^{\rm uav}_{i},l^{\rm uav}_{j})=\frac{1}{2}\text{var}\left(r(l^{\rm uav}_{i})-r(l^{\rm uav}_{j})\right).$$ (11) If the covariance function of a stationary process exists, we can obtain the semi-variogram from the spatial correlation in (10) as follows for our considered 3D RDZ scenario [22]: $$\displaystyle\gamma(l^{\rm uav}_{i},l^{\rm uav}_{j})$$ $$\displaystyle=\frac{\sigma_{w}^{2}}{2}\left(R(l^{\rm uav}_{i},l^{\rm uav}_{i})+R(l^{\rm uav}_{j},l^{\rm uav}_{j})-2R(l^{\rm uav}_{i},l^{\rm uav}_{j})\right)$$ $$\displaystyle=\sigma_{w}^{2}\left(1-R(l^{\rm uav}_{i},l^{\rm uav}_{j})\right)=\sigma_{w}^{2}\left(1-R(d_{\rm v},d_{\rm h})\right),$$ (12) where $\sigma_{w}^{2}$ captures the variance of the shadowing term $w$ in (9) as defined earlier, and $R(l^{\rm uav}_{i},l^{\rm uav}_{i})$ is as defined in (10). We assume that $\sigma_{w}^{2}$ is constant at a given set of locations while deriving (III-A).
III-B Kriging Interpolation
Ordinary Kriging is the optimal prediction method under squared-error loss from observed data at known spatial locations, in the sense that the error of the spatial prediction at an unknown location is minimized [22]. It interpolates the signal strength at arbitrary locations using a linear combination of the signal strengths at nearby locations. The ordinary Kriging problem can be formulated as follows [13]: $$\displaystyle\min_{\mu_{1},\dots,\mu_{M}}$$ $$\displaystyle\quad\mathbb{E}\left[\left(\hat{r}(\mathbf{l}^{\rm uav}_{0})-r(\mathbf{l}^{\rm uav}_{0})\right)^{2}\right],$$ (13) s.t.
$$\displaystyle\quad\hat{r}(\mathbf{l}^{\rm uav}_{0})=\sum_{i=1}^{M}\mu_{i}r(\mathbf{l}^{\rm uav}_{i}),$$ (14a) $$\displaystyle\quad\sum_{i=1}^{M}\mu_{i}=1,$$ (15a) where $l^{\rm uav}_{0}$ is the location at which to predict the unknown value, $\mu_{i}~{}(i=1,\cdots,M)$ are weighting parameters, and $M$ indicates the number of nearby measured samples to use. The above problem can be solved by the following steps [13]. First, we convert the original problem to an equivalent Lagrangian expression: $$\displaystyle\min_{\mu_{1},\dots,\mu_{M}}$$ $$\displaystyle\mathbb{E}\left[\left(r(l^{\rm uav}_{0})-\sum_{i=1}^{M}\mu_{i}r(l^{\rm uav}_{i})\right)^{2}\right]-\kappa\left(\sum_{i=1}^{M}\mu_{i}-1\right),$$ (16) where $\kappa$ denotes the Lagrange multiplier. After a few mathematical steps, the objective function in (16) can be reformulated as $$\displaystyle\sigma^{2}_{w}+2\sum_{i=1}^{M}\mu_{i}\gamma(l^{\rm uav}_{0},l^{\rm uav}_{i})-\sum_{i=1}^{M}\sum_{j=1}^{M}\mu_{i}\mu_{j}\gamma(l^{\rm uav}_{i},l^{\rm uav}_{j})$$ $$\displaystyle-\kappa\left(\sum_{i=1}^{M}\mu_{i}-1\right),$$ (17) where $\gamma(l^{\rm uav}_{i},l^{\rm uav}_{j})$ is as defined in (11).
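Setting the derivatives of this Lagrangian to zero reduces the minimization to a small linear system in the weights, which can be solved directly. The following NumPy sketch (the function name and the plug-in semi-variogram callable are our own; the last row and column of the augmented matrix enforce the unit-sum constraint on the weights) illustrates the ordinary Kriging prediction:

```python
import numpy as np


def ordinary_kriging(locs, values, target, semivariogram):
    """Ordinary Kriging predictor at `target` from samples at `locs`.

    `semivariogram(a, b)` returns gamma for a pair of 3D locations as in
    Eq. (11). Returns the prediction and the weight vector mu."""
    m = len(locs)
    a = np.ones((m + 1, m + 1))
    a[m, m] = 0.0
    for i in range(m):
        for j in range(m):
            a[i, j] = semivariogram(locs[i], locs[j])
    b = np.ones(m + 1)
    b[:m] = [semivariogram(target, locs[i]) for i in range(m)]
    sol = np.linalg.solve(a, b)   # [mu_1, ..., mu_M, kappa']
    mu = sol[:m]
    return float(mu @ np.asarray(values)), mu
```

Because the semi-variogram vanishes at zero separation, the predictor is an exact interpolator: querying it at a sampled location returns that sample's value, and the weights always sum to one.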
Finally, we can find the optimal solution that minimizes the objective function by setting the first derivative of (III-B) with respect to $\mu_{1},\dots,\mu_{M}$ to zero, which yields $$\displaystyle\sum_{j=1}^{M}\mu_{j}\gamma(l^{\rm uav}_{i},l^{\rm uav}_{j})-\gamma(l^{\rm uav}_{0},l^{\rm uav}_{i})+\kappa^{\prime}=0.$$ (18) We can also express (18) as a linear matrix equation: $$\displaystyle\left[\begin{array}[]{c c c c}\gamma(l^{\rm uav}_{1},l^{\rm uav}_{1})&\cdots&\gamma(l^{\rm uav}_{1},l^{\rm uav}_{M})&1\\ \gamma(l^{\rm uav}_{2},l^{\rm uav}_{1})&\cdots&\gamma(l^{\rm uav}_{2},l^{\rm uav}_{M})&1\\ \vdots&\vdots&\vdots&\vdots\\ \gamma(l^{\rm uav}_{M},l^{\rm uav}_{1})&\cdots&\gamma(l^{\rm uav}_{M},l^{\rm uav}_{M})&1\\ 1&\cdots&1&0\\ \end{array}\right]\left[\begin{array}[]{c}\mu_{1}\\ \mu_{2}\\ \vdots\\ \mu_{M}\\ \kappa^{\prime}\end{array}\right]$$ (29) $$\displaystyle=\left[\begin{array}[]{c}\gamma(l^{\rm uav}_{0},l^{\rm uav}_{1})\\ \gamma(l^{\rm uav}_{0},l^{\rm uav}_{2})\\ \vdots\\ \gamma(l^{\rm uav}_{0},l^{\rm uav}_{M})\\ 1\end{array}\right].$$ (35) Then, we can easily obtain the optimal $\mu_{1}^{\star},\dots,\mu_{M}^{\star}$ from (29) and interpolate the received signal power at the unknown location $l^{\rm uav}_{0}$ by $$\displaystyle\hat{r}(l^{\rm uav}_{0})=\sum_{i=1}^{M}\mu_{i}^{\star}r(l^{\rm uav}_{i}).$$ (36) Note that accurate characterization of the 3D semi-variogram in (11) is critical for the interpolation in (36). The next section describes our measurements that will be used to characterize the 3D semi-variogram.
IV Measurement Campaign Overview
In this section, we describe the details of our radio propagation measurements. We present our measurement setup, define the UAV trajectory used, and describe our approach for characterizing antenna effects.
IV-A Measurement Setup The measurement campaign was conducted at the Lake Wheeler Road Field Labs (LWRFL) site in Raleigh, NC, USA, which is one of the two sites in the NSF Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW). The experimental area, depicted in Fig. 3a, can be classified as an open rural environment, ensuring LoS conditions between the UAV and the base station (BS) throughout the entire duration of the experiments. Fig. 3b and Fig. 3c present photos of the BS tower and the drone used during the measurement campaign. The BS tower stands at a height of 10 meters and is equipped with a single dipole transmit antenna. The drone is equipped with a vertically oriented single dipole receiver antenna and a GPS receiver to accurately track its position. To facilitate the measurements, the srsRAN open-source software-defined radio software was utilized to implement an LTE evolved NodeB (eNB) at the BS tower, as shown in Fig. 3b. The eNB continuously transmitted common reference symbols (CRSs) during the measurement campaign. The drone collects raw I/Q data samples using a software-defined radio (SDR) attached to it. Specifically, the USRP B205mini from National Instruments (NI) is utilized as the SDR device, both at the BS tower and on the UAV. For post-processing the raw I/Q data, we employ Matlab’s LTE toolbox. Within this toolbox, we calculate the Reference Signal Received Power (RSRP) for each location of the UAV. To ensure efficient processing and analysis, we collect 20 ms segments of data out of every 100 ms. Within each 20 ms segment, we extract a 10 ms duration for subsequent post-processing. Throughout the paper, the terms “received signal” and “RSRP” are used interchangeably to refer to the measured signal strength. The major specifications of the transmitter and the receiver are listed in Table I. 
IV-B UAV Trajectory We conduct the experiments multiple times by changing the altitude (height) of the UAV from 30 m to 110 m in increments of 20 m. In each flight, the UAV flies an identical predefined trajectory at a different fixed height. In particular, the UAV flies in a zig-zag pattern through the experiment site, between the south and north waypoints, and it eventually flies back to the starting point. The top view (at $h=110$ m) and the 3D view of the UAV trajectories along with the measured RSRPs are illustrated in Fig. 4 for flight trajectories at $30$ m, $50$ m, $70$ m, $90$ m, and $110$ m. IV-C Antenna Radiation Pattern Characterization The dipole antenna used in our experiments generally exhibits an omni-directional radiation pattern in the azimuth angle domain, but an oval-shaped radiation pattern in the elevation angle domain. The radiation pattern also varies with the carrier frequency. We obtained the antenna pattern specifications for the Rx dipole antenna (SA-1400-5900) from the vendor’s specification sheet, and it shows a typical donut-shaped dipole pattern that remains consistent across different carrier frequencies [23]. Specifically, in the specification sheet, the antenna patterns for the 1.4, 1.7, 2.4, 4.4, and 5.8 GHz frequencies are provided, and all of them have similar dipole patterns. Therefore, we adopted the 2.4 GHz antenna pattern from the specification sheet for our analysis. However, the Tx dipole antenna (RM-WB1-DN) exhibited different elevation angle domain patterns depending on the carrier frequency and had an asymmetric pattern that did not guarantee omni-directionality in the azimuth angle domain [24]. Furthermore, the specification sheet did not provide the radiation pattern for the specific carrier frequency (3.51 GHz) used in our experiments. 
To obtain the exact antenna radiation pattern for the 3.51 GHz frequency, we conducted separate measurements of the 3D antenna pattern using an anechoic chamber facility located at the Wireless Research Center (WRC), Wake Forest, NC. Fig. 5a shows a photo of the setup in the anechoic chamber during the measurement of the Tx antenna’s 3D pattern. Fig. 5b displays the output of the antenna measurement, visualizing the antenna pattern in 3D Cartesian coordinates. It can be observed that the antenna pattern is not purely omni-directional in the azimuth angle domain, and the directivity in the elevation angle domain is not straightforward. In contrast, Fig. 5c shows the elevation angle domain antenna pattern of the Rx antenna as provided in the specification sheet, where the antenna pattern is specified as omni-directional with uniform gain in the azimuth domain. Fig. 5d illustrates the combined antenna gain from the Tx and Rx antenna patterns from Fig. 5b and Fig. 5c, respectively, represented in the azimuth and elevation angle domain. For all UAV heights in our experiments, the LoS angles between the Tx tower and the UAV were within the angle space covered by the black rectangular area, while the ground reflection angles between the Tx tower and the UAV were covered by the red rectangular area, as illustrated in Fig. 2. This implies that the antenna pattern used for the analysis is limited to the angles within this space. V Air-to-ground Propagation Modeling and Analysis In this section, we review how we post-process the data to correct errors in the altitude reported by the UAV’s GPS. Subsequently, we model the measured RSRP using different 3D propagation models that take into account the two-ray multipath model and the 3D antenna pattern. V-A Post-measurement Correction of Altitude and RSRP During the measurements, we encountered calibration errors caused by limitations in the SDR hardware. 
Specifically, the Universal Software Radio Peripheral (USRP) mounted on the UAV exhibited a power level calibration error, resulting in a constant offset power throughout the experiment. To address this issue, we conducted a separate experiment to measure and determine the offset at the USRP, which was found to be 98 dB. Subsequently, we added this offset to the calculated RSRP values obtained from subsequent experiments, effectively compensating for the calibration offset. Additionally, the GPS receiver carried by the UAV exhibited an altitude mismatch. We observed an altitude drift of approximately 6 m after the UAV landed, when compared with the initial altitude of the UAV. To rectify this mismatch, we applied a linear compensation approach (see [25, Fig. 6]). This involved adjusting the altitude measurements such that the altitude at the end of the flight matched the altitude of the initial measurement. By applying this compensation, we aimed to ensure accurate altitude data throughout the experiment. V-B Antenna Radiation Pattern Effect in Path Loss Analysis In this subsection, we analyze the effect of antenna radiation patterns on the path loss fitting to the RSRP from the experiments. We consider three different antenna pattern setups for comparison: 1) the Tx and Rx 3D antenna patterns described in Section IV-C and Fig. 5; 2) the donut-shaped dipole antenna pattern, using the formula below for both Tx and Rx antennas; and 3) constant azimuth and elevation antenna gain for both Tx and Rx antennas. The dipole antenna pattern formula in the second case is given by [26] $$\displaystyle\mathsf{G}_{\rm bs}(\theta)$$ $$\displaystyle=\mathsf{G}_{\rm uav}(\theta)=\frac{\cos\left(\frac{\pi}{2}\cos\theta\right)}{\sin\theta}.$$ (37) Fig. 6 and Fig. 7 provide a comprehensive analysis of the RSRP fitting results using different antenna patterns and path loss models in (II-B), (8). In Fig. 
6, the RSRP curves for a UAV height of 70 m are presented, along with the fitting results obtained from the free space and two-ray path loss models with different antenna patterns. It is observed that the antenna pattern described in Section IV-C provides the best fit to the RSRP curves, while the dipole pattern in (37) results in the worst fit. Additionally, Fig. 6a highlights that the two-ray path loss model performs better than the free space path loss model in capturing the deep fading of RSRP. To further evaluate the performance, Fig. 7 presents the cumulative distribution function (CDF) of the RSRP for the 70 m height measurement, along with the fitting results obtained from the path loss models and different antenna patterns. The CDF of the two-ray path loss model with the antenna pattern in Section IV-C matches most closely with the CDF of the measured RSRP, indicating a better fit. Fig. 7b shows the fitting error, which is calculated by subtracting the measured RSRP from the fitted RSRP using the path loss models. It is observed that the fitting error is the smallest when using the two-ray path loss model with the antenna pattern in Section IV-C. Fig. 8 also evaluates the fitting error with different antenna patterns in the time, distance, and elevation domains. It is observed that the dipole antenna pattern has the largest fitting error at short and long distances. We also observe that the fitting error is relatively high at small elevation angles. This implies that the effect of scattering from the objects around the test site increases the variance of the error when the elevation angle is low. Overall, these results demonstrate that the choice of antenna pattern and path loss model significantly impacts the accuracy of RSRP fitting for air-to-ground communication links. The 3D antenna radiation pattern described in Section IV-C, combined with the two-ray path loss model, provides the best fit to the measured RSRP and minimizes the fitting error. 
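The ideal dipole pattern of (37), used at both ends of the link in the second setup, can be evaluated directly. Here $\theta$ is taken in radians, measured from the dipole axis, with $0<\theta<\pi$ (the pattern has a null along the axis itself):

```python
import math

# Ideal half-wave dipole gain of (37); valid for 0 < theta < pi.
def dipole_gain(theta):
    return math.cos(0.5 * math.pi * math.cos(theta)) / math.sin(theta)

# The second setup uses the same pattern at both ends, G_bs = G_uav,
# so the combined link gain is the product of the two.
def combined_gain(theta):
    return dipole_gain(theta) ** 2
```

The gain peaks at broadside ($\theta=\pi/2$) and falls off toward the axis, which is why this pattern fits poorly at the steep elevation angles occurring directly above or near the tower.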
V-C Path Loss Model Fitting with Measurement Fig. 9 illustrates the measured and fitted RSRP values as a function of 3D distance for different UAV heights ranging from 30 m to 110 m. We adopt the measured antenna patterns in Section IV-C. The fitted curves follow the measured RSRP values reasonably closely. It is worth noting that the two-ray path loss model performs better in capturing the fluctuation of signal strength due to the ground reflected path compared to the free-space path loss model, especially when the UAV height is low. In the logarithmic scale of the distance domain, the RSRP is expected to decrease linearly. However, in the short distance range, a concave curve can be observed. This phenomenon is a result of the elevation-dependent antenna gain and the dramatic change in the elevation angle at short distances and high UAV altitudes. The 3D antenna pattern considered in the path loss models effectively captures this effect, leading to more accurate RSRP fitting. Overall, the results in Fig. 9 highlight the importance of considering the elevation-dependent antenna gain and the 3D antenna pattern in accurately modeling and fitting RSRP measurements in air-to-ground communications. Fig. 10 shows the relative fitting error in the distance domain for all heights with different antenna patterns. The error for the dipole antenna pattern is higher than for the other antenna patterns, especially when the distance is around 100 m to 200 m, due to the antenna pattern mismatch. We also observe that the fitting error for the omnidirectional antenna pattern is higher than for the measured antenna pattern at large distances. Overall, the use of the measured antenna pattern results in the best fit for the measured data. V-D Analysis of Shadowing Components from Measurement After we derive the two-ray path loss model, we can extract the shadowing component by subtracting the path loss model from the measured RSRP using (9), as shown in Fig. 11. 
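For intuition about why the two-ray model captures the deep fades, a textbook two-ray ground-reflection gain can be sketched as below. This is not the paper's exact model in (8), which lies outside this excerpt and includes the measured antenna gains; the reflection coefficient, wavelength, and geometry here are illustrative assumptions:

```python
import cmath
import math

# Textbook two-ray ground-reflection link gain (a sketch, not the paper's (8)).
# h_bs, h_uav: antenna heights [m]; d_h: horizontal distance [m];
# lam: wavelength [m]; Gamma: ground reflection coefficient.
def two_ray_gain(d_h, h_bs, h_uav, lam, Gamma=-1.0):
    d_los = math.hypot(d_h, h_uav - h_bs)   # direct path length
    d_ref = math.hypot(d_h, h_uav + h_bs)   # ground-reflected path length
    k = 2.0 * math.pi / lam
    field = (cmath.exp(-1j * k * d_los) / d_los
             + Gamma * cmath.exp(-1j * k * d_ref) / d_ref)
    return (lam / (4.0 * math.pi)) ** 2 * abs(field) ** 2
```

With Gamma set to zero the expression collapses to the free-space gain, which is why the free-space model misses the fades: they come entirely from the phase difference between the two paths.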
The shadowing component is known to follow a Gaussian distribution, and the measured shadowing distributions for different UAV heights are compared to the fitted curves. It is observed that the measured shadowing distributions can be modeled using a Gaussian distribution, though there are slight deviations. In particular, the measured distributions exhibit asymmetry with a heavier left tail compared to the symmetric Gaussian distribution. To achieve a better fit, an alternative approach is to use a skewed Gaussian (normal) distribution, which allows for introducing a desired level of skewness to the distribution [27]. The probability density function (PDF) of the skewed Gaussian distribution can be expressed as $$\displaystyle f(x)=2\phi\left(\frac{x-\xi}{\omega}\right)\Phi\left(\alpha\left(\frac{x-\xi}{\omega}\right)\right),$$ (38) where $\phi(\cdot)$ and $\Phi(\cdot)$ denote the PDF and the CDF of the standard Gaussian distribution, respectively. The parameter $\alpha$ in (38) controls the skewness of the distribution: a positive real value gives right-skewness, while a negative real value introduces left-skewness. In addition, the mean and standard deviation of the shadowing, the left-skewed Gaussian parameter $\alpha$, and the normalized mean squared error (NMSE) of the model fittings for all heights are listed in Table II. Note that the optimal $\alpha$ is decided by minimizing the NMSE. It shows that the distributions as well as the variances at different heights are similar, so we can assume a stationary process in the spatial data. VI Numerical Results on 3D Signal Interpolation In this section, we first study the horizontal, vertical, and finally 3D correlation in the measured data. We use the 3D correlation to calculate the semi-variogram, which is subsequently used to analyze the 3D interpolation accuracy for various scenarios. 
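The skewed Gaussian PDF of (38) can be evaluated with the standard library alone. Note that the $1/\omega$ scale normalization, left implicit in (38), is included below so that the density integrates to one:

```python
import math

def norm_pdf(z):
    # Standard Gaussian PDF phi(z).
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    # Standard Gaussian CDF Phi(z) via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Skew-normal PDF of (38): xi is location, omega scale, alpha skewness.
# alpha < 0 yields the heavier left tail observed in the measurements.
def skew_gaussian_pdf(x, xi=0.0, omega=1.0, alpha=0.0):
    z = (x - xi) / omega
    return (2.0 / omega) * norm_pdf(z) * norm_cdf(alpha * z)
```

Setting alpha to zero recovers the ordinary Gaussian, since $\Phi(0)=1/2$ cancels the factor of two.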
VI-A Analysis of Correlation Function from Measurement VI-A1 Horizontal distance correlation In this subsection, we analyze the spatial correlation using the AERPAW datasets available at [1]. We obtain correlation functions between two different 3D locations by using measurements at different heights, and we use exponential and bi-exponential functions to model the correlations as discussed earlier. The mean and standard deviation values obtained by the statistical analysis in Section V-D and the measured RSRP values are utilized in calculating the correlations. We analyze the spatial correlation depending on the horizontal distance ($d_{\rm h}$) with a zero vertical distance ($d_{\rm v}$) by using the experiment dataset. Since our experiments fix the height of the drone for a specific flight, the vertical distance between samples in the same flight is zero. The analysis of the correlation by the horizontal distance is performed by the following steps:
i. Calculate the correlation among all samples in a flight, excluding the samples during the take-off and landing periods.
ii. Sort the correlations from step (i) according to the horizontal distance between the sample pairs, so that they are arranged in increasing order of horizontal distance.
iii. Average the correlations every 2 m: starting from the smallest horizontal distance, group the correlations within each 2 m interval and compute the average correlation per interval.
iv. Perform steps (i)-(iii) for each height (30 m, 50 m, 70 m, 90 m, 110 m), yielding correlations for each individual height.
v. Average the correlations for every distance over all heights, i.e., for each distance, compute the average of the step (iv) correlations across all heights. 
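Steps (i)-(iii) can be sketched as follows. The pairwise normalized correlation matches the form used in the text, and a single per-flight mean and standard deviation (as in Table II) are assumed for simplicity:

```python
import math
from collections import defaultdict

# Steps (i)-(iii): pairwise normalized correlations of shadowing
# samples w at 2D positions xy, grouped into 2 m horizontal-distance
# bins and averaged.  nu, sigma_w are the per-flight mean and
# standard deviation of the shadowing.
def binned_correlation(xy, w, nu, sigma_w, bin_m=2.0):
    sums, counts = defaultdict(float), defaultdict(int)
    for i in range(len(w)):
        for j in range(i + 1, len(w)):
            d = math.dist(xy[i], xy[j])
            r = (w[i] - nu) * (w[j] - nu) / (sigma_w ** 2)
            k = int(d // bin_m)          # index of the 2 m bin
            sums[k] += r
            counts[k] += 1
    return {k * bin_m: sums[k] / counts[k] for k in sorted(sums)}
```

Steps (iv) and (v) then amount to running this per flight and averaging the binned values across heights.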
The correlation between two samples $w_{i}$, $w_{j}$ is calculated by $$\displaystyle R_{i,j}=\frac{(w_{i}-\nu_{i})(w_{j}-\nu_{j})}{\sigma_{w,i}\sigma_{w,j}},$$ (39) where $\nu$ and $\sigma_{w}$ denote the mean and the standard deviation of the sample, which can be obtained from Table II. The obtained correlation function and fitted curves are shown in Fig. 12a. It is observed that the correlation decays rapidly as the horizontal distance increases. Although the correlation is generally modeled by an exponential function (also known as the Gudmundson model) [10], the bi-exponential model [28] fits our measurements better than the exponential model; it is given as $$\displaystyle R(d_{\rm h})=ae^{-b_{1}d_{\rm h}}+(1-a)e^{-b_{2}d_{\rm h}},$$ (40) where $b_{1}$, $b_{2}$ are fitting parameters. We also observe a correlation distance of 4.5 m, at which the correlation drops to 0.5. VI-A2 Vertical distance correlation We now calculate the vertical distance correlation with a zero horizontal distance from the measurements, the converse of the previous subsection. Since the trajectory of the UAV for flights at different heights is designed to be identical (see Fig. 4), we can obtain samples of the same 2D location (latitude, longitude) with different vertical distances. For example, to obtain samples with a 20 m vertical distance, we can use the datasets from the 30 m and 50 m UAV flights and pick two samples from any overlapping part of the trajectory (one from the 30 m height, the other from the 50 m height). The analysis of the correlation by the vertical distance is conducted by the following steps:
i. Choose two measurement datasets at different heights, such as the datasets from the 30 m and 50 m UAV flights.
ii. Remove data where the two trajectories are not fully overlapped, using a threshold of $d_{\rm h}>3$ m. This ensures that we have data points with the same location across the trajectories.
iii. 
Calculate the correlations between the two samples with the same location across the trajectories: compute the correlation coefficient for each pair of samples and average them. This gives the correlation for a specific vertical distance (e.g., 20 m) between the two heights.
iv. Repeat steps (i) to (iii) for pairs of measurements at different heights. For example, we can calculate correlations for the 50 m and 70 m flights, the 70 m and 90 m flights, and so on.
In step (ii), we exclude samples where the trajectories do not fully overlap, by checking the GPS readings. The correlations between different pairs of flights are listed in Table III. We also present the obtained correlation function from Table III and the fitted curve in Fig. 12b. It is observed that the correlation function based on the vertical distance fits best with the exponential model, which is expressed as $$\displaystyle R(d_{\rm v})=e^{-\frac{d_{\rm v}}{d_{\rm cor}}\text{ln}(2)},$$ (41) where the correlation distance is given by $d_{\rm cor}=11.24$ m. VI-A3 3D distance correlation To analyze the correlation when both the horizontal and the vertical distance are considered, we process the datasets obtained from flights at two different heights. By comparing the measurements from these flights, we can determine the correlation between two different 3D coordinate locations. The processing steps for obtaining the correlation with a 20 m vertical distance are as follows:
i. Choose a pair of measurement datasets whose height difference is 20 m. For example, select the dataset from the 30 m height flight and the dataset from the 50 m height flight.
ii. Calculate the correlation between a sample from one height (e.g., 30 m) and a sample from the other height (e.g., 50 m) across all the samples in the datasets.
iii. Sort the correlations from step (ii) by the horizontal distance and average the correlations for every 2 m of horizontal distance.
iv. 
Repeat steps (i) to (iii) for different pairs of measurement datasets at different heights. For example, we can repeat the analysis with the dataset from the 50 m height flight and the dataset from the 70 m height flight.
By performing this iterative analysis for different pairs of measurement datasets with varying height differences, we obtain correlation values that capture the relationship between joint horizontal and vertical distances. This analysis helps in understanding how the signal strength correlation varies with changes in both horizontal and vertical distances, providing insights into the spatial characteristics of the wireless channel. The 3D distance correlation results with 20 m and 40 m vertical distances are shown in Fig. 13. We model and fit the correlation of joint horizontal and vertical distance by combining the correlation functions of the horizontal and the vertical distance in (40), (41). The proposed correlation model in 3D space is expressed as $$\displaystyle R(d_{\rm v},d_{\rm h})=e^{-\frac{d_{\rm v}}{d_{\rm cor}}\text{ln}(2)}\left(ae^{-b_{1}d_{\rm h}}+(1-a)e^{-b_{2}d_{\rm h}}\right),$$ (42) where $a=0.3$, and $b_{1}$, $b_{2}$ are tuning parameters. Note that when $d_{\rm h}=0$, the model reduces to (41), while when $d_{\rm v}=0$, it reduces to (40). The fitted values of $b_{1}$, $b_{2}$ depending on the vertical distance ($d_{\rm v}$) are listed in Table IV. VI-B Analysis of Semi-variogram In Section III-A, we introduced the concept of the semi-variogram in (11) and derived its relation to the correlation function in (III-A). We analyze the semi-variogram from the measurement results in Fig. 14 with respect to both the horizontal and the vertical distance. The measurement results are directly obtained from the definition of the semi-variogram in (11), while the analysis results come from the correlation function in (42), which is then used in (III-A). 
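The joint model (42) is straightforward to evaluate. Below, $d_{\rm cor}=11.24$ m and $a=0.3$ follow the text, while the $b_{1}$, $b_{2}$ defaults are placeholders for the height-dependent fitted values of Table IV:

```python
import math

# Joint 3D shadowing correlation model of (42).
def correlation_3d(d_v, d_h, a=0.3, b1=0.5, b2=0.05, d_cor=11.24):
    vertical = math.exp(-(d_v / d_cor) * math.log(2.0))                     # (41)
    horizontal = a * math.exp(-b1 * d_h) + (1.0 - a) * math.exp(-b2 * d_h)  # (40)
    return vertical * horizontal
```

By construction the vertical factor halves with every $d_{\rm cor}$ metres of separation, so the correlation at $d_{\rm v}=11.24$ m and $d_{\rm h}=0$ is exactly 0.5.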
The measurements and our analysis from (III-A) closely overlap for both distance conditions. VI-C Performance Evaluation with Kriging In this subsection, we evaluate the 3D interpolation performance of the Kriging technique described in Section III-B using the measurement dataset. We adopt a cross-validation-based root mean square error (RMSE) evaluation [14], which compares the predicted RSRP with the measured RSRP to observe the error. In particular, the RMSE for performance evaluation can be expressed as $$\displaystyle\mathsf{RMSE}=\sqrt{\frac{1}{N_{0}}\sum^{N_{0}}_{i=1}\left(\hat{r}(l^{\rm uav}_{0,i})-r(l^{\rm uav}_{0,i})\right)^{2}},$$ (43) where $N_{0}$ denotes the number of samples for prediction. In our evaluation, the 30 m height measurement samples are predicted using the 30 m, 50 m, and 70 m height measurement datasets. The cross-validation-based evaluation is conducted by the following steps:
i. Randomly select $M$ samples from the measurement dataset to use for the prediction. These samples serve as the training set.
ii. Randomly select $N_{0}$ samples from the 30 m measurement dataset as the validation set for cross-validation.
iii. Use the Kriging technique described in Section III-B to predict the RSRP values for the $N_{0}$ validation samples based on the $M$ training samples.
iv. Calculate the RMSE between the predicted RSRP values and the actual measured RSRP values for the $N_{0}$ validation samples, using (43).
v. Repeat steps (i)-(iv) for a large number of iterations (e.g., 10,000) and take the median of the resulting RMSE values. The median represents the overall prediction performance of the Kriging technique.
In step (ii), after randomly selecting the $M$ samples for prediction, those samples are excluded from the dataset chosen for cross-validation. This ensures that the samples used for prediction are not used for validation. 
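The loop of steps (i)-(v) can be sketched as follows. Here `predict` is a placeholder for the Kriging predictor of Section III-B (any callable mapping a location and a training set to an RSRP estimate), and samples are assumed to be (location, RSRP) pairs:

```python
import math
import random
import statistics

def rmse(predicted, measured):
    """RMSE of (43) over the held-out validation samples."""
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured))
                     / len(measured))

def cross_validate(samples, predict, M=50, N0=20, iters=1000, seed=0):
    rng = random.Random(seed)
    errors = []
    for _ in range(iters):
        shuffled = rng.sample(samples, len(samples))
        train = shuffled[:M]             # step (i): M training samples
        valid = shuffled[M:M + N0]       # step (ii): disjoint validation set
        preds = [predict(loc, train) for loc, _ in valid]   # step (iii)
        errors.append(rmse(preds, [r for _, r in valid]))   # step (iv)
    return statistics.median(errors)     # step (v): median over iterations
```

Drawing the validation set from the remainder of the shuffle enforces the disjointness required in step (ii).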
In addition, when predicting a sample by Kriging in step (iii), only nearby samples within a certain distance threshold ($r_{0}$) of the target sample are considered; that is, the selection of neighboring samples is limited to those within the radius-$r_{0}$ circle around the target sample, and these nearby samples are used to predict its RSRP value. A snapshot of the randomly chosen $M$ samples from the 50 m height measurement and the $N_{0}$ samples from the 30 m height measurement is shown in Fig. 15. The figure depicts the radius circle $r_{0}$ within which nearby samples are used to predict the target sample. To provide a benchmark for comparison, we consider perfect path-loss-based 3D interpolation. In particular, we assume that the BS has perfect knowledge of the exact path loss and transmit power for all locations. This represents the ideal condition for prediction without utilizing spatial correlation. The RMSE of the perfect path loss estimation is equivalent to the standard deviation of the shadowing component from (9) and (43) as follows: $$\displaystyle\mathsf{RMSE}_{\rm ple}=\sqrt{\mathbb{E}\left[\left(\hat{r}-r\right)^{2}\right]}=\sqrt{\mathbb{E}\left[w^{2}\right]}=\sigma_{w}.$$ (44) In Fig. 16, the RMSE performance of Kriging using measurements at different UAV altitudes is presented. The results show that the performance of Kriging varies depending on the altitude of the measurements used for prediction. When utilizing the 30 m and 50 m height measurement data for prediction, Kriging outperforms the perfect path loss estimation. This indicates that Kriging can leverage the spatial correlation present in the highly correlated data to achieve better prediction accuracy. However, in the case of the 70 m height measurement, the perfect path loss estimation performs better than Kriging. This suggests that the correlation at a vertical distance of 60 m is too low for accurate prediction using Kriging. In Fig. 
16a, it is observed that the RMSE generally decreases as the number of samples used for prediction ($M$) increases. However, when $M$ exceeds 250, the performance of Kriging with an $r_{0}$ value of 200 m is the worst among the three different $r_{0}$ values considered. This indicates that while a larger number of samples can improve performance, adding low-correlated samples can degrade the prediction accuracy. It is important to strike a balance and choose an appropriate number of samples ($M$) and radius ($r_{0}$). Furthermore, in Fig. 16b, the RMSE initially decreases and then increases for $r_{0}$ values of 70 m, 100 m, and 200 m. This suggests that if the correlation between samples is not sufficiently high, increasing the number of samples may not necessarily lead to improved performance. It highlights the importance of considering both the number of samples and the correlation when determining the optimal parameters for Kriging prediction. In conclusion, the choice of the number of samples ($M$) and radius ($r_{0}$) is crucial for achieving accurate predictions using Kriging. Utilizing a larger number of highly correlated samples can improve performance, while including low-correlated samples or selecting an inappropriate radius can degrade the prediction accuracy. VI-D 3D Interpolation by Kriging Fig. 17 displays the generated 3D radio map of RSRP using the Kriging interpolation technique with the available measurement data at 30 m and 50 m heights. The map provides a visual representation of the RSRP distribution in 3D space. The dome shape of the 3D radio map provides valuable insights for monitoring the signal leakage in the three-dimensional volume of the RDZ. By examining the map, one can observe the spatial variations and signal strength levels within the monitored area. 
The dense 3D radio map obtained through Kriging interpolation enables efficient analysis and decision-making related to signal monitoring, interference management, and overall RF planning within the monitored area. In particular, a spectrum monitoring engine (SME) can estimate the received signal strength from each signal served within the RDZ on the surface of the dome. Subsequently, interference to sensitive receivers outside of the RDZ can be extrapolated, and if it exceeds a threshold, the interfering signal services in the RDZ can take action (e.g., rescheduling to a different band or reducing power). VII Conclusion In this paper, we introduce the RDZ concept, which efficiently manages and controls spectrum usage by monitoring the signal occupancy and leakage in a real-time fashion. To monitor the signal leakage from an area, we need to develop a radio map of signal power surrounding the area, which is more challenging when considering a 3D space. We propose a signal power interpolation method in the 3D volume that uses Kriging. The correlation model between two different 3D locations is designed, and the semi-variogram is defined and analyzed. In addition, we study the proposed 3D Kriging interpolation using an experimental dataset provided by the NSF AERPAW platform. We fit path loss and shadowing models to the RSRP measurements and study the performance of the Kriging interpolation technique for various scenarios. Our results show that significant gains are possible in received power estimation accuracy by utilizing the 3D correlation of the data when compared with using only a path-loss-based power estimation. References [1] S. J. Maeng, O. Ozdemir, I. Guvenc, M. Sichitiu, and R. Dutta, “LTE I/Q Measurement by AERPAW Platform for Air-to-Ground Propagation Modeling,” IEEE Dataport, 2022. [Online]. Available: https://dx.doi.org/10.21227/0p43-0d72 [2] M. Zheleva, C. R. Anderson, M. Aksoy, J. T. Johnson, H. Affinnih, and C. G. 
DePree, “Radio Dynamic Zones: Motivations, challenges, and opportunities to catalyze spectrum coexistence,” IEEE Commun. Mag., 2023. [3] S. J. Maeng, I. Güvenç, M. Sichitiu, B. Floyd, R. Dutta, T. Zajkowski, O. Ozdemir, and M. Mushi, “National radio dynamic zone concept with autonomous aerial and ground spectrum sensors,” in IEEE Int. Conf. Commun. Workshops (ICC Workshops), Seoul, Republic of Korea, May 2022, pp. 687–692. [4] S. Tschimben, A. Aradhya, G. Weihe, M. Lofquist, A. Pollak, W. Farah, D. DeBoer, and K. Gifford, “Testbed for Radio Astronomy Interference Characterization and Spectrum Sharing Research,” in IEEE Aerosp. Conf., Big Sky, MT, USA, Mar. 2023, pp. 1–16. [5] “Spectrum innovation initiative: National radio dynamic zones (sii-nrdz),” NSF Program Solicitation, June 2022. [Online]. Available: https://www.nsf.gov/pubs/2022/nsf22579/nsf22579.htm [6] H. Zou, M. Jin, H. Jiang, L. Xie, and C. J. Spanos, “Winips: Wifi-based non-intrusive indoor positioning system with online radio map construction and adaptation,” IEEE Trans. Wireless Commun., vol. 16, no. 12, pp. 8118–8130, Dec. 2017. [7] H. B. Yilmaz, T. Tugcu, F. Alagöz, and S. Bayhan, “Radio environment map as enabler for practical cognitive radio networks,” IEEE Commun. Mag., vol. 51, no. 12, pp. 162–169, Dec. 2013. [8] F. Graziosi and F. Santucci, “A general correlation model for shadow fading in mobile radio systems,” IEEE Commun. Lett., vol. 6, no. 3, pp. 102–104, Mar. 2002. [9] S. S. Szyszkowicz, H. Yanikomeroglu, and J. S. Thompson, “On the feasibility of wireless shadowing correlation models,” IEEE Trans. Veh. Technol., vol. 59, no. 9, pp. 4222–4236, Nov. 2010. [10] M. Gudmundson, “Correlation model for shadow fading in mobile radio systems,” Electronics Letters, vol. 27, no. 23, pp. 2145–2146, Nov. 1991. [11] R. He, Z. Zhong, B. Ai, and C. Oestges, “Shadow fading correlation in high-speed railway environments,” IEEE Trans. Veh. Technol., vol. 64, no. 7, pp. 2762–2772, Jul. 2015. [12] H. Braham, S. 
B. Jemaa, G. Fort, E. Moulines, and B. Sayrac, “Fixed rank kriging for cellular coverage analysis,” IEEE Trans. Veh. Technol., vol. 66, no. 5, pp. 4212–4222, May 2017. [13] K. Sato and T. Fujii, “Kriging-based interference power constraint: Integrated design of the radio environment map and transmission power,” IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 1, pp. 13–25, Mar. 2017. [14] K. Sato, K. Suto, K. Inage, K. Adachi, and T. Fujii, “Space-frequency-interpolated radio map,” IEEE Trans. Veh. Technol., vol. 70, no. 1, pp. 714–725, Jan. 2021. [15] A. Al-Hourani and K. Gomez, “Modeling Cellular-to-UAV Path-Loss for Suburban Environments,” IEEE Wireless Commun. Lett., vol. 7, no. 1, pp. 82–85, Feb. 2018. [16] J. Holis and P. Pechac, “Elevation dependent shadowing model for mobile communications via high altitude platforms in built-up areas,” IEEE Trans. Antennas Propag., vol. 56, no. 4, pp. 1078–1084, Apr. 2008. [17] M. Simunek, F. P. Fontán, and P. Pechac, “The UAV Low Elevation Propagation Channel in Urban Areas: Statistical Analysis and Time-Series Generator,” IEEE Trans. Antennas Propag., vol. 61, no. 7, pp. 3850–3858, Jul. 2013. [18] S. J. Maeng, İ. Güvenç, M. L. Sichitiu, and O. Ozdemir, “Out-of-zone signal leakage sensing in radio dynamic zones,” in Proc. IEEE Int. Conf. Commun. (ICC), Seoul, Korea, May 2022. [19] T. Kidd, “National radio quiet and dynamic zones,” CHIPS – The Department of Navy’s Information Technology Magazine, Apr.-June 2018. [Online]. Available: https://www.doncio.navy.mil/CHIPS/ArticleDetails.aspx?ID=10299 [20] N. R. Chopde and M. Nichat, “Landmark based shortest path detection by using A* and Haversine formula,” International Journal of Innovative Research in Computer and Communication Engineering, vol. 1, no. 2, pp. 298–302, Apr. 2013. [21] W. C. Jakes and D. C. Cox, Microwave mobile communications.   Wiley-IEEE press, 1994. [22] N. Cressie, Statistics for spatial data.   John Wiley & Sons, 2015. 
[23] Octane Wireless, “SA-1400-5900 Data Sheet.” [Online]. Available: https://www.octanewireless.com/product/sa-1400-5900-tri-band-stub-antenna/ [24] Mobile Mark, Inc, “RM-WB1 Series Radiation Pattern.” [Online]. Available: https://www.mobilemark.com/product/rm-wb1/ [25] S. J. Maeng, O. Ozdemir, İ. Güvenç, M. Sichitiu, R. Dutta, and M. Mushi, “AERIQ: SDR-Based LTE I/Q Measurement and Analysis Framework for Air-to-Ground Propagation Modeling,” in IEEE Aerosp. Conf., Big Sky, MT, USA, Mar. 2023, pp. 1–11. [26] S. J. Maeng, M. A. Deshmukh, I. Güvenç, A. Bhuyan, and H. Dai, “Interference analysis and mitigation for aerial IoT considering 3D antenna patterns,” IEEE Trans. Veh. Technol., vol. 70, no. 1, pp. 490–503, Jan. 2021. [27] C. K. Sung, S. Li, M. Hedley, N. Nikolic, and W. Ni, “Skew log-normal channel model for indoor cooperative localization,” in Proc. IEEE Int. Symp. Pers., Indoor, Mobile Radio Commun. (PIMRC), Montreal, Canada, Oct. 2017, pp. 1–5. [28] L. Liu, C. Tao, D. W. Matolak, T. Zhou, and H. Chen, “Investigation of shadowing effects in typical propagation scenarios for high-speed railway at 2350 mhz,” International Journal of Antennas and Propagation, vol. 2016, Oct. 2016.
Physics-based Motion Planning: Evaluation Criteria and Benchmarking Muhayyuddin, Aliakbar Akbari and Jan Rosell, Institute of Industrial and Control Engineering, Universitat Politècnica de Catalunya, Barcelona, Spain, e-mail: {muhayyuddin.gillani,aliakbar.akbari,jan.rosell}@upc.edu This work was partially supported by the Spanish Government through the projects DPI2011-22471, DPI2013-40882-P and DPI2014-57757-R. Muhayyuddin is supported by the Generalitat de Catalunya through the grant FI-DGR 2014. Abstract Motion planning has evolved from coping with purely geometric problems to physics-based ones that incorporate the kinodynamic and physical constraints imposed by the robot and the physical world. Therefore, the criteria for evaluating physics-based motion planners go beyond the computational complexity (e.g. in terms of planning time) usually used as a measure for evaluating geometric planners, in order to also consider the quality of the solution in terms of dynamical parameters. This study proposes evaluation criteria and analyzes the performance of several kinodynamic planners, which are at the core of physics-based motion planning, using different scenarios with fixed and manipulatable objects. RRT, EST, KPIECE and SyCLoP are used for the benchmarking. The results show that KPIECE computes the time-optimal solution with the highest success rate, whereas SyCLoP computes the most power-optimal solution among the planners used. 1 Introduction Robotic manipulation requires precise motion planning and control to execute tasks, whether for industrial robots, mobile manipulators, or humanoid robots.
It is necessary to determine how to safely navigate the robot from the start to the goal state while satisfying the kinodynamic (geometric and differential) constraints, as well as the physics-based constraints imposed by possible contacts and by dynamic properties of the world such as gravity and friction [1, 2]. These issues significantly increase the computational complexity, because certain collision-free geometric paths may not be feasible in the presence of these constraints. Physics-based motion planning has therefore emerged as a new class of planning algorithms that considers the physics-based constraints along with the kinodynamic ones, i.e. it is an extension of kinodynamic motion planning [3] that also involves the purposeful manipulation of objects by considering the dynamical interaction between rigid bodies. This interaction is simulated based on the principles of Newtonian physics, and the simulation results are used for planning. The performance of a physics-based planner largely depends on the choice of the kinodynamic motion planner that is implicitly used for sampling the states and constructing the solution path. The state propagation is performed using physics engines, like ODE [4], that incorporate the kinodynamic and physics-based constraints. Motion planning in its simplest form (i.e. as a geometric problem) is PSPACE-complete [5]. The incorporation of kinodynamic constraints and physics-based properties makes it even more complex and computationally intensive, and for complex systems even the decidability of the physics-based planning problem is questionable [6]. Therefore, to make physics-based planning computationally tractable, it is crucial to use the most appropriate and computationally efficient kinodynamic planner. Previously proposed physics-based planning approaches have used different kinodynamic motion planners and physics engines.
A few studies have provided comparative analyses of the performance of kinodynamic motion planners within different physics-based planning frameworks. For instance, the physics-based planning algorithm proposed in [7], which used nondeterministic tactics and skills to reduce the search space, was evaluated (in terms of planning time and tree length) using two different kinodynamic motion planners: Behavioral Kinodynamic Rapidly-exploring Random Trees (BK-RRT [8]) and Balanced Growth Trees (BGT [7]). The physics engine PhysX [9] was used as state propagator. Another physics-based planning approach [10] integrated sampling-based motion planning with discrete search using a workspace decomposition, in order to map the planning problem onto a graph-search problem. This work evaluated the performance (in terms of planning time) using RRT, the Synergistic Combination of Layers of Planning (SyCLoP [11]), and a modified version of SyCLoP as kinodynamic motion planners. The propagation step was performed using the Bullet [12] physics engine. A third approach proposed a physics-based motion planning framework that used manipulation knowledge coded as an ontology [13]. This approach performed a reasoning process over the knowledge to improve computational efficiency, and showed a significant improvement in performance (in terms of planning time and generated trajectory) compared to simple physics-based planning. Two kinodynamic motion planners were used: Kinodynamic Planning by Interior-Exterior Cell Exploration (KPIECE [14]) and RRT. The Open Dynamics Engine (ODE) was used as state propagator. All the studies stated above basically measured the time complexity of different kinodynamic motion planners. Since physics-based planning simultaneously evaluates the kinodynamic and the physics-based constraints, an evaluation based on planning time alone may not be sufficient.
New evaluation criteria are required because a number of other dynamical parameters (such as power consumed, action, and smoothness) may significantly influence the planning decisions, as in the task planning approaches proposed in [15, 16] that use a physics-based reasoning process to determine the feasibility of a plan by evaluating the cost in terms of power consumed and action. With this in mind, the present study proposes new benchmarking criteria for physics-based planning that incorporate the dynamical properties of the system (to determine the quality of the solution) as well as the computational complexity. They are used to compare different kinodynamic motion planners (RRT, EST, KPIECE and SyCLoP) within the physics-based planning framework presented in [13], which is based on a reasoning process over ontological manipulation knowledge. 2 Kinodynamic Motion Planning Motion planning problems deal with computing collision-free trajectories from a given start to a goal state in the configuration space ($\cal C$), the set of all possible configurations of the robot [17]. The geometrically accessible region of $\cal C$ is called $\cal C_{\textrm{free}}$ and the obstacle region is known as $\cal C_{\textrm{obs}}$. Sampling-based algorithms such as Probabilistic RoadMaps [18] and Rapidly-exploring Random Trees [19] have shown significant performance when planning in high-dimensional configuration spaces. These algorithms connect collision-free configurations with either a graph or a tree to capture the connectivity of $\cal C_{\textrm{free}}$, and find a path along these data structures to connect the initial and goal configurations. Kinodynamic motion planning refers to problems in which the motion of the robot must simultaneously satisfy kinematic constraints (such as joint limits and obstacle avoidance) as well as dynamic constraints (such as bounds on the applied forces, velocities and accelerations [20]).
Tree-based planners are best suited to take kinodynamic constraints into account [1], since the dynamic equations are used to determine the resulting motions used to grow the tree. The general approach of sampling-based kinodynamic planners is to search a higher-dimensional state space $S$ that captures the system’s dynamics. The state of a robot for a configuration $q\in$ $\cal C$ is defined as $s=(q,\dot{q})$. To determine a solution, the planning is performed in the state space, in a similar way as in $\cal C$. This section briefly reviews the most commonly used kinodynamic motion planners, which can be categorized into three classes: a) RRT and EST belong to the class of planning algorithms that sample states; b) KPIECE belongs to the class that samples motions or path segments; c) SyCLoP is a hybrid planner that splits the planning problem into a discrete and a continuous layer. 2.0.1 Rapidly Exploring Random Trees (RRT): This is a sampling-based kinodynamic motion planning algorithm [21] that can efficiently explore high-dimensional configuration spaces. RRT-based algorithms randomly grow a tree rooted at the start state ($q_{start}\in$ $\cal C$) until a sample is found at the goal state ($q_{goal}\in$ $\cal C$). The growth of the tree is based on two steps, selection and propagation. In the first step a sample is randomly selected ($q_{rand}$), and its nearest node in the tree is searched ($q_{near}$). The second step applies, from $q_{near}$, random controls (that satisfy the constraints) during a certain amount of time. Among the configurations reached, the one nearest to $q_{rand}$ is selected as $q_{new}$, and an edge from $q_{near}$ to $q_{new}$ is added to the tree. Using this procedure, all the paths on the tree are feasible, i.e. by construction they satisfy all the kinodynamic constraints.
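The selection/propagation loop just described can be sketched in a few lines of Python. This is a minimal illustration for a 2D point robot with bounded velocity controls, not the implementation used in the paper; the 0.05 goal bias matches the planning parameter used later in the benchmarks, and obstacle checking is omitted for brevity.

```python
import math
import random

def rrt_kinodynamic(start, goal, *, bounds=(0.0, 10.0), step=0.1,
                    n_controls=5, goal_bias=0.05, goal_radius=0.3,
                    max_iters=3000, seed=0):
    """Minimal kinodynamic-RRT sketch for a 2D point robot.

    Controls are bounded random velocities applied for a fixed duration
    `step`; collision checking is omitted for brevity.
    """
    rng = random.Random(seed)
    nodes = [start]            # tree vertices (states)
    parent = {0: None}         # edges: child index -> parent index

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    for _ in range(max_iters):
        # Selection: pick a random sample (with a small goal bias) and
        # find its nearest node in the tree.
        q_rand = (goal if rng.random() < goal_bias
                  else (rng.uniform(*bounds), rng.uniform(*bounds)))
        i_near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q_rand))
        q_near = nodes[i_near]

        # Propagation: apply several random (bounded) controls from
        # q_near; keep the reached state closest to q_rand.
        reached = []
        for _ in range(n_controls):
            u = (rng.uniform(-1, 1), rng.uniform(-1, 1))   # bounded control
            reached.append((q_near[0] + u[0] * step, q_near[1] + u[1] * step))
        q_new = min(reached, key=lambda q: dist(q, q_rand))

        parent[len(nodes)] = i_near
        nodes.append(q_new)

        if dist(q_new, goal) < goal_radius:
            # Reconstruct the feasible path back to the root.
            path, i = [], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return list(reversed(path))
    return None                # no solution within the iteration budget
```

Growing the tree with the dynamics in the loop is what distinguishes this from a purely geometric RRT: each edge from $q_{near}$ to $q_{new}$ is, by construction, a feasible motion.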
2.0.2 Expansive-Spaces Tree planner (EST): This approach constructs a tree-shaped roadmap $T$ in the state$\times$time space [22] [23]. The idea is to select a milestone of $T$ and from there apply randomly sampled controls for a certain amount of time. If the final state is in free space, it is added as a milestone of $T$. The selection of the milestone for expansion is done in such a way that the resulting tree is neither too dense nor too sparse. This kinodynamic planner works in three steps: milestone selection, control selection and endgame connection. In the first step, a milestone $m$ in $T$ is selected with probability inversely proportional to the number of neighboring milestones of $m$. In the second step, controls are randomly sampled and applied from the selected milestone $m$. Since, moving under kinodynamic constraints, it may not be possible to reach the goal state exactly, the endgame connection step defines a region around the goal such that any milestone within this region is considered a goal state. 2.0.3 Kinodynamic Motion Planning by Interior-Exterior Cell Exploration (KPIECE): This planner is particularly designed for complex dynamical systems. KPIECE grows a tree of motions by applying randomly sampled controls for a randomly sampled time duration from a tree node selected as follows. The state space is projected onto a lower-dimensional space that is partitioned into cells in order to estimate the coverage. As a result of this projection, each motion is part of a cell, with each cell classified as interior or exterior depending on whether its neighboring cells are occupied or not.
Then, the selection of the cell is performed based on an importance parameter computed from: 1) the coverage (cells that are less covered are preferred); 2) the selection count (cells that have been selected fewer times are preferred); 3) the neighbors (cells with fewer neighbors are preferred); 4) the selection time (recently selected cells are preferred); 5) the expansion (easily expanded cells are preferred over cells that expand slowly). The cell with maximum importance is selected. The process continues until the tree of motions reaches the goal region. 2.0.4 Synergistic Combination of Layers of Planning (SyCLoP): This is a meta-approach that considers motion planning as a search problem in a hybrid space (of a continuous and a discrete layer) for efficiently solving the problem under kinodynamic constraints. The continuous layer is represented by the state space (explored by a sampling-based motion planner like RRT or EST) and the discrete layer is determined by a decomposition of the workspace. The decomposition is used to compute a guide called the lead that steers the motion planner towards the goal. SyCLoP works in two steps: lead computation and region selection. The lead is computed based on the coverage and the frequency of selection; the former is obtained from the sampling-based motion planner (continuous layer) and the latter by counting how many times a cell of the discrete space has been selected. The selection of the region is performed based on the available free volume of the region (regions with high free volume are preferred for exploration). The process continues until the planner finds a sample in the goal region. SyCLoP is referred to as SyCLoP-RRT or SyCLoP-EST depending on the planner used in the continuous layer.
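KPIECE's cell-selection rule can be made concrete with a small sketch. The exact weighting used by KPIECE (and by its OMPL implementation) differs; the score below only illustrates the five criteria listed above, with all field names and weights chosen here for the example.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    coverage: float = 1.0    # how much of the cell is already covered
    selections: int = 1      # how many times the cell has been selected
    neighbors: int = 1       # number of instantiated neighboring cells
    last_selected: int = 0   # iteration at which the cell was last selected
    score: float = 1.0       # expansion score (higher = expands easily)

def importance(cell: Cell, iteration: int) -> float:
    """Illustrative importance: prefer cells that are sparsely covered,
    rarely selected, have few neighbors, were selected recently, and
    expand easily (not the exact KPIECE formula)."""
    recency = 1.0 / (1 + iteration - cell.last_selected)
    return (cell.score * recency) / (cell.coverage
                                     * cell.selections
                                     * cell.neighbors)

def select_cell(cells, iteration):
    # The cell with maximum importance is selected for expansion.
    return max(cells, key=lambda c: importance(c, iteration))
```

A well-covered, often-selected cell thus loses out to a fresh, easily expanded one, which is what biases the exploration toward the frontier of the covered region.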
2.1 Ontological Physics-Based Motion Planning Physics-based motion planning comprises a new class of planning algorithms that go a step further towards physical realism, by also taking into account possible interactions between bodies and physics-based constraints (such as gravity and friction, which condition the actions and their results). The search for collision-free trajectories is no longer the final aim; now collisions with some objects may be allowed, i.e. these algorithms also consider manipulation actions (such as push actions) in order to compute an appropriate trajectory. The incorporation of the dynamic interaction (for manipulation) between rigid bodies and of other physics-based constraints increases the dimensionality of the state space and the computational complexity. In some cases, particularly for systems with complex dynamics, the problem may not even be tractable. Ontological physics-based motion planning is a recently proposed approach that tries to cope with the aforementioned challenges [13]. It takes advantage of a Prolog-based reasoning process over knowledge of the objects and manipulation actions (this knowledge is represented in the form of an ontology). The reasoning process is used to improve computational efficiency and to make the manipulation problem computationally tractable. It applies a hybrid approach consisting of two main layers: a knowledge-based reasoning layer and a motion planning layer. The knowledge-based reasoning layer uses the manipulation ontology to derive knowledge, called abstract knowledge, that contains information about the objects and their properties (such as their manipulatable regions, e.g. the regions from where an object can be pushed), and about the initial and goal states of the robot.
The abstract knowledge, moreover, categorizes the objects into fixed and manipulatable ones, with the manipulatable objects further divided into freely and constraint-oriented manipulatable ones (e.g. some objects can be pushed from any region while others may only be pushed from some given regions and in some predefined directions). Furthermore, the abstract knowledge also determines the geometric positions of the objects, to distinguish whether the goal state is occupied or not. The motion planning layer includes a reasoning process that infers from the abstract knowledge. The inferred knowledge is called instantiated knowledge and is a dynamic knowledge that is updated at each instant of time. The motion planning layer employs a sampling-based kinodynamic motion planner (like KPIECE or RRT) and a physics engine used as state propagator. After the propagation step, the new state is accepted by the planner if all the bodies satisfy the manipulation constraints imposed by the instantiated knowledge (e.g. a car-like object can only be pushed forward or backward, and therefore any state that results in a collision with its lateral sides is disallowed). In this way the growth of the tree-like data structure of the planner is more efficient, since useless actions are pruned. 3 Benchmarking Parameters A wide variety of kinodynamic motion planners is available; their planning strategies are conceptually different from one another, and this can significantly affect the performance of physics-based planning. Criteria are proposed here to evaluate the performance of kinodynamic motion planners for physics-based planning. It is suggested that, along with the computational complexity, it is also necessary to evaluate the dynamical parameters that determine the quality of the computed solution path. This quality is determined by estimating the power consumed by the robot while moving along the computed path, by the total amount of action (i.e.
a dynamic attribute of the trajectory, explained later in this section) of the computed trajectory, and by the smoothness of the trajectory. The computational complexity is measured by the planning time and the average success rate of each planner. A planner is said to be the most appropriate if it is optimal according to the criteria stated above. The choice of physics engine (such as ODE, Bullet or PhysX) should not significantly affect the simulation results: all of them are designed around the same basic Newtonian physics, and although their performance and accuracy may vary slightly, their simulation results are almost the same. The following performance parameters have been established to evaluate different kinodynamic motion planners within the framework of the ontological physics-based motion planner. The trajectories given by the kinodynamic motion planners are described by a list of forces and their durations that have to be consecutively applied to move the robot (either in a collision-free way or possibly pushing some manipulatable objects): • Action: It is a dynamical property of a physical system, defined in the form of a functional $\mathcal{A}$ that takes as input the sequence of moves that defines a trajectory and returns a scalar: $$\mathcal{A}=\sum\limits_{i}^{n}|\mathbf{f_{i}}|\Delta t_{i}\varepsilon_{i},$$ (1) where $\mathbf{f}_{i}$, $\Delta t_{i}$ and $\varepsilon_{i}$ are, respectively, the applied control forces, their durations and the resulting covered distances. • Power consumed: The total amount of power $\mathcal{P}$ consumed by the robot to move from the start to the goal state is computed as: $$\mathcal{P}=\sum\limits_{i}^{n}\frac{\mathbf{f_{i}}\cdot\mathbf{d_{i}}}{\Delta t_{i}},$$ (2) where $\mathbf{f}_{i}$, $\mathbf{d}_{i}$ and $\Delta t_{i}$ are, respectively, the applied control forces, the resulting displacement vectors and the time durations.
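Equations (1) and (2) translate directly into code. The sketch below takes a trajectory as a list of (force, displacement, duration) moves for a planar robot; taking $\varepsilon_{i}$ as the norm of the displacement vector is an approximation made here for the example (the covered distance may exceed the net displacement along a curved move).

```python
import math

def action_and_power(moves):
    """Compute the action  A = sum |f_i| dt_i eps_i   (Eq. 1)
    and the power          P = sum (f_i . d_i) / dt_i (Eq. 2)
    for a planar trajectory given as (force, displacement, duration) moves.
    """
    action = 0.0
    power = 0.0
    for f, d, dt in moves:
        f_mag = math.hypot(f[0], f[1])   # |f_i|
        eps = math.hypot(d[0], d[1])     # covered distance, approximated by |d_i|
        action += f_mag * dt * eps
        power += (f[0] * d[0] + f[1] * d[1]) / dt
    return action, power
```

For a single move with force (3, 4) N, displacement (0.6, 0.8) m and duration 2 s, this gives A = 5 · 2 · 1 = 10 and P = 5/2 = 2.5 W.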
• Smoothness: The smoothness $\mathcal{S}$ of a trajectory can be measured as a function of the jerk [24], the time derivative of the acceleration: $$\mathcal{J}(t)=\frac{d\,a(t)}{dt}.$$ (3) For a given trajectory $\tau$ the smoothness is defined as the integral of the squared jerk along $\tau$: $$\mathcal{S}=\int_{t_{i}}^{t_{f}}\mathcal{J}(t)^{2}\,dt,$$ (4) where $t_{i}$ and $t_{f}$ are the initial and final times, respectively. • Planning time: It is the total time consumed by the ontological physics-based planner to compute a solution trajectory. • Success rate: It is computed based on the number of successful runs. 4 Results and Discussion In this section the results of the benchmarking of kinodynamic motion planners for ontological physics-based planning are presented. The benchmarking is performed using The Kautham Project [25], a C++-based open-source platform for motion planning that includes geometric, kinodynamic, and ontological physics-based motion planners. It uses the planning algorithms offered by the Open Motion Planning Library (OMPL) [26], a C++-based open-source motion planning library, and ODE as state propagator for the physics-based planning. 4.1 Simulation Setup The simulation setup consists of a robot (green sphere), freely manipulatable bodies (blue cubes), constraint-oriented manipulatable bodies (purple cubes), and fixed bodies (triangular prisms, walls, and floor). The benchmarking is performed with the three different scenes shown in Fig. 1, which differ in their degree of clutter. The robot is depicted at its initial configuration, with the goal configuration painted as a yellow circle. Fig. 1-a shows the simplest scene, which consists of a robot, freely manipulatable bodies, and fixed bodies. The second scene, shown in Fig. 1-b, consists of a robot, freely manipulatable bodies, fixed bodies, and a constraint-oriented body.
In this scene the narrow passage is occupied by the constraint-oriented manipulatable body (it can only be pushed vertically, along the y-axis), which has to be pushed away by the robot in order to clear the path towards the goal. It is important to note that, since no collision-free path is available from the start to the goal state, neither geometric nor kinodynamic planners are able to compute a path; only physics-based planners can compute one, by pushing the object away. The final scene, depicted in Fig. 1-c, has the highest degree of clutter. The goal is occupied by a constraint-oriented manipulatable body (it can only be pushed horizontally, along the x-axis); in order to reach the goal region the robot needs to free it by pushing the body away. As before, no collision-free path exists. The same planning parameters are used for all the planners: goal bias equal to 0.05, sampling control range between -10 N and 10 N, and propagation step size equal to 0.07 s. 4.2 Benchmarking Results The ontological physics-based motion planner is run 10 times for each scene and for each of the kinodynamic motion planners summarized in Section 2. The average values of the benchmarking parameters are presented in the form of histograms. Fig. 2 shows a sequence of snapshots of a sample execution of the ontological physics-based motion planner using KPIECE. In order to illustrate the coverage of the configuration space and the solution trajectories, Fig. 3 depicts the configuration spaces and solution paths computed by the different kinodynamic planners. Fig. 5 shows the average planning time (with the maximum allowed planning time set to 500 s). All the planners were able to compute the solution within the maximum planning time except EST. Among all the planners, KPIECE computed the solution most efficiently. The success rate of the planners is described in Fig. 5.
It is computed for each scene individually based on the number of successful runs (i.e. how many times the planner computes the solution within the maximum time limit); the average success rate of each planner is then determined by averaging the successful runs over the three scenes. The results show that KPIECE has the highest overall success rate. SyCLoP-RRT also shows an impressive success rate, whereas EST has a zero success rate. The results for the dynamical parameters, action and power, are shown in Fig. 6 and Fig. 7, respectively. Regarding action, the solution trajectory with the minimum amount of action is considered the most appropriate. Among the planners that computed the solution within the given time, on average, KPIECE has the minimum amount of action and SyCLoP-RRT the maximum. Regarding power, it is desirable that the robot consume a minimum amount of power while moving along the solution. Our analysis shows that SyCLoP-RRT finds the most power-optimal trajectory whereas KPIECE finds the worst. The results for RRT and SyCLoP-EST are almost the same. Regarding trajectory smoothness, Fig. 8 shows the comparison. SyCLoP-RRT computes the smoothest trajectory among all the planners, whereas KPIECE shows the worst results in terms of smoothness. Since EST was not able to compute a solution within the given time, the action, power and smoothness for EST are set to infinity and not shown in the histograms. 4.3 Discussion We proposed benchmarking criteria for physics-based planning. Based on the proposed criteria, the performance of physics-based planning was evaluated using different kinodynamic motion planners. Our analysis shows that in terms of planning time, success rate and action value, KPIECE is the most suitable planner. SyCLoP-RRT shows significant results in terms of the smoothness of the computed trajectory and the power consumed by the robot moving along the computed path.
The planning time and success rate for SyCLoP-RRT are also impressive. RRT shows average results throughout the evaluation. SyCLoP-EST is good in terms of action value and power consumption, but its planning time is very high and it has a low success rate. EST was not able to compute a solution within the given time and has a zero success rate. 5 Conclusion and Future Work This paper proposed evaluation criteria for physics-based motion planners. The proposed benchmarking criteria compute dynamical parameters (such as the power consumed by the robot to move along the solution path) to evaluate the quality of the computed solution path. Further, based on the proposed benchmarking criteria, the performance of the ontological physics-based motion planner (using different kinodynamic motion planners) was evaluated, and the computed properties of each kinodynamic motion planner were discussed in detail. For now the evaluation criteria were applied to simple scenes, with the push action as the sole manipulation action; as future work the proposed benchmarking criteria will be applied to mobile manipulators in order to also benchmark grasping and pick-and-place manipulation actions. References [1] Tsianos, K.I., Sucan, I.A., Kavraki, L.E.: Sampling-based robot motion planning: Towards realistic applications. Computer Science Review 1(1) (2007) 2–11 [2] Ladd, A.M., Kavraki, L.E.: Motion planning in the presence of drift, underactuation and discrete system changes. In: Robotics: Science and Systems. (2005) 233–240 [3] Zickler, S., Veloso, M.M.: Variable level-of-detail motion planning in environments with poorly predictable bodies. In: Proc. of the European Conf. on Artificial Intelligence, Montpellier. (2010) 189–194 [4] Russell, S.: Open Dynamics Engine. http://www.ode.org/ (2007) [5] Reif, J.H.: Complexity of the mover’s problem and generalizations. In: Proc. of the 20th Annual IEEE Conf. on Foundations of Computer Science.
(1979) 421–427 [6] Cheng, P., Pappas, G., Kumar, V.: Decidability of motion planning with differential constraints. In: Proc. IEEE Int. Conf. on Robotics and Automation. (2007) 1826–1831 [7] Zickler, S., Veloso, M.: Efficient physics-based planning: sampling search via non-deterministic tactics and skills. In: Proc. of the 8th Int. Conf. on Autonomous Agents and Multiagent Systems, Volume 1. (2009) 27–33 [8] Zickler, S., Veloso, M.: Playing creative soccer: Randomized behavioral kinodynamic planning of robot tactics. In: RoboCup 2008: Robot Soccer World Cup XII. Springer (2009) 414–425 [9] NVIDIA: PhysX. https://developer.nvidia.com/physx-sdk [10] Plaku, E.: Motion planning with discrete abstractions and physics-based game engines. In: Proc. of the Int. Conf. on Motion in Games. Springer (2012) 290–301 [11] Plaku, E., Kavraki, L., Vardi, M.: Motion planning with dynamics by a synergistic combination of layers of planning. IEEE Trans. on Robotics 26(3) (June 2010) 469–482 [12] Erwin, C.: Bullet physics library. http://bulletphysics.org (2013) [13] Muhayyuddin, Akbari, A., Rosell, J.: Ontological physics-based motion planning for manipulation. In: Proc. of the IEEE Int. Conf. on Emerging Technologies and Factory Automation (ETFA). (2015) [14] Sucan, I., Kavraki, L.E.: A sampling-based tree planner for systems with complex dynamics. IEEE Trans. on Robotics 28(1) (2012) 116–131 [15] Akbari, A., Muhayyuddin, Rosell, J.: Task and motion planning using physics-based reasoning. In: Proc. of the IEEE Int. Conf. on Emerging Technologies and Factory Automation. (2015) [16] Akbari, A., Muhayyuddin, Rosell, J.: Reasoning-based evaluation of manipulation actions for efficient task planning. In: ROBOT2015: Second Iberian Robotics Conference. Springer (2015) [17] Lozano-Pérez, T.: Spatial Planning: A Configuration Space Approach. IEEE Trans.
on Computers 32(2) (1983) 108–120 [18] Kavraki, L., Svestka, P., Latombe, J.C., Overmars, M.: Probabilistic roadmaps for path planning in high-dimensional configuration spaces. IEEE Trans. on Robotics and Automation 12(4) (Aug 1996) 566–580 [19] LaValle, S.M., Kuffner, J.J.: Rapidly-Exploring Random Trees: Progress and Prospects. Algorithmic and Computational Robotics: New Directions (2001) 293–308 [20] Donald, B., Xavier, P., Canny, J., Reif, J.: Kinodynamic motion planning. Journal of the ACM 40(5) (1993) 1048–1066 [21] LaValle, S.M., Kuffner, J.J.: Randomized kinodynamic planning. The Int. Journal of Robotics Research 20(5) (2001) 378–400 [22] Hsu, D., Latombe, J.C., Motwani, R.: Path planning in expansive configuration spaces. In: Proc. of the IEEE Int. Conf. on Robotics and Automation. Volume 3. IEEE (1997) 2719–2726 [23] Hsu, D., Kindel, R., Latombe, J.C., Rock, S.: Randomized kinodynamic motion planning with moving obstacles. The Int. Journal of Robotics Research 21(3) (2002) 233–255 [24] Hogan, N.: Adaptive control of mechanical impedance by coactivation of antagonist muscles. IEEE Trans. on Automatic Control 29(8) (1984) 681–690 [25] Rosell, J., Pérez, A., Aliakbar, A., Muhayyuddin, Palomo, L., García, N.: The Kautham Project: A teaching and research tool for robot motion planning. In: Proc. of the IEEE Int. Conf. on Emerging Technologies and Factory Automation. (2014) [26] Sucan, I., Moll, M., Kavraki, L.E.: The open motion planning library. IEEE Robotics & Automation Magazine 19(4) (2012) 72–82
Discovery of a 6.4 keV Emission Line in a Burst from SGR 1900+14 Tod E. Strohmayer and Alaa I. Ibrahim (Department of Physics, George Washington University), Laboratory for High Energy Astrophysics, NASA’s Goddard Space Flight Center, Greenbelt, MD 20771 Abstract We present evidence of a 6.4 keV emission line during a burst from the soft gamma-ray repeater SGR 1900+14. The Rossi X-ray Timing Explorer (RXTE) monitored this source extensively during its outburst in the summer of 1998. A strong burst observed on August 29, 1998 revealed a number of unique properties. The burst exhibits a precursor and is followed by a long ($\sim 10^{3}$ s) tail modulated at the 5.16 s stellar rotation period. The precursor has a duration of $\approx 0.85$ s and shows both significant spectral evolution and an emission feature centered near 6.4 keV during the first 0.3 s of the event, when the X-ray spectrum was hardest. The continuum during the burst is well fit with an optically thin thermal bremsstrahlung (OTTB) spectrum with the temperature ranging from $\approx 40$ to 10 keV. The line is strong, with an equivalent width of $\sim 400$ eV, and is consistent with Fe K-$\alpha$ fluorescence from relatively cool material. If the rest-frame energy is indeed 6.4 keV, then the lack of an observed redshift indicates that the source is at least $\sim 80$ km above the neutron star surface. We discuss the implications of the line detection in the context of models for SGRs. Keywords: X-rays: bursts; stars: individual (SGR 1900+14); stars: neutron. Accepted for Publication in the Astrophysical Journal Letters 1 Introduction Soft Gamma-ray Repeaters (SGRs) and Anomalous X-ray Pulsars (AXPs) are almost certainly young neutron stars with spin periods in the 5 - 10 s range and large spin down rates.
The SGRs occasionally and unexpectedly produce short energetic ($>10^{41}$ ergs s${}^{-1}$) bursts of X-ray and $\gamma$-ray radiation, and two of these sources (SGRs 0526-66 and 1900+14) have also produced so-called ‘giant flares’, like the famous March 5th event, with luminosities upwards of $10^{44}$ ergs s${}^{-1}$. The persistent X-ray luminosities of both SGRs and AXPs are much larger than their spin down luminosities, implying some other source of energy powers the X-ray emission. The recent discoveries of X-ray pulsations from the SGR persistent X-ray counterparts (Kouveliotou et al. 1998, 1999; Hurley et al. 1999a) and their large period derivatives, $\dot{P}\approx 10^{-10}$ s s${}^{-1}$, when interpreted in the context of magnetic dipole radiation, have provided support for the hypothesis, first proposed by Duncan & Thompson (1992), that SGRs are magnetars, neutron stars with dipolar magnetic fields much larger than the quantum critical field $B_{c}=m_{e}^{2}c^{3}/e\hbar\approx 4.4\times 10^{13}$ G (see also Paczynski 1992, Thompson & Duncan 1995, 1996). It has also been suggested that AXPs may be magnetars, but in a later, less active evolutionary state (Kouveliotou 1998). Marsden, Rothschild & Lingenfelter (1999) suggest that the observed variations in $\dot{P}$ from SGR 1900+14 are inconsistent with magnetic dipole spin down and that the torque may be dominated by a relativistic wind, in which case a dipolar field may not have to be super-critical. More recently, Harding, Contopoulos & Kazanas (1999) have computed the torque due to an episodic particle wind and suggest that a magnetar field can still be consistent with the observed spin down rates and supernova remnant ages as long as the wind duty cycle and luminosity are within well defined limits. Attention has also been focused recently on accretion models for these sources (see Marsden et al. 1999; Chatterjee, Hernquist & Narayan 1999; and Alpar 1999).
These models suggest the spin down torques may be due to disk accretion. However, direct evidence for such disks is extremely limited, and with daunting constraints on the presence of companions (Mereghetti 1999; Mereghetti, Israel, & Stella 1998; Hulleman 2000) it seems unlikely that a binary companion could be the source of such material. These models also have difficulty explaining both the presence of bursts in SGRs and the apparently very quiet spin down of at least some of the AXPs (see Kaspi, Chakrabarty, & Steinberger 1999; Baykal et al. 2000). Although much evidence supports the magnetar hypothesis, recent findings have provided new challenges, and it remains for continued observations to either vindicate or disprove the hypothesis. In this Letter we describe recent X-ray spectral analysis of an unusual burst from SGR 1900+14 which provides the first strong evidence for line emission in an SGR burst. Here we focus on the evidence for both spectral features and spectral evolution during a precursor event to the strong burst observed with RXTE and BATSE on August 29, 1998 UT (see Ibrahim et al. 2000). We note that this event was also seen by the BeppoSAX Gamma-ray Burst Monitor and Ulysses (Hurley et al. 2000). 2 RXTE Observations On 29 August 1998, at 10:16:32.5 UT, a bright burst was observed during RXTE pointed observations of SGR 1900+14. The burst is unusual in that it showed a rather long ($\sim 1$ s) precursor and a long decaying tail modulated with strong 5.16 s pulsations, similar in this respect to the giant flare which occurred on August 27, 1998 (see Hurley et al. 1999b). Here we describe in detail the spectral properties of the precursor. We used data from the proportional counter array (PCA) in an event mode which provides the time of each good X-ray event to a resolution of $\approx 122$ $\mu$s (1/8192 s) and its energy in one of 64 bins across the 2 - 90 keV PCA response. The event was also observed by BATSE (Ibrahim et al. 2000).
Figure 1 shows the time history in the entire PCA bandpass of the precursor with 2 ms time resolution ($\Delta t=1/512$ s). The precursor is rather long ($\sim 0.85$ s) compared to typical SGR burst durations and is composed of several peaks. Multi-peaked compositions are not atypical for such bursts. Not only is the precursor rather long, but the main burst itself lasted $\sim 3.5$ s, also quite long for an SGR burst. In order to investigate the precursor’s spectrum, we accumulated data in 5 independent intervals. These are labeled in Figure 1. We chose 5 intervals as a trade-off between having sufficient signal in each interval and a desire to search for spectral changes within the burst. We further chose the interval boundaries so that each interval would contain the same number of counts and therefore yield spectra of similar statistical quality. We used a $\sim 1000$ s segment of preburst data as a background estimate. Thus our results describe the spectrum of the burst emission alone. We note, however, that there are very few background counts in the accumulated spectra. Thus, in terms of the derived spectral shape, including the presence of narrow spectral features, the background is essentially negligible. We began by fitting the accumulated spectra with an optically thin thermal bremsstrahlung (OTTB) model including photoelectric absorption (bremss $\times$ wabs in XSPEC v10.0), a spectral form which has often been used to characterize SGR bursts. We found that the OTTB model provides an adequate description of the continuum in all the intervals. In the first two intervals inspection of the residuals suggested a narrow excess in the vicinity of 6.4 keV. We then added these two intervals together and fit the combined spectrum with the same model. To quantitatively assess the significance of the excess we added a narrow gaussian emission line to the model and evaluated the significance of the change in $\chi^{2}$ using the F-test (see Bevington 1992).
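The F-test used here (following Bevington) can be sketched as follows. The numerical values below are placeholders for illustration only, since the actual fit statistics live in Table 1, which is not reproduced here.

```python
def f_statistic(chi2_old, dof_old, chi2_new, dof_new):
    """F-statistic for the significance of extra fit parameters:
    the chi^2 improvement per added parameter divided by the
    reduced chi^2 of the larger model (Bevington 1992)."""
    extra = dof_old - dof_new           # number of added parameters
    return ((chi2_old - chi2_new) / extra) / (chi2_new / dof_new)

# Hypothetical fit statistics: OTTB continuum alone vs. OTTB plus a
# narrow gaussian line (two extra free parameters: centroid and norm).
F = f_statistic(chi2_old=75.0, dof_old=50, chi2_new=45.0, dof_new=48)
```

The chance probability then follows from the F distribution with (extra, dof_new) degrees of freedom; a large F means the added line component absorbed far more $\chi^{2}$ than expected from noise.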
Since the width of the feature is of the same order as the PCA instrumental width, we are justified in using a narrow line (i.e., a zero-width gaussian). We also checked a posteriori that allowing a finite width did not produce a significant decrease in $\chi^{2}$. Our analysis indicates that the narrow line significantly improves $\chi^{2}$; we find a single-trial significance for the 6.4 keV feature of $3.8\times 10^{-5}$ using the spectrum accumulated during intervals 1 and 2. Table 1 summarizes the spectral fits to all 5 intervals, both with and without the line component, and includes the F-test probabilities. Figure 2 shows the residuals (Data - Model, in units of standard deviations) as a function of channel energy for the sum of intervals 1 and 2 using only the best fitting OTTB continuum model (a), and the continuum plus gaussian line at 6.4 keV (b). After modeling the feature at 6.4 keV we still see a weaker excess near $\approx 13$ keV. If we just model this feature while ignoring the 6.4 keV excess, we find an F-test significance for the additional 2 parameters of only 0.075, so statistically it is much less compelling than the 6.4 keV feature; however, we note that the fitted centroid is consistent with a harmonic relationship to the 6.4 keV feature. We did not find any evidence for significant excesses in the residuals from intervals 3 through 5. To emphasize this we show in figure 3 the residuals from a fit to the sum of intervals 4 and 5 using the OTTB model, which reveal no evidence for an excess. We note that the OTTB continuum is much softer in these intervals than the first two (see Table 1 and Ibrahim et al. 2000). At the PCA count rates observed during the precursor, deadtime effects are not entirely negligible. We have shown elsewhere (Ibrahim et al.
2000) that for the observed rates during the precursor, the effects of deadtime, and in this case more crucially, pulse pile-up, are not sufficient to explain the observed changes in the spectral continuum, nor can pile-up itself produce a narrow spectral feature. The single-trial significance for the 6.4 keV feature is about $4\times 10^{-5}$. We did analyze a number of independent intervals, so in estimating an overall detection significance we must pay a trials penalty. Although the lines were detected in the first intervals examined, we can be conservative and use 6 trials, one for each of the five independent intervals and one for the spectrum obtained by adding intervals 1 and 2. Even with this conservative number of trials we still have a significance of $2.4\times 10^{-4}$, a robust detection. That the line is present during only a portion of the burst, and not in intervals with similar count rates only tenths of seconds apart, provides a solid argument that the feature is not instrumental. Further, the line has a large equivalent width (EW), much larger in fact than any previously reported imperfections or residuals that could plausibly be attributed to the PCA response matrix (FTOOLS V4.2 and PCA response matrix generator MARFRMF V3.2.1; Jahoda et al. 2000). All these arguments provide very strong evidence that the line feature at 6.4 keV is real and therefore intrinsic to the source. 3 Summary and Discussion The spectral behavior reported above is unique in several ways. First, a narrow spectral feature in an SGR burst has not to our knowledge been previously reported, and second, the burst also shows a dramatic spectral softening. Strohmayer & Ibrahim (1998) demonstrated that some bursts from SGR 1806-20 also show significant spectral evolution. Interestingly, the earlier result from SGR 1806-20 is similar in that the trend is for hard to soft evolution.
Finally, the line feature is only present during the hardest spectral intervals, suggesting a possible connection between these properties. The presence of line emission during an SGR burst raises a host of interesting questions concerning the production mechanism and has many implications for models of SGRs. It is beyond the scope of this paper to attempt an exhaustive description of possible models; rather, we qualitatively discuss several possibilities. Of the various models proposed for SGR bursts the most often discussed have been the magnetar model (Thompson & Duncan 1995) and models based on sudden accretion (see Colgate & Petschek 1981; Epstein 1985; Colgate & Leonard 1993; Katz, Toole & Unruh 1994). To date there has been little direct evidence to support accretion scenarios. An intriguing possibility, however, is that the line is due to fluorescence of iron in relatively cool material located near the neutron star. Such features have been observed in a number of astrophysical systems, including accreting X-ray pulsars and magnetic CVs (see White, Nagase & Parmar 1993; Ezuka & Ishida 1999; and references therein). In the X-ray pulsars some portion of the line EW is correlated with the observed absorbing column, $n_{H}$, indicating that some of the line EW is due to fluorescence in the absorbing material, most likely the stellar wind from the secondary. In some circumstances there is evidence for an uncorrelated EW, indicating an additional source of fluorescing material, and it has been suggested that matter might accumulate in the Alfven shell (Inoue 1985; Basko 1980) or perhaps an accretion disk (see Bai 1980). The appearance of lines in the disk accretors Her X-1 and Cen X-3 indicates that a disk can also be an efficient reprocessor. The fluorescent lines from accreting X-ray pulsars are strong, with EWs greater than 100 eV. This is at least qualitatively similar to the feature described here.
If the line is due to 6.4 keV iron emission in the rest frame, then the fluorescing material cannot be the stellar surface itself because of the absence of any significant redshift. Assuming the rest-frame energy of the line is 6.4 keV, we can place a 90% confidence lower limit of $\approx 80$ km on the altitude of the fluorescing material above a typical neutron star ($M=1.4\,M_{\odot}$ and $R=10$ km). Note that this estimate also ignores any possible effects of a magnetar strength field on the line spectrum. However, the question remains as to the source of the fluorescing material. SGR giant flares likely produce a hot, optically thick electron-positron plasma in the magnetosphere (Thompson & Duncan 1995). Energy from this radiating plasma will likely be conducted into the surface of the star, which increases the crust temperature, and may also blow off a baryon wind (Thompson & Duncan 1995). It is conceivable that an ablation process caused either by the precursor itself, or perhaps by the giant flare which occurred on August 27th, 1998, may have ejected enough iron-rich material into the neutron star environment to produce the fluorescence. Somewhat related processes have been discussed in the context of the putative iron lines from gamma-ray burst afterglows (see Piro et al. 1999; Ghisellini, Lazzati, & Campana 1999). The SGR phenomenon reported here could conceivably be similar but on a smaller scale. The apparent changes in spin down rate observed in SGR 1900+14 (Marsden, Rothschild & Lingenfelter 1999) have been used to argue for the presence of circumstellar material to produce the spin down torques. This material might be co-moving ejecta from the supernova explosion (Marsden et al. 1999), or perhaps a fall back accretion disk (see Chatterjee et al. 1999; Alpar 1999). It is possible that such material could be the source of a fluorescence line. Another clue would seem to be the disappearance of the line as the continuum softens.
This appears to be qualitatively consistent with fluorescence as the line mechanism, as the line strength will depend on the number of photons above the Fe K-edge absorbed in the fluorescing material, and as the spectrum hardens (softens) this quantity will increase (decrease). Although the weaker excess at $\sim 13$ keV is not compellingly significant on its own, the harmonic relationship with the 6.4 keV feature is suggestive of cyclotron emission from accreting pulsars (see Nagase 1989; Dal Fiume et al. 1999). For magnetar-strength fields electron cyclotron transitions would lie above an MeV, well above the bandpass of RXTE/PCA. However, Duncan & Thompson (2000) have recently pointed out the following mechanism involving proton and alpha particle cyclotron transitions. In the magnetar model, the giant flare on August 27th produces a hot ($T\sim$ MeV) trapped fireball in the stellar magnetosphere. Heavy nuclei are ejected from the stellar surface into this fireball, where they photodissociate and subsequently settle to form a thin layer of light elements on the stellar surface. In the August 29 precursor which follows, radiative heating of the star’s surface gives rise to emission at the cyclotron fundamental of each ion, separated by a factor two in frequency. If the 6.4 keV line is the He${}^{4}$ cyclotron fundamental, then the implied surface field strength is $2.6\times 10^{15}$ G. This estimate takes into account a redshift correction from the surface of a canonical neutron star. The alternative interpretation of the lines as first and second proton harmonics in $1.3\times 10^{15}$ G seems less plausible. In any case, the surface field strength somewhat exceeds the dipole field strength deduced from the spindown of SGR 1900+14; but note that the multi-peaked pulse profile in the tail of the Aug. 27th flare gives evidence for stronger higher-order multipoles (Thompson et al. 2000). 
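The two gravitational-redshift numbers quoted above, the $\approx 80$ km altitude limit and the $2.6\times 10^{15}$ G He${}^{4}$ field, can be reproduced with a short script. The 6.25 keV lower bound on the line centroid used below is a hypothetical stand-in for the actual 90% confidence value (which is not given in this text), chosen only to illustrate the arithmetic.

```python
import math

G, c = 6.674e-8, 2.998e10                 # cgs
hbar, e_cgs = 1.0546e-27, 4.803e-10
keV = 1.602e-9                            # keV in erg
M = 1.4 * 1.989e33                        # neutron star mass (g)
R = 1.0e6                                 # stellar radius (cm)

def redshift_factor(r):
    """sqrt(1 - r_s/r): observed energy = emitted energy * this factor."""
    return math.sqrt(1.0 - 2.0 * G * M / (r * c * c))

# (1) Altitude limit: if the line is Fe fluorescence with rest energy E0,
# a lower bound E_min on the observed centroid caps the allowed redshift
# and so bounds the emission radius from below.
E0, E_min = 6.4, 6.25                     # keV; E_min is an assumed bound
r_min = (2.0 * G * M / c**2) / (1.0 - (E_min / E0) ** 2)
altitude_km = (r_min - R) / 1e5

# (2) He-4 cyclotron field: correct the observed 6.4 keV back to the
# surface frame, then invert the ion cyclotron fundamental
# E = hbar * Z * e * B / (m_ion * c).
E_surf = 6.4 * keV / redshift_factor(R)   # erg, at the stellar surface
Z, m_He4 = 2.0, 4.0026 * 1.6605e-24       # alpha-particle charge, mass (g)
B_He4 = E_surf * m_He4 * c / (hbar * Z * e_cgs)
```

Both outputs land on the values quoted in the text: an altitude floor near 80 km and a surface field near $2.6\times 10^{15}$ G.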
More work is required to determine if this model can account for the observed line EW and other details, but it is an intriguing possibility. It is sometimes difficult to draw firm conclusions based on a single example; however, our results argue strongly for detailed spectral studies of the whole sample of bursts observed with RXTE from SGR 1900+14, a project we are pursuing. Studies with instruments possessing greater spectral resolution (such as Chandra and XMM, if they can handle the high fluxes) may hold great promise for further testing the magnetar hypothesis for SGRs. A. I. is grateful to Samar Safi-Harb and Eamon Harper for many useful discussions. We also thank Jean Swank, Craig Markwardt, Dave Marsden, Robert Duncan and Chris Thompson for their input and comments. References
Alpar, M. A. 1999, astro-ph/9912228
Bai, T. 1980, ApJ, 239, 328
Basko, M. M. 1980, A&A, 87, 330
Baykal, A., Swank, J. H., & Stark, M. J. 2000, A&A, submitted
Bevington, P. R. 1992, “Data Reduction and Error Analysis for the Physical Sciences” (New York: McGraw-Hill), 2nd ed.
Bezchastnov, V. G., et al. 1996, Gamma-Ray Bursts, 3rd Huntsville Symposium, AIP 384, 907
Chatterjee, P., Hernquist, L., & Narayan, R. 1999, astro-ph/9912137
Colgate, S. A., & Petschek, A. G. 1981, ApJ, 248, 771
Colgate, S. A., & Leonard, P. J. T. 1993, Proc. of BATSE Gamma-ray Burst Workshop, 21
Dal Fiume, D., et al. 1999, Adv. Space Res., in press
Duncan, R. C., & Thompson, C. 2000, private communication
Duncan, R. C., & Thompson, C. 1992, ApJ, 392, L9
Epstein, R. I. 1985, ApJ, 291, 822
Ezuka, H., & Ishida, M. 1999, ApJS, 120, 277
Feroci, M., et al. 1999, ApJ, 515, L9
Frail, D. A., Kulkarni, S. R., & Bloom, J. S. 1999, Nature, 398, 127
Ghisellini, G., Lazzati, D., & Campana, S. 1999, A&AS, 138, 545
Harding, A. K., Contopoulos, I., & Kazanas, D. 1999, ApJ, 525, L125
Hulleman, F., van Kerkwijk, M. H., Verbunt, F. W. M., & Kulkarni, S. R. 2000, astro-ph/0002474
Hurley, K., et al. 2000, private communication
Hurley, K., et al. 1999a, ApJ, 510, L111
Hurley, K., et al. 1999b, Nature, 397, 41
Ibrahim, A. I., et al. 2000, ApJ, submitted
Inoue, H. 1985, SSRev, 40, 317
Jahoda, K. 2000, private communication
Kaspi, V. M., Chakrabarty, D., & Steinberger, J. 1999, ApJ, 525, L33
Katz, J. I., Toole, H. A., & Unruh, S. H. 1994, ApJ, 437, 727
Kouveliotou, C., et al. 1998, Nature, 393, 235
Kouveliotou, C., et al. 1999, ApJ, 510, L115
Kulkarni, S. R., Frail, D. A., Kassim, N. E., Murakami, T., & Vasisht, G. 1994, Nature, 368, 129
Marsden, D., Rothschild, R. E., & Lingenfelter, R. E. 1999, ApJ, 520, L107
Marsden, D., Lingenfelter, R. E., Rothschild, R. E., & Higdon, J. C. 1999, astro-ph/9912207
Mereghetti, S., Israel, G. L., & Stella, L. 1998, MNRAS, 296, 689
Mereghetti, S. 1999, astro-ph/9911252
Murakami, T., et al. 1994, Nature, 368, 127
Nagase, F. 1989, PASJ, 41, 1
Paczynski, B. 1992, Acta Astronomica, 42, 145
Piro, L., et al. 1999, A&AS, 138, 431
Thompson, C., & Duncan, R. C. 1995, MNRAS, 275, 255
Thompson, C., & Duncan, R. C. 1996, ApJ, 473, 322
Thompson, C., Duncan, R. C., Woods, P. M., Kouveliotou, C., Finger, M. H., & van Paradijs, J. 2000, astro-ph/9908086
White, N. E., Nagase, F., & Parmar, A. N. 1993, in X-ray Binaries, ed. W. H. G. Lewin, J. van Paradijs, & E. P. J. van den Heuvel (Cambridge Univ. Press), 1
Divisible operators in von Neumann algebras David Sherman, Department of Mathematics, University of Virginia, P.O. Box 400137, Charlottesville, VA 22904, [email protected] Abstract. Relativizing an idea from multiplicity theory, we say that an element $x$ of a von Neumann algebra $\mathcal{M}$ is $n$-divisible if $W^{*}(x)^{\prime}\cap\mathcal{M}$ unitally contains a factor of type $\text{I}_{n}$. We decide the density of the $n$-divisible operators, for various $n$, $\mathcal{M}$, and operator topologies. The most sensitive case is $\sigma$-strong density in $\text{II}_{1}$ factors, which is closely related to the McDuff property. We make use of Voiculescu’s noncommutative Weyl-von Neumann theorem to obtain several descriptions of the norm closure of the $n$-divisible operators in $\mathcal{B}(\ell^{2})$. Here are two consequences: (1) in contrast to the reducible operators, of which they form a subset, the divisible operators are nowhere dense; (2) if an operator is a norm limit of divisible operators, it is actually a norm limit of unitary conjugates of a single divisible operator. This is related to our ongoing work on unitary orbits by the following theorem, which is new even for $\mathcal{B}(\ell^{2})$: if an element of a von Neumann algebra belongs to the norm closure of the $\aleph_{0}$-divisible operators, then the $\sigma$-weak closure of its unitary orbit is convex. Key words and phrases: divisible operator, von Neumann algebra, ultrapower, Voiculescu’s theorem, approximate equivalence, unitary orbit 2000 Mathematics Subject Classification: Primary 47C15; Secondary 47A65, 46L10 1. Introduction Let $\mathcal{B}(\mathfrak{H})$ be the algebra of bounded linear operators on a Hilbert space $\mathfrak{H}$, and let $x\in\mathcal{B}(\mathfrak{H})$. The operator $x\oplus x\in\mathcal{B}(\mathfrak{H}\oplus\mathfrak{H})$, which applies $x$ to each summand simultaneously, may be thought of as the “double” of $x$.
Now the latter algebra is just $\mathbb{M}_{2}(\mathcal{B}(\mathfrak{H}))$, the $2\times 2$ matrices over $\mathcal{B}(\mathfrak{H})$, and it suggests how to double an operator $x$ in an unrepresented von Neumann algebra $\mathcal{M}$: let $x\oplus x\in\mathbb{M}_{2}(\mathcal{M})$ be the matrix with $x$ on the diagonal and zeroes elsewhere. Similarly one may take larger (even infinite) multiples of $x$. For any cardinal $n$, we employ Ernest’s notation ([E]) and set (1.1) $$\text{\textcircled{$n$}}x\overset{\text{def}}{=}1\otimes x\in\mathbb{M}_{n}\overline{\otimes}\mathcal{M}.$$ Here we write $\mathbb{M}_{n}$ for the factor of type $\text{I}_{n}$, even when $n$ is infinite. Note that a multiple of $x\in\mathcal{M}$ belongs to an algebra which may not be isomorphic to $\mathcal{M}$. We will want to know when a given $x\in\mathcal{M}$ can be written as $\text{\textcircled{$n$}}y$ for some $y$. In other words, when are there an algebra $\mathcal{N}$ and an isomorphism (1.2) $$\pi:\mathcal{M}\overset{\sim}{\to}\mathbb{M}_{n}\overline{\otimes}\mathcal{N},\qquad\pi(x)\in 1\otimes\mathcal{N}?$$ Clearly this would imply that the relative commutant $W^{*}(x)^{\prime}\cap\mathcal{M}$ unitally contains $\mathbb{M}_{n}$. And the converse is also valid, since the existence of such an $\mathbb{M}_{n}$ guarantees an “internal” isomorphism (1.3) $$\pi:\mathcal{M}\overset{\sim}{\to}\mathbb{M}_{n}\overline{\otimes}(\mathbb{M}_{n}^{\prime}\cap\mathcal{M}),\qquad\pi(x)\in 1\otimes(\mathbb{M}_{n}^{\prime}\cap\mathcal{M})$$ ([KR, Lemma 6.6.3 and Example 11.2.2]). We therefore make the following Definition 1.1. Let $n$ be a cardinal greater than 1, and let $\mathcal{M}$ be a von Neumann algebra. For $x\in\mathcal{M}$, we say that $x$ is $n$-divisible if the relative commutant $W^{*}(x)^{\prime}\cap\mathcal{M}$ unitally contains $\mathbb{M}_{n}$.
Similarly, for a $C^{*}$-algebra $\mathcal{A}$, we say that a *-homomorphism $\rho:\mathcal{A}\to\mathcal{M}$ is $n$-divisible if the relative commutant $\rho(\mathcal{A})^{\prime}\cap\mathcal{M}$ unitally contains $\mathbb{M}_{n}$. We also say that such an $x$ or $\rho$ is divisible if it is $n$-divisible for some $n$. One visualizes such an $x$ as $\left(\begin{smallmatrix}y&0&\ldots\\ 0&y&\ldots\\ \vdots&\vdots&\ddots\end{smallmatrix}\right)$; these are the operators which can be “divided by $n$.” But there is no hope of defining an operator quotient: if $\pi$ solves (1.2), so does $(\text{id}\otimes\alpha)\circ\pi$ for any $\alpha\in\text{Aut}(\mathcal{N})$. Division, unlike multiplication, necessarily involves isomorphism classes. We give further discussion of this in an Appendix. We will repeatedly use the following consequence of the basic structure theory of von Neumann algebras. The unfamiliar reader may see [KR, Lemma 6.5.6 and Section 6.6]. Lemma 1.2. For $2\leq n<\aleph_{0}$, a von Neumann algebra unitally contains $\mathbb{M}_{n}$ unless it has a type $\text{I}_{k}$ summand for some $k$ not divisible by $n$. A von Neumann algebra is properly infinite if and only if it contains $\mathcal{B}(\ell^{2})$ unitally. Divisibility is really just a variant of multiplicity. Recall that a Hilbert space operator $x$ is said to have (uniform) multiplicity $n$ (or be homogeneous of order $n$) if the commutant $W^{*}(x)^{\prime}$ is a type $\text{I}_{n}$ algebra. See [Ha1, K1, A1, E] for some characterizations, extensions to representations, and applications, especially in regard to the problem of unitary equivalence. While not every Hilbert space operator has a multiplicity, the type decomposition of the von Neumann algebra $W^{*}(x)^{\prime}$ allows us to write $x$ as a direct sum of operators which do, plus an additional term whose commutant has no type I summand. 
This readily generalizes to operators in von Neumann algebras by considering the relative commutant $W^{*}(x)^{\prime}\cap\mathcal{M}$. (One loss is that $W^{*}(x)$ and $W^{*}(x)^{\prime}\cap\mathcal{M}$ may have different types. Multiplicity theory solves the problem of unitary equivalence for normal operators in $\mathcal{B}(\mathfrak{H})$, but it is insufficient for analogous questions in general von Neumann algebras. See [S1, Section 8] for more discussion of this, or [Bu] for a study of multiplicity as an (incomplete) invariant for abelian subalgebras of von Neumann algebras.) At least for $n$ finite, then, $n$-divisibility of $x$ is equivalent to saying that $W^{*}(x)^{\prime}\cap\mathcal{M}$ lacks a $\text{I}_{k}$ summand for all $k$ indivisible by $n$. Let us consider some basic questions about the set of $n$-divisible operators in $\mathcal{M}$. Can it be empty? Yes, if $\mathcal{M}$ has a nonzero type $\text{I}_{k}$ summand for some cardinal $k$ which is not a multiple of $n$ (by Lemma 1.2). The converse is also true, since otherwise the identity is $n$-divisible. Can it be all of $\mathcal{M}$? It cannot if $\mathcal{M}$ has separable predual, because then any maximal abelian *-subalgebra is generated by a single operator ([vN1]), and such an operator is not $n$-divisible. But there are large von Neumann algebras in which all elements are $n$-divisible; examples arise in Corollary 2.6. When is it dense in $\mathcal{M}$ with respect to the various topologies? This last question is the focus of Sections 2 and 3. See Table 1. Recall that a Hilbert space operator is said to be reducible if it has a reducing subspace, i.e., if it can be written as $y\oplus z$. This amounts to requiring that $W^{*}(x)^{\prime}$ contain a nontrivial projection, or by the double commutant theorem, that $W^{*}(x)\neq\mathcal{B}(\mathfrak{H})$. 
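As a minimal check of this characterization of reducibility (a routine verification, not taken from the paper): if $x=y\oplus z$ on $\mathfrak{H}_{1}\oplus\mathfrak{H}_{2}$, then the projection $p$ onto the first summand commutes with $x$, and conversely a nontrivial projection in the commutant yields such a decomposition.

```latex
% If x = y \oplus z, take p = 1 \oplus 0; then
px \;=\;
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\begin{pmatrix} y & 0 \\ 0 & z \end{pmatrix}
\;=\;
\begin{pmatrix} y & 0 \\ 0 & 0 \end{pmatrix}
\;=\; xp,
\qquad \text{so } p \in W^{*}(x)^{\prime} \setminus \{0,1\}.
% Conversely, a projection p \in W^*(x)' with 0 \neq p \neq 1 reduces x:
% x \;=\; (px)\big|_{p\mathfrak{H}} \,\oplus\, \bigl((1-p)x\bigr)\big|_{(1-p)\mathfrak{H}}.
```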
The eighth of Halmos’s ten famous operator theory problems from 1970 ([Ha3]) asked whether the reducible operators in $\mathcal{B}(\ell^{2})$ are norm dense. The affirmative answer is a consequence of Voiculescu’s noncommutative Weyl-von Neumann theorem ([V], see Theorem 3.5 below). Now divisible operators are apparently reducible – we have slightly upgraded our request of $W^{*}(x)^{\prime}$, from nontriviality to the containment of matrix units. Using Voiculescu’s theorem, we obtain several descriptions of the closures of the $n$-divisible operators in $\mathcal{B}(\ell^{2})$ (Theorems 3.8 and 3.9). An interesting consequence is that a Hilbert space operator which is a norm limit of divisible operators is actually a norm limit of unitary conjugates of a single divisible operator (Theorem 3.14). Finally we show that the set of all divisible operators in $\mathcal{B}(\ell^{2})$ is nowhere dense in the norm topology (Theorem 3.16). We derive this from more general statements, but its essence is simple: in any open ball in $\mathcal{B}(\ell^{2})$, there is an element $y$ such that $C^{*}(y)$ contains a rank one projection; this projection is not a norm limit of divisible operators, so $y$ cannot be either. Unfortunately we have made little headway on similar problems in other von Neumann algebras, where we lack an adequate substitute for Voiculescu’s theorem. (Giordano and Ng recently proved that a certain form of Voiculescu’s theorem is true only in injective algebras ([G]). This interesting result has not yet appeared in final form, and it seems not to have any direct implications for the questions of the present paper.) But if we work instead with the $\sigma$-strong*, $\sigma$-strong, or $\sigma$-weak topologies, our answers are rather complete and mostly determined by the type of the algebra. The most delicate case is $\sigma$-strong density in a $\text{II}_{1}$ algebra, which is closely related to the McDuff property (Theorem 2.8). 
This leads to a fairly simple (but poorly understood) numerical invariant which measures the “McDuffness” of a singly-generated $\text{II}_{1}$ factor (Remark 2.12). Seemingly unrelated is Question 1.3. When is the $\sigma$-weakly closed unitary orbit of an element in a von Neumann algebra convex? For $\mathcal{B}(\mathfrak{H})$, the answer is frequently no, even for self-adjoint operators ([AS]). But nonatomic factors exhibit different behavior: the answer is yes for all self-adjoint operators, and we know no operator for which the answer is no. One may view an affirmative answer as a noncommutative Lyapunov-type theorem ([AA]), so it is not surprising that noncommutative atomic measures are recalcitrant. We show in Section 4 that Question 1.3 has an affirmative answer whenever the element belongs to the norm closure of the $\aleph_{0}$-divisible operators (Theorem 4.2). Since the $\aleph_{0}$-divisible operators are not norm dense in a von Neumann algebra with a semifinite summand (Corollary 3.4), this does not give a full answer to Question 1.3. However, at present we do not know if they are dense in a type III algebra; if so, Question 1.3 has an affirmative answer for all operators in type III algebras. In fact this was our motivation for studying divisible operators in the first place. Let us review some of our assumptions and notations. All $C^{*}$-algebras, *-homomorphisms, and inclusions are assumed to be unital. Generic von Neumann algebras are denoted by $\mathcal{M}$ and $\mathcal{N}$, and like the $C^{*}$-algebras of this paper, they are not assumed to be represented on a Hilbert space. For this reason we use only intrinsic topologies: the norm, $\sigma$-strong*, $\sigma$-strong, and $\sigma$-weak topologies are symbolized by $\|\|$, $\sigma-s^{*}$, $\sigma-s$, and $\sigma-w$ respectively. (We frequently use that the $\sigma$-strong and $\sigma$-strong* topologies agree on finite algebras ([T1, Exercise V.2.5]).) 
We write $\mathcal{Z}(\mathcal{M})$ for the center and $\mathcal{U}(\mathcal{M})$ for the unitary group. As already mentioned, $\mathbb{M}_{n}$ stands for a factor of type $\text{I}_{n}$, but we prefer to write $\mathcal{B}(\ell^{2})$ for $\mathbb{M}_{\aleph_{0}}$. We let $\mathbb{F}_{n}$ be the free group on $n$ generators, and $L(G)$ be the von Neumann algebra generated by the left regular representation of the group $G$. The hyperfinite $\text{II}_{1}$ factor is denoted by $\mathcal{R}$. We write the normalized trace on a finite type I factor as “Tr”. For any faithful normal tracial state $\tau$ on a finite algebra $\mathcal{M}$, the Hilbert space $L^{2}(\mathcal{M},\tau)$ is obtained by endowing $\mathcal{M}$ with the inner product $\langle x\mid y\rangle=\tau(y^{*}x)$ and completing in the induced norm $\|x\|_{2}=\sqrt{\tau(x^{*}x)}$. It is easy to check that $L^{2}(\mathcal{M},\tau)$ is a contractive $\mathcal{M}-\mathcal{M}$ bimodule: $\|xyz\|_{2}\leq\|x\|\|y\|_{2}\|z\|$. This leads to the useful fact that on bounded subsets of $\mathcal{M}$, the $L^{2}$ norm determines the $\sigma$-strong topology ([Bl, Proposition III.2.2.17]). For $x\in\mathcal{M}$, the left (resp. right) support projection $s_{\ell}(x)$ (resp. $s_{r}(x)$) is defined to be the support of $xx^{*}$ (resp. $x^{*}x$). The unitary orbit is $\mathcal{U}(x)=\{uxu^{*}\mid u\in\mathcal{U}(\mathcal{M})\}$. If $x$ is normal and $E\subseteq\mathbb{C}$ is Borel, the corresponding spectral projection is $\chi_{E}(x)$. An isomorphism between operators in algebras $(x\in\mathcal{M})\cong(y\in\mathcal{N})$ means a *-isomorphism of algebras taking one operator to the other. The algebras may be omitted when they are understood or irrelevant. For $\mathcal{M}\cong\mathcal{N}\cong\mathcal{B}(\mathfrak{H})$, this is unitary equivalence, but in general it is a weaker equivalence relation. The following lemma collects some basic observations about divisibility. Lemma 1.4. 
Let $n$, $p$ and $r$ be cardinals satisfying $np=r$, and let $x,y,z$ be elements of von Neumann algebras, with $z$ a central projection in the same algebra as $x$. (1) $x\cong y\Rightarrow\text{\textcircled{$n$}}x\cong\text{\textcircled{$n$}}y$. (2) $\text{\textcircled{$n$}}(\text{\textcircled{$p$}}x)\cong\text{\textcircled{$r$}}x$. (So $r$-divisibility implies $n$-divisibility.) (3) If $x$ is $n$-divisible, so is every element of $W^{*}(x)$. (4) In any of the topologies under discussion, if $x$ is a limit of $n$-divisible (or divisible) operators, so is $zx$. (5) If $x$ is a norm limit of $n$-divisible (or divisible) operators, so is every element of $C^{*}(x)$. (6) If $x$ is a $\sigma$-strong* limit of $n$-divisible (or divisible) operators, then it is a $\sigma$-strong* limit of a uniformly bounded net of $n$-divisible (or divisible) operators. (7) If $x$ is a $\sigma$-strong* limit of $n$-divisible (or divisible) operators, so is every element of $W^{*}(x)$. Proof. We prove only the last two statements. Part (6) goes by the technique of the Kaplansky density theorem, as follows. Suppose $\{x_{\alpha}\}$ converges $\sigma$-strong* to $x\in\mathcal{M}$, with $x_{\alpha}$ $n_{\alpha}$-divisible and $\{e^{\alpha}_{ij}\}_{1\leq i,j\leq n_{\alpha}}$ matrix units in the relative commutant. Then $\{(\begin{smallmatrix}0&x_{\alpha}\\ x_{\alpha}^{*}&0\end{smallmatrix})\}$ converges $\sigma$-strongly to $(\begin{smallmatrix}0&x\\ x^{*}&0\end{smallmatrix})$. Consider the continuous truncation function $$f:\mathbb{R}\to[-\|x\|,\|x\|];\qquad t\mapsto\begin{cases}-\|x\|,&t\leq-\|x\|;\\ t,&-\|x\|\leq t\leq\|x\|;\\ \|x\|,&\|x\|\leq t.\end{cases}$$ Being continuous and bounded, this function is $\sigma$-strongly continuous from $\mathcal{M}_{sa}$ to the self-adjoint part of the closed ball of radius $\|x\|$, which it fixes pointwise.
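In a finite-dimensional model one can see the two properties of $f$ directly. The following numpy sketch (not from the paper; all names are ours) applies $f$ to a Hermitian matrix by functional calculus and checks that the result lands in the ball of radius $c$ and that matrices already in that ball are fixed:

```python
import numpy as np

def truncate(h, c):
    """Apply t -> clamp(t, -c, c) to a Hermitian matrix via functional calculus."""
    vals, vecs = np.linalg.eigh(h)
    return vecs @ np.diag(np.clip(vals, -c, c)) @ vecs.conj().T

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
h = a + a.T                                   # Hermitian, norm typically > 1
c = 1.0
fh = truncate(h, c)

assert np.linalg.norm(fh, 2) <= c + 1e-10     # lands in the c-ball
small = h / (np.linalg.norm(h, 2) + 1.0) * c  # already inside the ball
assert np.allclose(truncate(small, c), small) # fixed pointwise
```

The $\sigma$-strong continuity of $f$, which has no finite-dimensional content, is of course not visible here.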
(This is already contained in Kaplansky’s original paper [Ka1]; note that the strong and $\sigma$-strong topologies agree for a suitable representation of $\mathcal{M}$.) Thus $$f((\begin{smallmatrix}0&x_{\alpha}\\ x_{\alpha}^{*}&0\end{smallmatrix}))\overset{\sigma-s}{\to}f((\begin{smallmatrix}0&x\\ x^{*}&0\end{smallmatrix}))=(\begin{smallmatrix}0&x\\ x^{*}&0\end{smallmatrix}).$$ This implies that the 1,2-entries of $f((\begin{smallmatrix}0&x_{\alpha}\\ x_{\alpha}^{*}&0\end{smallmatrix}))$ converge $\sigma$-strong* to $x$. By construction these entries are bounded in norm by $\|x\|$. To finish the proof it suffices to note that these entries are $n_{\alpha}$-divisible, and that follows from the calculation $$\{f((\begin{smallmatrix}0&x_{\alpha}\\ x_{\alpha}^{*}&0\end{smallmatrix}))\}^{\prime}\supseteq\{(\begin{smallmatrix}0&x_{\alpha}\\ x_{\alpha}^{*}&0\end{smallmatrix})\}^{\prime}\supseteq\{I_{2}\otimes e_{ij}^{\alpha}\}.$$ For (7), suppose that $\{x_{\alpha}\}$ is a bounded net of divisible operators converging $\sigma$-strong* to $x$. Since multiplication is jointly $\sigma$-strong* continuous on bounded sets, for any noncommutative polynomial $p(x,x^{*})$ we have that $\{p(x_{\alpha},x_{\alpha}^{*})\}$ is a net of divisible operators converging $\sigma$-strong* to $p(x,x^{*})$. But these last are $\sigma$-strong* dense in $W^{*}(x)$. ∎ One would not expect that statement (5) can be strengthened to include all elements of $W^{*}(x)$, and in fact it cannot. The relevant example is postponed to Remark 3.10. We end this Introduction by pointing out some relations to the literature. In the proofs of Theorem 2.4(1,2), Proposition 2.5, and Theorem 3.6 below we use the fact (valid in the circumstances of these theorems, not in general) that an element is a limit of $n$-divisible operators if and only if it commutes arbitrarily well with some system of $n\times n$ matrix units. This is mentioned here for two reasons.
First, it suggests central sequences, which are a key tool in Section 2. And second, it has a variant which has proved useful in the $K$-theory of $C^{*}$-algebras: a $C^{*}$-algebra is approximately divisible if any finite set of elements commutes arbitrarily well (in norm) with the unit ball of some finite-dimensional $C^{*}$-subalgebra having no abelian summand ([BKR, Definition 1.2]). Our Theorem 2.8 is in the same vein as [BKR, Example 4.8]. Halpern ([Hal]) considered elements $x\in\mathcal{M}$ for which $W^{*}(x)^{\prime}\cap\mathcal{M}$ contains two complementary projections which are equivalent in $\mathcal{M}$, not necessarily equivalent in $W^{*}(x)^{\prime}\cap\mathcal{M}$. This weaker condition is a form of 2-diagonalizability, not 2-divisibility. Actually, Halpern’s condition is satisfied by every self-adjoint operator in a von Neumann algebra lacking finite type $\text{I}_{2k+1}$ summands ([Hal, Remark p.134]). His results were later extended by Kadison ([K2]) and Kaftal ([Kaf2]). The $C^{*}$-version is subtler ([GrP]). The reader may wonder about a connection to the perturbation theory of operator algebras. The typical setup, as introduced in [KK], has the unit balls of two operator algebras uniformly close (i.e., small Hausdorff distance); one may often deduce structural similarities between the algebras. This is too strong a hypothesis for the present paper, but the weaker notion of convergence in the Effros-Maréchal topology ([Ef]) has some relevance. We only wish to mention that in the language of Haagerup-Winsløw ([HW, Definition 2.1]), $x\in\mathcal{M}$ is a $\sigma$-strong* limit of $n$-divisible operators if and only if $W^{*}(x)$ is contained in the “$\liminf$” of a net of subalgebras of $\mathcal{M}$ whose relative commutants contain $\mathbb{M}_{n}$. Density questions for divisible operators in matrices were carefully studied in a long paper by von Neumann ([vN2]). 
Since this work seems to have gotten little attention for many years and is now being rediscovered by the free probability community, we state a version of its main result: There is $\varepsilon>0$ such that for every $r\in\mathbb{N}$ there is a contraction $x_{r}\in\mathbb{M}_{r}$ which is at least $\varepsilon$ away from the set of divisible operators in the $L^{2}(\mathbb{M}_{r},\text{Tr})$ norm. In an appropriate context this says that divisible operators do not get arbitrarily dense in the strong topology as the size of the matrix algebra grows. Other questions raised in [vN2] still remain open. Acknowledgments. We are grateful to Chuck Akemann for uncountably many useful comments. Thanks are also due to Ken Dykema for pointing out some relevant references. 2. Closures in operator topologies In this section we determine whether the $n$-divisible operators are dense in the operator topologies ($\sigma$-weak, $\sigma$-strong, $\sigma$-strong*), for various algebras and $n$. Proposition 2.1. In a properly infinite algebra $\mathcal{M}$, the $\aleph_{0}$-divisible operators are $\sigma$-strong* dense. Proof. Let $x\in\mathcal{M}$. Informally, we produce an $\aleph_{0}$-divisible approximant by taking a big corner of $x$ and copying it down the diagonal $\aleph_{0}$ times. Choose a sequence of projections $\{p_{n}\}$ increasing to 1, with $p_{n}\sim(1-p_{n})$. Each $p_{n}$ is a minimal projection in some copy of $\mathcal{B}(\ell^{2})$ unitally included in $\mathcal{M}$, so as in (1.3) there is an isomorphism $\theta_{n}$ from $\mathcal{B}(\ell^{2})\overline{\otimes}p_{n}\mathcal{M}p_{n}$ to $\mathcal{M}$ such that $$\theta_{n}(e_{11}\otimes y)=y,\qquad\forall y\in p_{n}\mathcal{M}p_{n}\subset\mathcal{M}.$$ Note that $$\theta_{n}(e_{11}^{\perp}\otimes p_{n})=p_{n}^{\perp}\overset{\sigma-s^{*}}{\to}0.$$ Now $\theta_{n}(1\otimes p_{n}xp_{n})$ is $\aleph_{0}$-divisible, since $1\otimes p_{n}xp_{n}$ is.
We compute $$\displaystyle x-\theta_{n}(1\otimes p_{n}xp_{n})$$ $$\displaystyle=x-\theta_{n}(e_{11}\otimes p_{n}xp_{n})-\theta_{n}(e_{11}^{\perp}\otimes p_{n}xp_{n})$$ $$\displaystyle=x-p_{n}xp_{n}-\theta_{n}(e_{11}^{\perp}\otimes p_{n}xp_{n})$$ $$\displaystyle=p_{n}^{\perp}x+xp_{n}^{\perp}-p_{n}^{\perp}xp_{n}^{\perp}-p_{n}^{\perp}\theta_{n}(1\otimes p_{n}xp_{n})p_{n}^{\perp},$$ and each term in the last expression converges $\sigma$-strong* to zero. ∎ Lemma 2.2. All normal operators in a type II (resp. III) algebra belong to the norm closure of the $n$-divisible operators, for any $n<\aleph_{0}$ (resp. $n\leq\aleph_{0}$). Proof. We say that $x\in\mathcal{M}$ is a simple operator if $x=\sum_{j=1}^{n}\lambda_{j}p_{j}$, where $\{\lambda_{j}\}$ are distinct scalars and $\{p_{j}\}$ are projections adding to 1. In this case it is easy to check that $W^{*}(x)^{\prime}\cap\mathcal{M}=\oplus p_{j}\mathcal{M}p_{j}$. It then follows from Lemma 1.2 that a simple operator in a type II (resp. III) algebra is $n$-divisible for any $n<\aleph_{0}$ (resp. $n\leq\aleph_{0}$). The spectral theorem guarantees that any normal operator is well-approximated in norm by simple operators, finishing the proof. ∎ Proposition 2.3. For $n<\aleph_{0}$, the $n$-divisible operators are $\sigma$-weakly dense in any $\text{II}_{1}$ von Neumann algebra. Proof. From Lemma 2.2, we know that any normal operator is norm-approximated by $n$-divisible operators. The conclusion follows from the fact that unitaries are $\sigma$-weakly dense in the unit ball of any nonatomic von Neumann algebra ([Dy, Theorem 1]). ∎ Halmos ([Ha2, Proposition 1]) gave an elementary proof that the reducible operators are not dense in a finite type I factor, so the smaller set of $n$-divisible operators is not dense either. Of course this also follows from the difficult von Neumann result stated at the end of the Introduction. The next theorem (plus Proposition 2.1) handles all density questions for type I algebras.
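The commutant computation for simple operators in the proof of Lemma 2.2 can be checked in a small matrix model. The following numpy sketch (ours, not from the paper) takes $x=1\cdot p_{1}+2\cdot p_{2}$ in $\mathbb{M}_{4}$ and verifies that an element commutes with $x$ exactly when it is block diagonal with respect to $p_{1},p_{2}$:

```python
import numpy as np

# Simple operator x = 1*p1 + 2*p2 in M_4, with p1, p2 complementary rank-2 projections.
p1 = np.diag([1.0, 1.0, 0.0, 0.0])
p2 = np.eye(4) - p1
x = 1.0 * p1 + 2.0 * p2

rng = np.random.default_rng(1)
y = rng.standard_normal((4, 4))

# y commutes with x  iff  y = p1*y*p1 + p2*y*p2 (block diagonal):
commutes = np.allclose(x @ y, y @ x)
block_diag = np.allclose(y, p1 @ y @ p1 + p2 @ y @ p2)
assert commutes == block_diag          # for a generic y, both are False

yb = p1 @ y @ p1 + p2 @ y @ p2         # compress y into the commutant
assert np.allclose(x @ yb, yb @ x)
```

This is the statement $W^{*}(x)^{\prime}\cap\mathcal{M}=\oplus p_{j}\mathcal{M}p_{j}$ in the smallest nontrivial case.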
Theorem 2.4. Let $2\leq n<\aleph_{0}$. (1) The $n$-divisible operators are not $\sigma$-strongly dense in a finite type I algebra. (2) The $n$-divisible operators are not norm dense in a type I algebra. (3) The $n$-divisible operators are $\sigma$-weakly dense in a finite type I algebra if and only if it is nonatomic and only has type $\text{I}_{k}$ summands for $k$ in some subset of $\{2n,3n,4n,\dots\}$. Proof. (1) Given a finite type I algebra $\mathcal{M}$, let $\tau$ be a normal tracial state with (central) support $z$. Recall that the $L^{2}$-norm $\|x\|_{2}=\sqrt{\tau(x^{*}x)}$ determines the $\sigma$-strong topology on bounded subsets of $z\mathcal{M}$. In the next paragraph we work entirely inside $z\mathcal{M}$ and show that the $n$-divisibles are not $\sigma$-strongly dense, which is sufficient by Lemma 1.4(4). Let $p$ be a nonzero abelian projection, and suppose $\{x_{\alpha}\}$ were $n$-divisibles converging $\sigma$-strongly to $p$. By Lemma 1.4(6) we may assume that $\{x_{\alpha}\}$ are uniformly bounded. Let $\{e^{\alpha}_{ij}\}_{1\leq i,j\leq n}$ be matrix units commuting with $x_{\alpha}$. Then $$\|[p,e^{\alpha}_{ij}]\|_{2}=\|[p-x_{\alpha},e^{\alpha}_{ij}]\|_{2}\leq 2\|p-x_{\alpha}\|_{2}\to 0$$ for any fixed $i$ and $j$. Using the fact that $p$ is abelian, we compute for any $i\neq j$, $$\displaystyle pe^{\alpha}_{ii}p$$ $$\displaystyle=pe^{\alpha}_{ij}e^{\alpha}_{jj}e^{\alpha}_{ji}p$$ $$\displaystyle\sim(pe^{\alpha}_{ij}p)(pe^{\alpha}_{jj}p)(pe^{\alpha}_{ji}p)$$ $$\displaystyle=(pe^{\alpha}_{ij}p)(pe^{\alpha}_{ji}p)(pe^{\alpha}_{jj}p)$$ $$\displaystyle\sim pe^{\alpha}_{ij}e^{\alpha}_{ji}e^{\alpha}_{jj}p$$ $$\displaystyle=0.$$ Here “$\sim$” represents an $L^{2}$ approximation which gets better as $\alpha$ increases; we conclude $pe^{\alpha}_{ii}p\to 0$ $\sigma$-strongly for any $i$. But then $$p=p\left(\sum_{i=1}^{n}e^{\alpha}_{ii}\right)p=\sum_{i=1}^{n}(pe^{\alpha}_{ii}p)\overset{\sigma-s}{\to}0,$$ a contradiction.
(2) The argument in the second paragraph of the proof of (1) also establishes (2), except that one uses the uniform norm instead of the $L^{2}$/$\sigma$-strong topology. (3) For the necessity we again appeal to Lemma 1.4(4), as follows. An algebra which fails to be nonatomic has a finite matrix algebra as a direct summand, and as mentioned before the theorem, the $n$-divisible operators are not dense in this summand. In a summand of type $\text{I}_{k}$, $k$ indivisible by $n$, there are no $n$-divisible operators at all. In a type $\text{I}_{n}$ summand, the $n$-divisible operators coincide with the center, which is already $\sigma$-weakly closed. It remains to establish sufficiency, i.e. $\sigma$-weak density of the $n$-divisibles in $\mathbb{M}_{mn}\otimes L^{\infty}(X,\mu)$ for any $m\geq 2$ and nonatomic $(X,\mu)$. We explain some reductions. It will be enough to show that any simple function $\sum(x_{j}\otimes\chi_{E_{j}})$ is a $\sigma$-weak limit of $n$-divisibles, as the simple functions are $\sigma$-weakly (even norm) dense. It will also be enough to show that any $x\otimes 1$ is a $\sigma$-weak limit, since this argument can then be applied to the sub-measure spaces $(E_{j},\mu\mid_{E_{j}})$ occurring in a simple function. Finally note that $L^{\infty}(X,\mu)$ can be written as the direct sum of $L^{\infty}$ algebras based on nonatomic probability spaces. (For details, see [T1, Proof of (iii) $\Rightarrow$ (i) in Theorem III.1.18]. Any finite measure can be rescaled to be a probability measure without changing the associated $L^{\infty}$ algebra.) It then suffices to work in $\mathcal{N}=\mathbb{M}_{mn}\otimes L^{\infty}(X,\mu)$, where $(X,\mu)$ is a nonatomic probability space. Next we claim that the linear span of the $n$-divisibles in $\mathbb{M}_{mn}$ is all of $\mathbb{M}_{mn}$. There are many ways to do this; here is a relatively short one. Let $p$ be a projection of rank $n$, and consider the linear span $L$ of the unitary orbit $\mathcal{U}(p)$. 
According to a result of Marcoux and Murphy ([MM, Theorem 3.3]), such a unitarily-invariant linear space is either contained in the center or contains all commutators. Since $p$ is not central, $L$ contains all commutators. Now write any $y\in\mathbb{M}_{mn}$ as $(y-\text{Tr}(y)1)+\text{Tr}(y)1$. The first summand has trace zero and so is a commutator ([Sho]). The second summand is a multiple of the identity, which is a sum of $m$ orthogonal conjugates of $p$. We conclude that both, and also $y$, belong to $L$. This establishes that $L=\mathbb{M}_{mn}$. But $L$ is contained in the linear span of the $n$-divisibles, which is therefore also all of $\mathbb{M}_{mn}$. We return to the goal of finding $n$-divisibles in $\mathcal{N}$ converging $\sigma$-weakly to $x\otimes 1$. Since $n$-divisibles are closed under scalar multiplication, by the preceding paragraph we can write $x$ as the average of some finite set of $n$-divisibles: $x=\frac{1}{J}\sum_{j=0}^{J-1}x_{j}$. The idea of our remaining step is to “spread the $x_{j}$ evenly” over $(X,\mu)$ so as to converge weakly to their average. For this we need to divide $X$ up into finer and finer pieces. Using the symbol “$\sqcup$” for disjoint union, recursively find subsets of $X$ indexed by finite strings in $\{0,1,\dots,J-1\}$, as follows: $$E_{\varnothing}=X;$$ $$E_{j_{1}j_{2}\dots j_{\ell-1}}=E_{j_{1}j_{2}\dots j_{\ell-1}0}\sqcup E_{j_{1}j_{2}\dots j_{\ell-1}1}\sqcup\dots\sqcup E_{j_{1}j_{2}\dots j_{\ell-1}(J-1)},$$ $$\mu(E_{j_{1}j_{2}\dots j_{\ell-1}j})=J^{-\ell}\quad(0\leq j\leq J-1).$$ Define $$y_{\ell}=\sum_{j=0}^{J-1}x_{j}\otimes\chi_{H^{\ell}_{j}},$$ where $H^{\ell}_{j}$ is the disjoint union of all sets $E$ labeled by strings of length $\ell$ ending in $j$. (To illustrate the idea, assume that $(X,\mu)$ is $([0,1],m)$.
Then one may take $E_{j_{1}j_{2}\dots j_{\ell}}$ to be the set of numbers whose expansion in base $J$ begins with the string $j_{1}j_{2}\dots j_{\ell}$, so that $H^{\ell}_{j}$ is the set of numbers whose $\ell$th digit is $j$. Numbers in $[0,1]$ with nonunique expansion constitute a null set and may be distributed arbitrarily.) Note that $y_{\ell}$ is $n$-divisible. We assert that $\{y_{\ell}\}$ converges $\sigma$-weakly to $x\otimes 1$. To prove this assertion, consider the trace $\tau=\text{Tr}\otimes(\int\cdot\>d\mu)$. In $L^{2}(\mathcal{N},\tau)$ one can verify the following computations, assuming $\ell\neq\ell^{\prime}$: $$\langle y_{\ell},y_{\ell^{\prime}}\rangle=\langle y_{\ell},x\otimes 1\rangle=\langle x\otimes 1,x\otimes 1\rangle=\frac{1}{J^{2}}\sum_{i,j}\text{Tr}(x_{i}^{*}x_{j});$$ $$\langle y_{\ell},y_{\ell}\rangle=\frac{1}{J}\sum_{j}\text{Tr}(x_{j}^{*}x_{j}).$$ It follows from these that $\{y_{\ell}-(x\otimes 1)\}$ is an orthogonal sequence of constant norm and thus converges weakly to 0 in the Hilbert space $L^{2}(\mathcal{N},\tau)$. In particular, for $h\in\mathcal{N}\subset L^{2}(\mathcal{N},\tau)$, (2.1) $$\langle y_{\ell}-(x\otimes 1),h\rangle\to 0\quad\text{and so}\quad\langle y_{\ell},h\rangle\to\langle x\otimes 1,h\rangle.$$ Since functionals of the form $\{\langle\cdot,h\rangle\mid h\in\mathcal{N}\}$ are norm dense in $\mathcal{N}_{*}$ ([T1, Theorem V.2.18]), (2.1) establishes the $\sigma$-weak convergence $y_{\ell}\to x\otimes 1$, finishing the proof. ∎ In the rest of this section $\mathcal{M}$ is a $\text{II}_{1}$ factor with separable predual and trace $\tau$, $2\leq n<\aleph_{0}$, and $\omega$ is a free ultrafilter on $\mathbb{N}$. We recall the construction of the tracial ultrapower $\mathcal{M}^{\omega}$, essentially due to Wright ([W, Theorems 2.6 and 4.1]; see [S4] for comments on the historical record).
Let $\mathcal{I}_{\omega}\subset\ell^{\infty}(\mathcal{M})$ be the two-sided ideal of sequences $(x_{k})$ with $x_{k}$ converging $\sigma$-strongly to 0 as $k\to\omega$. We define $\mathcal{M}^{\omega}$ to be the quotient $\ell^{\infty}(\mathcal{M})/\mathcal{I}_{\omega}$, shown to be a $\text{II}_{1}$ factor by Sakai ([Sa, Section II.7]). Assuming the continuum hypothesis, $\mathcal{M}^{\omega}$ and the inclusion $\pi:\mathcal{M}\hookrightarrow\mathcal{M}^{\omega}$ (as cosets of constant sequences) do not depend on the choice of $\omega$ ([GH, Theorem 3.2]). Proposition 2.5. For $x\in\mathcal{M}$, the following conditions are equivalent: (1) $x$ is a $\sigma$-strong limit of $n$-divisible operators; (2) $\pi(x)\in\mathcal{M}^{\omega}$ is $n$-divisible. Proof. (1) $\Rightarrow$ (2): Appealing to Lemma 1.4(6) and the $L^{2}(\mathcal{M},\tau)$ topology, there is a sequence $\{x_{k}\}$ of $n$-divisible operators converging $\sigma$-strongly to $x$. Then each $x_{k}$ commutes with a system of matrix units $\{e^{k}_{ij}\}_{1\leq i,j\leq n}$. It is immediate that $\{(e^{k}_{ij})_{k}\}_{1\leq i,j\leq n}$ are matrix units in $\mathcal{M}^{\omega}$. A straightforward computation shows that they commute with $\pi(x)$: (2.2) $$\|[x,e^{k}_{ij}]\|_{2}=\|[x-x_{k},e^{k}_{ij}]\|_{2}\leq 2\|e^{k}_{ij}\|\|x-x_{k}\|_{2}\overset{k\to\omega}{\longrightarrow}0.$$ (2) $\Rightarrow$ (1): Suppose that $\pi(x)$ commutes with matrix units in $\mathcal{M}^{\omega}$, and let the $ij$-unit have representing sequence $(f^{k}_{ij})_{k}$. For each $k$, the set $\{f^{k}_{ij}\}_{1\leq i,j\leq n}$ need not consist of matrix units in $\mathcal{M}$, but one may apply an argument of McDuff ([McD, Lemma 8]) to find true matrix units $\{e^{k}_{ij}\}_{1\leq i,j\leq n}$ such that $(f^{k}_{ij})_{k}=(e^{k}_{ij})_{k}$ for each $1\leq i,j\leq n$.
Now set (2.3) $$x_{k}=\sum_{m=1}^{n}e^{k}_{m1}xe^{k}_{1m}.$$ It is easy to check that $x_{k}$ converges to $x$ $\sigma$-strongly as $k\to\omega$: $$\|x-x_{k}\|_{2}=\|\sum xe^{k}_{m1}e^{k}_{1m}-\sum e^{k}_{m1}xe^{k}_{1m}\|_{2}=\|\sum[x,e^{k}_{m1}]e^{k}_{1m}\|_{2}\to 0.$$ (Compare [McD, Proof of Lemma 1].) By construction $x_{k}$ commutes with $\{e^{k}_{ij}\}_{1\leq i,j\leq n}$ and so is $n$-divisible. ∎ The next result produces large von Neumann algebras in which every element is $n$-divisible, as promised in the Introduction. (The condition on $\mathcal{M}$ is neither universal nor impossible, as we will see momentarily.) Corollary 2.6. The following conditions are equivalent for $\mathcal{M}$: (1) the $n$-divisible operators are $\sigma$-strongly dense in $\mathcal{M}$; (2) every element of $\pi(\mathcal{M})$ is $n$-divisible in $\mathcal{M}^{\omega}$; (3) every element of $\mathcal{M}^{\omega}$ is $n$-divisible. Proof. Proposition 2.5 gives the equivalence of (1) and (2), and (3) is clearly stronger than (2). So let us assume (1) and show (3). Let $(x_{k})$ represent an element of $\mathcal{M}^{\omega}$. For a fixed $k$, $x_{k}$ is a $\sigma$-strong limit of $n$-divisible operators, so as in (2.2) we may find matrix units $\{e_{ij}^{k}\}$ with $$\|[x_{k},e_{ij}^{k}]\|_{2}<\frac{1}{k},\qquad\forall 1\leq i,j\leq n.$$ Then $\{(e_{ij}^{k})\}_{1\leq i,j\leq n}$ are matrix units in $\mathcal{M}^{\omega}$ which commute with $(x_{k})$. ∎ We need to review some more terminology. A generator of a von Neumann algebra is $x\in\mathcal{N}$ satisfying $\mathcal{N}=W^{*}(x)$. The generator problem asks if every von Neumann algebra with separable predual has a generator; the only unresolved cases are certain $\text{II}_{1}$ factors, including in particular $L(\mathbb{F}_{3})$. A recent survey of the generator problem is [Sh]. We say that a $\text{II}_{1}$ factor $\mathcal{M}$ is McDuff if $\mathcal{M}\cong\mathcal{M}\overline{\otimes}\mathcal{R}$.
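The averaging formula (2.3) can be checked in a small matrix model. In the numpy sketch below (ours, not from the paper) the matrix units are exact, so the commutation is exact rather than approximate: for matrix units $e_{ij}=E_{ij}\otimes I$ in $\mathbb{M}_{2}\otimes\mathbb{M}_{2}$ and arbitrary $x$, the operator $x_{k}=\sum_{m}e_{m1}xe_{1m}$ commutes with every $e_{ij}$ and so is 2-divisible:

```python
import numpy as np

n, d = 2, 2                      # matrix units of M_2 inside M_2 ⊗ M_2 = M_4
E = [[np.zeros((n, n)) for _ in range(n)] for _ in range(n)]
for i in range(n):
    for j in range(n):
        E[i][j][i, j] = 1.0
e = [[np.kron(E[i][j], np.eye(d)) for j in range(n)] for i in range(n)]

rng = np.random.default_rng(2)
x = rng.standard_normal((n * d, n * d))

# x_k = sum_m e_{m1} x e_{1m}, as in (2.3) (0-based indices here)
xk = sum(e[m][0] @ x @ e[0][m] for m in range(n))

for i in range(n):
    for j in range(n):
        assert np.allclose(xk @ e[i][j], e[i][j] @ xk)   # xk is n-divisible
```

Concretely, $x_{k}$ here is $I\otimes A$ with $A$ the upper-left block of $x$, which makes the commutation with $E_{ij}\otimes I$ transparent.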
The main result of [McD] can then be formulated as follows. Theorem 2.7. $($[McD, Theorem 3 and Lemma 7]$)$ $\mathcal{M}$ is McDuff if and only if $\pi(\mathcal{M})^{\prime}\cap\mathcal{M}^{\omega}$ is noncommutative, and in this case $\pi(\mathcal{M})^{\prime}\cap\mathcal{M}^{\omega}$ is type $\text{II}_{1}$. Theorem 2.8. Let $2\leq n<\aleph_{0}$. If $\mathcal{M}$ is McDuff, the $n$-divisible operators are $\sigma$-strongly dense. For singly-generated $\mathcal{M}$, the converse holds as well. Proof. If $\mathcal{M}$ is McDuff, we know from Theorem 2.7 that $\pi(\mathcal{M})^{\prime}\cap\mathcal{M}^{\omega}$ is type $\text{II}_{1}$. Then for any $x\in\mathcal{M}$, $$W^{*}(\pi(x))^{\prime}\cap\mathcal{M}^{\omega}\supseteq\pi(\mathcal{M})^{\prime}\cap\mathcal{M}^{\omega}\supset\mathbb{M}_{n},$$ and the conclusion follows from Proposition 2.5. If $\mathcal{M}=W^{*}(x)$ and $x$ is a $\sigma$-strong limit of $n$-divisibles, then Proposition 2.5 again implies $$\pi(\mathcal{M})^{\prime}\cap\mathcal{M}^{\omega}=W^{*}(\pi(x))^{\prime}\cap\mathcal{M}^{\omega}\supset\mathbb{M}_{n}.$$ By Theorem 2.7, the noncommutativity of $\pi(\mathcal{M})^{\prime}\cap\mathcal{M}^{\omega}$ implies that $\mathcal{M}$ is McDuff. ∎ Remark 2.9. Here is another way to see that $n$-divisible operators are $\sigma$-strongly dense in a McDuff factor $\mathcal{M}$. Let $\mathcal{M}_{k}=\mathcal{M}\otimes\mathbb{M}_{2^{k}}\subset\mathcal{M}\overline{\otimes}\mathcal{R}\cong\mathcal{M}$ be a sequence of increasing subfactors with $\sigma$-strongly dense union. Note that each $\mathcal{M}_{k}$ has relative commutant $\cong\mathcal{R}$, so they consist entirely of $n$-divisible operators. Let $E_{k}$ be the trace-preserving conditional expectation from $\mathcal{M}$ onto $\mathcal{M}_{k}$. A simple martingale theorem (first proved in [U, Corollary 2.1]) shows that $E_{k}(x)\overset{\sigma-s}{\to}x$, for any $x\in\mathcal{M}$.
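The trace-preserving conditional expectation in Remark 2.9 has a simple finite-dimensional analogue: the expectation of $\mathbb{M}_{2}\otimes\mathbb{M}_{2}$ onto $\mathbb{M}_{2}\otimes I$ is the normalized partial trace over the second factor. The numpy sketch below (ours, not from the paper) verifies the defining properties of a trace-preserving conditional expectation in this model:

```python
import numpy as np

d = 2

def expect(x):
    """Trace-preserving conditional expectation M_2 ⊗ M_2 → M_2 ⊗ I
    (normalized partial trace over the second tensor factor)."""
    x4 = x.reshape(d, d, d, d)                            # indices (i, a, j, b)
    return np.kron(np.trace(x4, axis1=1, axis2=3) / d, np.eye(d))

rng = np.random.default_rng(3)
x = rng.standard_normal((d * d, d * d))
a = np.kron(rng.standard_normal((d, d)), np.eye(d))       # elements of M_2 ⊗ I
b = np.kron(rng.standard_normal((d, d)), np.eye(d))

tau = lambda y: np.trace(y) / (d * d)                     # normalized trace on M_4
assert np.isclose(tau(expect(x)), tau(x))                 # trace-preserving
assert np.allclose(expect(a @ x @ b), a @ expect(x) @ b)  # bimodule property
assert np.allclose(expect(a), a)                          # fixes the subalgebra
```

The martingale convergence $E_{k}(x)\to x$ itself, of course, only appears in the infinite tower.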
The reader will admit the existence of McDuff factors: tensor any finite factor with $\mathcal{R}$. It may be less clear that there are $\text{II}_{1}$ factors in which the $n$-divisibles ($2\leq n<\aleph_{0}$) are not $\sigma$-strongly dense, so we now provide a variety of examples. Note that this is not intended as a complete list. The reader is referred to the sources for explanation of undefined terms. Corollary 2.10. For any $n>1$, the $n$-divisible operators are not $\sigma$-strongly dense in any of the following $\text{II}_{1}$ factors: (1) $L(SL(k,\mathbb{Z}))$ ($k\geq 3$ and odd) and $L(PSL(k,\mathbb{Z}))$ ($k\geq 4$ and even); (2) tensor products of two $\text{II}_{1}$ factors, neither McDuff and one with property $T$; (3) $L^{\infty}(X,\mu)\rtimes G$, where $(X,\mu)$ is a nonatomic probability space, $G$ is a countable discrete non-inner amenable group, and the action is free, ergodic, and measure-preserving; (4) factors which have $\Gamma$ but are not McDuff; (5) $L(\mathbb{F}_{m})$ ($m\geq 2$). Proof. For the first four classes, we simply explain why the factor is singly-generated and not McDuff. The conclusion then follows from Theorem 2.8. (1): They have property $T$ and so are not McDuff by [C2, CJ]. They are singly-generated by [GS]. (2): They are not McDuff by [M, Corollary 3.7]. Any tensor product of $\text{II}_{1}$ factors is singly-generated by [GP]. (3): They are not McDuff by [M, Proposition 3.9]. Any $\text{II}_{1}$ factor with a Cartan subalgebra is singly-generated by [P1]. (4): Factors with $\Gamma$ are singly-generated by [GP]. The first example of a non-McDuff factor with $\Gamma$ was constructed in [DL, Proposition 22]. (5): $L(\mathbb{F}_{m})$ is not known to be singly-generated for $m\geq 3$, so we use Proposition 2.5 instead. Let $\mathbb{F}_{m}$ have generators $\{g_{j}\}_{j=1}^{m}$, so that $L(\mathbb{F}_{m})$ has generators $\{\lambda_{g_{j}}\}$. 
Murray and von Neumann showed that these factors do not have $\Gamma$ and so are not isomorphic to $\mathcal{R}$ ([MvN, Section VI.6.2]). Their original estimates can be adapted to show that for a unitary $u\in L(\mathbb{F}_{m})$, (2.4) $$\|u-\tau(u)1\|_{2}\leq 14\max\{\|[u,\lambda_{g_{1}}]\|_{2},\|[u,\lambda_{g_{2}}]\|_{2}\}.$$ (See [T2, Equation XIV(3.3)].) This implies that any sequence of unitaries which asymptotically commutes with $\lambda_{g_{1}}$ and $\lambda_{g_{2}}$ must be equivalent to a sequence of scalars. Note that $i\text{Log}(\lambda_{g_{j}})$ is self-adjoint, and set $$x=i\text{Log}(\lambda_{g_{1}})+\text{Log}(\lambda_{g_{2}}).$$ We then have $$W^{*}(\pi(x))^{\prime}\cap L(\mathbb{F}_{m})^{\omega}=W^{*}(\pi(\lambda_{g_{1}}),\pi(\lambda_{g_{2}}))^{\prime}\cap L(\mathbb{F}_{m})^{\omega}=\mathbb{C}\nsupseteq\mathbb{M}_{n}.\qed$$ Let $2\leq n<\aleph_{0}$, and consider the following conditions on a $\text{II}_{1}$ factor $\mathcal{M}$ with separable predual: (1) $\mathcal{M}$ is McDuff; (2) for every singly-generated subalgebra $\mathcal{N}\subseteq\mathcal{M}$, $\pi(\mathcal{N})^{\prime}\cap\mathcal{M}^{\omega}$ is type $\text{II}_{1}$; ($3_{n}$) for every singly-generated subalgebra $\mathcal{N}\subseteq\mathcal{M}$, $\pi(\mathcal{N})^{\prime}\cap\mathcal{M}^{\omega}$ unitally contains $\mathbb{M}_{n}$. Each of these conditions implies its successor, and the last is equivalent to $\sigma$-strong density of the $n$-divisible operators in $\mathcal{M}$. It seems natural to call condition (2) “locally McDuff.” Problem 2.11. Is either of the implications (2) $\Rightarrow$ (1), ($3_{n}$) $\Rightarrow$ (2) valid? Both of these implications would follow from an affirmative answer to the generator problem. This means that to disprove one of them, one would have to establish the existence of a von Neumann algebra with separable predual which is not singly-generated.
In posing this problem, we are really asking if either implication can be proved directly, without resolution of the generator problem. For $\mathcal{N}\subset\mathcal{M}$, the algebra $\pi(\mathcal{N})^{\prime}\cap\mathcal{M}^{\omega}$ has received occasional attention in the literature; see [C1, Lemma 2.6], [M, Theorem 3.5], [P2, Lemma 3.3.2], and [FGL, Theorems 3.5 and 4.7]. It should not be confused with $\pi(\mathcal{M})^{\prime}\cap\mathcal{N}^{\omega}$, which was studied by Bisch ([B]). For $\mathcal{N}$ a factor, he showed that the latter algebra is noncommutative exactly when $[\mathcal{N}\subset\mathcal{M}]\cong[\mathcal{N}\overline{\otimes}\mathcal{R}\subset\mathcal{M}\overline{\otimes}\mathcal{R}]$; in this case the inclusion is said to be McDuff. Remark 2.12. Let $\mathcal{M}$ be a singly-generated $\text{II}_{1}$ factor with separable predual. By Theorem 2.8, $\mathcal{M}$ is McDuff if and only if the 2-divisible operators are $\sigma$-strongly dense. There are a variety of ways to quantify this and so obtain a numerical invariant which measures “how far $\mathcal{M}$ is from being McDuff.” One way would be to find the supremum of distances (in the $\|\|_{2}$-metric) from $x$ to the 2-divisible operators, where $x$ runs over the unit ball. Here is a related approach. As shown in the proof of Proposition 2.5, if $x$ is strongly approximated by $2$-divisible operators, then approximants may be “built out of $x$” in the sense of (2.3). So we may instead ask for the distance to operators of the form $v^{*}vxv^{*}v+vxv^{*}$, with $v$ a partial isometry satisfying $v^{*}v+vv^{*}=1$. This gives the invariant (2.5) $$\sup_{x\in\mathcal{M}_{1}}\inf_{\begin{subarray}{c}v=vv^{*}v\\ v^{*}v+vv^{*}=1\end{subarray}}\|x-(vxv^{*}+v^{*}vxv^{*}v)\|_{2},$$ which is zero if and only if $\mathcal{M}$ is McDuff. At this point the author knows nothing interesting about this quantity when it is nonzero.
One may define similar invariants based on $n$-divisibles for other $n$; the author also does not know how these numbers depend on $n$. 3. Closures in the norm topology The ostensible goal of this section is to describe the norm closure of the $n$-divisible operators in various von Neumann algebras, but our results are rather incomplete for algebras other than $\mathcal{B}(\mathfrak{H})$. To some extent this deficiency is due to the lack of a generalization of Voiculescu’s theorem – see [H5, DH, S1, S4] for discussion and partial results. In $\mathcal{B}(\mathfrak{H})$, at least, we arrive at clean descriptions and ultimately show that the $n$-divisibles are nowhere dense. A first hope might be to imitate the techniques of the previous section. There we saw that for singly-generated $\text{II}_{1}$-factors, $\sigma$-strong density of $n$-divisibles ($2\leq n<\aleph_{0}$) is equivalent to the existence of noncommuting central sequences. Central sequences of matrix units give a “universal formula” for producing $n$-divisible $\sigma$-strong approximants out of any element, as in (2.3). Can a similar construction work in the norm topology? The most natural setup is this. For $\mathcal{M}$ a von Neumann algebra, the quotient $(\ell^{\infty}(\mathcal{M})/c_{0}(\mathcal{M}))$ is a $C^{*}$-algebra. Let $\sigma:\mathcal{M}\hookrightarrow(\ell^{\infty}(\mathcal{M})/c_{0}(\mathcal{M}))$ be the inclusion as (cosets of) constant sequences. Then the “central sequence algebra” is $\sigma(\mathcal{M})^{\prime}\cap(\ell^{\infty}(\mathcal{M})/c_{0}(\mathcal{M}))$. If it were to contain $\mathbb{M}_{n}$ unitally, one could mimic (2.3) and use the matrix units to build $n$-divisible norm approximants for any operator. Unfortunately, this sort of central sequence algebra is always commutative. Proposition 3.1. 
Under the circumstances of the preceding paragraph, $$\sigma(\mathcal{M})^{\prime}\cap(\ell^{\infty}(\mathcal{M})/c_{0}(\mathcal{M}))=\ell^{\infty}(\mathcal{Z}(\mathcal{M}))/c_{0}(\mathcal{Z}(\mathcal{M})).$$ Proof. If $(x_{n})$ represents an element of the left-hand side, then $$\|(\text{ad }x_{n})(y)\|=\|[x_{n},y]\|\to 0,\qquad\forall y\in\mathcal{M}.$$ By [El, Theorem 3.4], a sequence of derivations converges in the point-norm topology only if it converges in the norm topology, so $\|\text{ad }x_{n}\|\to 0$. We also have that $\|\text{ad }x_{n}\|=2\text{ dist}(x_{n},\mathcal{Z}(\mathcal{M}))$ (proved independently in [Ga] and [Z]). Therefore $(x_{n})$ can also be represented by a sequence from $\mathcal{Z}(\mathcal{M})$. ∎ Remark 3.2. In contrast to Proposition 3.1, Hadwin’s asymptotic double commutant theorem ([H3]) implies that for any separable subset $S\subset\mathcal{B}(\ell^{2})$, $\sigma(S)^{\prime}\cap(\ell^{\infty}(\mathcal{B}(\ell^{2}))/c_{0}(\mathcal{B}(\ell^{2})))$ is noncommutative. Since infinite-dimensional von Neumann algebras are not norm separable and therefore never singly-generated (or even countably-generated) as $C^{*}$-algebras, Proposition 3.1 cannot be used to preclude the density of $n$-divisibles in the manner of Theorem 2.8. The symbol $\mathcal{K}$ will denote the closed *-ideal generated by the finite projections in any von Neumann algebra under discussion. Proposition 3.3. Let $\mathcal{M}$ be a properly infinite semifinite von Neumann algebra and $k\in\mathcal{K}$. Then the distance from $k$ to the $\aleph_{0}$-divisible operators is $\frac{\|k\|}{2}$. In particular, if $k$ is a norm limit of $\aleph_{0}$-divisible operators, then $k=0$. Proof. Let $y$ be $\aleph_{0}$-divisible, and fix $\varepsilon>0$. Then $p=\chi_{[\|y\|-\varepsilon,\|y\|]}(|y|)$ is also $\aleph_{0}$-divisible and therefore infinite. In order to mesh cleanly with the paper [Kaf1], represent $\mathcal{M}$ faithfully on a Hilbert space $\mathfrak{H}$.
By [Kaf1, Theorem 1.3(d)], $k$ is not bounded below on $p\mathfrak{H}$, so there is a unit vector $\xi\in p\mathfrak{H}$ with $\|k\xi\|<\varepsilon$. Since $y$ is bounded below on $p\mathfrak{H}$ by $\|y\|-\varepsilon$, $$\|(y-k)\xi\|\geq\|y\xi\|-\|k\xi\|>\|y\|-2\varepsilon.$$ Now $\varepsilon$ is arbitrary, so $\|y-k\|\geq\|y\|$. Then $$\|y-k\|\geq\|y\|\geq\|k\|-\|y-k\|\quad\Rightarrow\quad\|y-k\|\geq\frac{\|k\|}{2}.$$ This shows that the distance from $k$ to the $\aleph_{0}$-divisibles is $\geq\frac{\|k\|}{2}$. For the opposite inequality, take any $\varepsilon>0$. By definition, $k$ is approximated within $\varepsilon$ by an operator $f$ whose supports are finite; let $q=s_{\ell}(f)\vee s_{r}(f)$ (which is finite). Then $$\|k-qkq\|\leq\|k-f\|+\|f-qkq\|=\|k-f\|+\|q(k-f)q\|<2\varepsilon.$$ By the finiteness of $q$, we can find a projection $r\geq q$ such that $r\sim r^{\perp}$. Note that $$\|k-rkr\|=\|k-qkq+r(qkq-k)r\|\leq\|k-qkq\|+\|r(qkq-k)r\|\leq 4\varepsilon.$$ Write $1=\sum_{j=1}^{\infty}r_{j}$, with $r=r_{1}$. Let $\{v_{j}\}$ be partial isometries with $v_{1}=r_{1}$ and $v_{j}v_{j}^{*}=r_{j}$, $v_{j}^{*}v_{j}=r_{1}$ for $j\geq 2$. Finally, consider the $\aleph_{0}$-divisible operator $\sum\frac{v_{j}kv_{j}^{*}}{2}$. We have $$\displaystyle\left\|k-\left(\sum\frac{v_{j}kv_{j}^{*}}{2}\right)\right\|$$ $$\displaystyle\leq\|k-rkr\|+\left\|rkr-\left(\sum\frac{v_{j}kv_{j}^{*}}{2}\right)\right\|$$ $$\displaystyle\leq 4\varepsilon+\left\|\frac{rkr}{2}-\left(\sum_{j=2}^{\infty}v_{j}\left(\frac{rkr}{2}\right)v_{j}^{*}\right)\right\|$$ $$\displaystyle\overset{(*)}{=}4\varepsilon+\frac{\|rkr\|}{2}$$ $$\displaystyle\leq 4\varepsilon+\frac{\|rkr-k\|+\|k\|}{2}$$ $$\displaystyle\leq 6\varepsilon+\frac{\|k\|}{2}.$$ (The equality $(*)$ is justified by noting that the summation in the previous expression is an orthogonal sum of operators unitarily conjugate to $\frac{rkr}{2}$.)
∎ Since there are no $\aleph_{0}$-divisible operators in a finite algebra, we deduce Corollary 3.4. The $\aleph_{0}$-divisible operators are not norm dense in any semifinite algebra. At this point all entries of Table 1 have been justified. We next obtain much more specific information for infinite type I factors, writing $0_{\infty}$ for the zero operator on $\ell^{2}$. We need to recall Voiculescu’s theorem and the relevant terminology. Two operators $x,y$ are said to be approximately equivalent when there is a sequence of unitaries $\{u_{n}\}$ with $u_{n}xu_{n}^{*}\to y$ in norm. (Sometimes this term implies also that the differences $u_{n}xu_{n}^{*}-y$ are compact, but we do not make this requirement here.) Similarly, two nondegenerate representations $\rho,\sigma$ of a $C^{*}$-algebra $\mathcal{A}$ are approximately equivalent when there is a net of unitaries $\{u_{\alpha}\}$ with $(\text{Ad }u_{\alpha})\circ\rho\to\sigma$ in the point-norm topology. We denote approximate equivalence by $\sim_{\text{a.e.}}$. It is clear that an approximate equivalence can be multiplied by an arbitrary cardinal, in the sense of Lemma 1.4(1). Note that when $\mathcal{A}=C^{*}(x)$, (3.1) $$\rho\sim_{\text{a.e.}}\sigma\iff\rho(x)\sim_{\text{a.e.}}\sigma(x).$$ Notation. Let $\rho:\mathcal{A}\to\mathcal{B}(\mathfrak{H})$ be a representation of a separable $C^{*}$-algebra on a separable Hilbert space. The set $$\mathcal{I}_{\rho}\overset{\text{def}}{=}\rho^{-1}(\rho(\mathcal{A})\cap\mathcal{K})$$ is an ideal of $\mathcal{A}$. This allows us to write $\rho=\rho^{1}\oplus\rho^{2}$, where $\rho^{1}$ is the restriction of $\rho$ to the reducing subspace $\rho(\mathcal{I}_{\rho})\mathfrak{H}$, sometimes called the essential part of $\rho$ ([A2, p.341]). Of course $\rho^{1}$ or $\rho^{2}$ may be absent from this decomposition. Theorem 3.5.
$($[V, Theorem 1.5]$)$ Let $\rho_{j}$ ($j=1,2$) be representations of a separable $C^{*}$-algebra $\mathcal{A}$ on a separable Hilbert space $\mathfrak{H}$. Then $\rho_{1}\sim_{\text{a.e.}}\rho_{2}$ if and only if (i) $\ker\rho_{1}=\ker\rho_{2}$, (ii) $\mathcal{I}_{\rho_{1}}=\mathcal{I}_{\rho_{2}}$, and (iii) $\rho_{1}^{1}$ and $\rho_{2}^{1}$ are unitarily equivalent when restricted to this ideal. Theorem 3.6. Let $\mathfrak{H}$ be an infinite-dimensional Hilbert space, $n<\aleph_{0}$, and $k\in\mathcal{K}\subset\mathcal{B}(\mathfrak{H})$. Then $k$ is a norm limit of $n$-divisibles if and only if $k\oplus 0_{\infty}$ is $n$-divisible. Proof. Assume that $k$ is a norm limit of $n$-divisibles, and let $k\oplus 0_{\infty}=\text{Re}(k\oplus 0_{\infty})+i\text{Im}(k\oplus 0_{\infty})$ be the decomposition into real and imaginary parts. Each is self-adjoint and compact, so we may list their (finitely or infinitely many) nonzero eigenvalues as follows, including multiplicity. $$\text{Re}(k\oplus 0_{\infty}):\qquad\lambda_{-1}\leq\lambda_{-2}\leq\dots<0<\dots\leq\lambda_{2}\leq\lambda_{1}$$ $$\text{Im}(k\oplus 0_{\infty}):\qquad\mu_{-1}\leq\mu_{-2}\leq\dots<0<\dots\leq\mu_{2}\leq\mu_{1}$$ We further set $$p_{j}=\chi_{\{\lambda_{j}\}}(\text{Re}(k\oplus 0_{\infty}));\qquad q_{j}=\chi_{\{\mu_{j}\}}(\text{Im}(k\oplus 0_{\infty})).$$ We also have that $k\oplus 0_{\infty}$ is a norm limit of $n$-divisibles. (Just add the summand $0_{\infty}$ onto the $n$-divisible operators converging to $k$.) Let $\{e^{(m)}_{ij}\}_{i,j=1}^{n}$ be matrix units commuting with the $m$th operator in the sequence. It follows as in (2.2) that $\|[e_{ij}^{(m)},k\oplus 0_{\infty}]\|\to 0$. By repeated use of the triangle inequality, $\|[e_{ij}^{(m)},x]\|\to 0$ for every $x\in C^{*}(k\oplus 0_{\infty})$, in particular the $p_{j}$ and $q_{j}$ considered above.
For each $m$, $\{p_{1}e^{(m)}_{ij}p_{1}\}$ is an $n^{2}$-tuple in the unit ball of the finite-dimensional space $p_{1}\mathcal{B}(\mathfrak{H}\oplus\ell^{2})p_{1}$. Pick a subsequence (still denoted by $m$) which is convergent. Because $\|[e_{ij}^{(m)},p_{1}]\|\overset{m\to\infty}{\to}0$, $$(\lim_{m}p_{1}e^{(m)}_{ij}p_{1})(\lim_{m}p_{1}e^{(m)}_{kl}p_{1})=\delta_{jk}(\lim_{m}p_{1}e^{(m)}_{il}p_{1}).$$ Therefore $\{\lim p_{1}e^{(m)}_{ij}p_{1}\}_{i,j=1}^{n}$ is a set of matrix units in $p_{1}\mathcal{B}(\mathfrak{H}\oplus\ell^{2})p_{1}$. Now refine the subsequence so that $\{p_{-1}e^{(m)}_{ij}p_{-1}\}$ is a convergent $n^{2}$-tuple in $p_{-1}\mathcal{B}(\mathfrak{H}\oplus\ell^{2})p_{-1}$ (with matrix units as limits). One may continue refining for $q_{1}$, then $q_{-1}$, then $p_{2},p_{-2},q_{2},q_{-2}$, etc. Extract a diagonal subsequence, still calling the index $m$. Let $r$ be the supremum of all $p_{j}$ and $q_{j}$, so that $r^{\perp}$ is the infinite-rank projection onto the nullspace $\ker(k\oplus 0_{\infty})\cap\ker(k^{*}\oplus 0_{\infty})$. For each $i$ and $j$, the strong limit of $re^{(m)}_{ij}r$ exists by the previous paragraph, and these limits form matrix units for $r\mathcal{B}(\mathfrak{H}\oplus\ell^{2})r$. They fix all nonzero eigenspaces of $\text{Re}(k\oplus 0_{\infty})$ and $\text{Im}(k\oplus 0_{\infty})$, so they commute with $r(k\oplus 0_{\infty})r$ and $r(k\oplus 0_{\infty})^{*}r$. It follows that $r(k\oplus 0_{\infty})r$ is $n$-divisible. Now choose any matrix units for $r^{\perp}\mathcal{B}(\mathfrak{H}\oplus\ell^{2})r^{\perp}$, and add them to the corresponding matrix units for $r\mathcal{B}(\mathfrak{H}\oplus\ell^{2})r$ constructed above. This produces matrix units for $\mathcal{B}(\mathfrak{H}\oplus\ell^{2})$ which commute with $k\oplus 0_{\infty}$, completing the proof of the forward implication. The opposite implication is trivial when $\mathfrak{H}$ has uncountable dimension, as then $k$ and $k\oplus 0_{\infty}$ are unitarily equivalent. (Remember that $k$ is compact.)
For separable $\mathfrak{H}$ we claim $k\sim_{\text{a.e.}}k\oplus 0_{\infty}$, so that $k$ is a norm limit of $n$-divisible unitary conjugates of $k\oplus 0_{\infty}$. To prove the claim, let $\sigma$ be the representation of $C^{*}(k)$ on $\ell^{2}$ with $\sigma(1)=1$ and $\sigma(k)=0$. Apply Voiculescu’s theorem to conclude $\text{id}\sim_{\text{a.e.}}\text{id}\oplus\sigma$ as representations of $C^{*}(k)$. Then use (3.1). ∎ Remark 3.7. There is no variation of Theorem 3.6 for nonatomic factors which is both useful and true. Here are examples which show that the compactness of $k$ is indispensable for both of its implications. • Take $k$ to be a (noncompact) projection of corank 1 in $\mathcal{B}(\ell^{2})$, so that $k\oplus 0_{\infty}$ is $n$-divisible. If $k$ were a norm limit of $n$-divisibles, then $1-k$ would be, too. (Just subtract the approximating sequence from $1$.) But $1-k$ is compact and is shown not to be a norm limit of $n$-divisibles by the theorem. • Take $k\in\mathcal{B}(\ell^{2}(\mathbb{Z}))$ to be the (noncompact) bilateral shift. It generates a maximal abelian *-subalgebra, so $k\oplus 0_{\infty}$ is not $n$-divisible. But the argument in Proposition 2.2 shows that $k$ is a norm limit of $n$-divisibles, as $k$ can be approximated by simple operators whose spectral projections are all infinite. Theorem 3.8. Let $x\in\mathcal{B}(\ell^{2})$, $n<\aleph_{0}$, and id be the identity representation of $C^{*}(x)$. The following conditions are equivalent: (1) $x$ is a norm limit of $n$-divisible operators; (2) $C^{*}(x\oplus 0_{\infty})\cap\mathcal{K}$ consists of $n$-divisible operators; (3) $\mbox{{id}}^{1}$ is $n$-divisible; (4) $x$ is approximately equivalent to an $n$-divisible operator. Proof. (4) $\Rightarrow$ (1): The hypothesis implies $x$ is a norm limit of unitary conjugates of a fixed $n$-divisible operator. (1) $\Rightarrow$ (2): If $x$ is a norm limit of $n$-divisible operators, the same holds for every element of $C^{*}(x)$.
Now $C^{*}(x\oplus 0_{\infty})\cap\mathcal{K}=(C^{*}(x)\cap\mathcal{K})\oplus 0_{\infty}$, and the conclusion follows from Theorem 3.6. (2) $\Rightarrow$ (3): We have $$C^{*}(x\oplus 0_{\infty})\cap\mathcal{K}=\text{id}^{1}(\mathcal{I}_{\text{id}})\oplus\text{id}^{2}(0)\oplus 0_{\infty}.$$ Restricted to $\mathcal{I}_{\text{id}}$, $\text{id}^{1}$ is a direct sum of irreducible representations, the image of each being isomorphic to $\mathcal{K}$ or a matrix factor. Condition (2) says that these representations all occur with multiplicities divisible by $n$. But an irreducible representation of an ideal uniquely induces an irreducible representation of the ambient algebra ([A1, Theorem 1.3.4]). So on all of $C^{*}(x)$, $\text{id}^{1}$ is a direct sum of irreducible representations, each with multiplicity divisible by $n$. (Always $\text{id}^{1}(x)$ is a direct sum of irreducible operators, each occurring with finite multiplicity. Hadwin ([H1]) calls these irreducible operators the isolated reducing operator-eigenvalues with finite multiplicity, written $\Sigma_{00}(x)$. Condition (3) says exactly that every element of $\Sigma_{00}(x)$ has (finite) multiplicity divisible by $n$.) (3) $\Rightarrow$ (4): Write $\text{id}^{1}=\text{\textcircled{$n$}}\rho$ for some representation $\rho$. By Voiculescu’s theorem, $$\text{id}=\text{id}^{1}\oplus\text{id}^{2}=(\text{\textcircled{$n$}}\rho)\oplus\text{id}^{2}\sim_{\text{a.e.}}(\text{\textcircled{$n$}}\rho)\oplus(\text{\textcircled{$n$}}\text{id}^{2})=\text{\textcircled{$n$}}(\rho\oplus\text{id}^{2}).$$ Plugging in $x$ as in (3.1), $x\sim_{\text{a.e.}}\text{\textcircled{$n$}}(\rho(x)\oplus\text{id}^{2}(x))$. ∎ Theorem 3.9. Let $x\in\mathcal{B}(\ell^{2})$ and id be the identity representation of $C^{*}(x)$.
The following conditions are equivalent: (1) $x$ is a norm limit of $\aleph_{0}$-divisible operators; (2) $C^{*}(x)\cap\mathcal{K}=\{0\}$; (3) $\mbox{{id}}^{1}$ is void; (4) $x$ is approximately equivalent to an $\aleph_{0}$-divisible operator; (5) $x\sim_{\text{a.e.}}\text{\textcircled{$n$}}x$ for some (hence any) $2\leq n\leq\aleph_{0}$. Proof. The equivalence of conditions (1)-(4) is proved as in Theorem 3.8, with Proposition 3.3 used in place of Theorem 3.6. To see (4) $\Rightarrow$ (5), let $2\leq n\leq\aleph_{0}$ and compute (3.2) $$x\sim_{\text{a.e.}}\text{\textcircled{$\aleph_{0}$}}y\quad\Rightarrow\quad\text{\textcircled{$n$}}x\sim_{\text{a.e.}}\text{\textcircled{$n$}}\,\text{\textcircled{$\aleph_{0}$}}y\cong\text{\textcircled{$\aleph_{0}$}}y\sim_{\text{a.e.}}x.$$ In [H3, Proof of Corollary 4.3], Hadwin mentions that for $n=2$, the implication (5) $\Rightarrow$ (2) is a consequence of Voiculescu’s theorem. For the reader’s convenience we explicitly prove (5) $\Rightarrow$ (3). Seeking a contradiction, suppose that $x\sim_{\text{a.e.}}\text{\textcircled{$n$}}x$ for some $2\leq n\leq\aleph_{0}$, and that $\text{id}^{1}$ is not void. Then $$x\sim_{\text{a.e.}}\text{\textcircled{$n$}}x\sim_{\text{a.e.}}\text{\textcircled{$n$}}(\text{\textcircled{$n$}}x)\dots,$$ so by (4) $\Rightarrow$ (3) of Theorem 3.8, $\text{id}^{1}$ is $n^{k}$-divisible for arbitrarily large integer $k$. But this is impossible, as the range of $\text{id}^{1}$ contains nonzero finite rank operators. ∎ Remark 3.10. We now give an example which shows that if $x$ is a norm limit of $n$-divisibles, the same need not be true for elements of $W^{*}(x)$. This was mentioned after Lemma 1.4. Let $x$ be a diagonal operator on $\ell^{2}$ whose eigenvalues are simple and dense in $[0,1]$, so that $W^{*}(x)$ contains a rank one projection $p$.
Since $C^{*}(x)\cap\mathcal{K}=\{0\}$ and $C^{*}(p)\cap\mathcal{K}\ni p$, it follows from Theorem 3.8 ($2\leq n<\aleph_{0}$) or 3.9 ($n=\aleph_{0}$) that $x$, but not $p$, is a norm limit of $n$-divisible operators. (It is not hard to argue this directly.) Corollary 3.11. In $\mathcal{B}(\ell^{2})$, we have (3.3) $$\overline{\{\text{$\aleph_{0}$-divisible operators}\}}=\bigcap_{n<\aleph_{0}}\overline{\{\text{$n$-divisible operators}\}}.$$ On the other hand (3.4) $$\{\text{$\aleph_{0}$-divisible operators}\}\subsetneqq\bigcap_{n<\aleph_{0}}\{\text{$n$-divisible operators}\},$$ although this inclusion is dense. Proof. Equation (3.3) follows from the second conditions in Theorems 3.8 and 3.9, plus the fact that no compact operator is $n$-divisible for all finite $n$. The inequality in (3.4) results from considering $x\in\mathcal{B}(\ell^{2})$ with $W^{*}(x)^{\prime}$ of type $\text{II}_{1}$, while density follows from (3.3). ∎ Although we will not need Corollary 3.11 in the sequel, we will use Proposition 3.12. In $\mathcal{B}(\ell^{2})$, we have (3.5) $$\overline{\bigcup_{n\leq\aleph_{0}}\{\text{$n$-divisible operators}\}}=\bigcup_{n\leq\aleph_{0}}\overline{\{\text{$n$-divisible operators}\}}.$$ Proof. We only need to show the inclusion “$\subseteq$” of (3.5). Let $x\in\mathcal{B}(\ell^{2})$, and let id be the identity representation of $C^{*}(x)$. If $\text{id}^{1}$ is absent, then $C^{*}(x)\cap\mathcal{K}=\{0\}$. By Theorem 3.9, $x$ is a norm limit of $\aleph_{0}$-divisible operators and so belongs to both sides of (3.5). In the remainder of the proof we assume that $\text{id}^{1}$ is not absent. This entails that $C^{*}(x)$ contains a finite rank projection $q$, say of rank $m$. Suppose that $x$ belongs to the left-hand side of (3.5).
Since closure commutes with finite unions, (3.6) $$x\in\overline{\{2\text{-divs}\}}\cup\overline{\{3\text{-divs}\}}\cup\dots\cup\overline{\{m\text{-divs}\}}\cup\overline{\bigcup_{n>m}\{n\text{-divs}\}}.$$ Seeking a contradiction, assume that $x$ does not belong to the right-hand side of (3.5). Then it would have to belong to the union at the far right of (3.6), and because $q\in C^{*}(x)$, $q$ would belong to this union as well (by a variant of Lemma 1.4(5)). In particular there must be an operator $d$, $n$-divisible for some $n>m$, with $\|d-q\|=\delta<\frac{1}{2}$. By considering the real part, we may assume that $d$ is self-adjoint. From elementary invertibility considerations $\text{sp}(d)\subseteq[-\delta,\delta]\cup[1-\delta,1+\delta]$. Setting $p=\chi_{[1-\delta,1+\delta]}(d)$, we compute (3.7) $$\|p-qp\|\leq\|p-q\|\|p\|=\|p-q\|\leq\|p-d\|+\|d-q\|\leq 2\delta<1.$$ Now $\|p-q\|<1$ implies that $p\neq 0$. Together with the $n$-divisibility of $p$ (because $p\in C^{*}(d)$) and $n>m$, this gives $$\text{rank}(qp)\leq\text{rank}(q)=m<\text{rank}(p),$$ so that $qp$ must have nontrivial kernel in $p\ell^{2}$. Thus $\|p-qp\|=1$, contradicting (3.7). In passing we note that the distance from $q$ to the $n$-divisible operators ($n>m$) is exactly $\frac{1}{2}$; consider the $n$-divisible operator $\frac{1}{2}I$. ∎ Remark 3.13. It is typically nontrivial to calculate the exact distance from a given $x$ to the $n$-divisible operators. For $x$ compact and $n=\aleph_{0}$, we solved this in Proposition 3.3. For $x$ self-adjoint, an answer can be deduced from the main results of [AD], but we have no need to present such an expression here. In the proof of Proposition 3.12 we determined that $x$ could not belong to the closure of the $n$-divisible operators, but we did not obtain any estimate of the distance. Lower bounds are available, at least in theory, by using the “noncommutative continuous functions” introduced by Hadwin.
These are appropriate limits of noncommutative polynomials; see [H2, HKM]. Revisiting the proof in this light, we have that $q\in C^{*}(x)$ implies $q=\varphi(x)$ for some noncommutative continuous function $\varphi$. Continuity means that there is $\delta>0$ such that for all $y\in\mathcal{B}(\ell^{2})$, $$\|y-x\|<\delta\quad\Rightarrow\quad\|\varphi(y)-q\|=\|\varphi(y)-\varphi(x)\|<\frac{1}{2}.$$ Still assuming $n>m$, for $\|y-x\|<\delta$ we have $$\displaystyle\text{dist}(\varphi(y),n\text{-divs})$$ $$\displaystyle\geq\text{dist}(q,n\text{-divs})-\|\varphi(y)-q\|$$ $$\displaystyle=\frac{1}{2}-\|\varphi(y)-q\|$$ $$\displaystyle>0,$$ which implies as before that $y$ cannot be a norm limit of $n$-divisibles. Thus the distance from $x$ to the $n$-divisibles is at least $\delta$. The preceding paragraph bears some resemblance to the proof of [H1, Theorem 2.10]. Putting Proposition 3.12 together with the implication (1) $\Rightarrow$ (4) in Theorems 3.8 and 3.9, we obtain Theorem 3.14. If an operator $x\in\mathcal{B}(\ell^{2})$ is a norm limit of divisible operators, then it is a norm limit of unitary conjugates of a single divisible operator. The skeptical reader may wonder if this is part of a larger and simpler truth, namely that norm limits of unitarily invariant sets in $\mathcal{B}(\ell^{2})$ must be approximately equivalent to a member of the set. A counterexample is given by the irreducible operators, which are norm dense ([Ha2] or the one-page paper [RR]). It is easy to check that no irreducible operator is approximately equivalent to a rank one projection. (Voiculescu’s theorem implies that if $x\in\mathcal{B}(\ell^{2})$ is approximately equivalent to an irreducible operator, $C^{*}(x)\cap\mathcal{K}$ is either $\{0\}$ or $\mathcal{K}$.) Theorem 2.4(2) already ruled out the norm density of the $n$-divisible operators in $\mathcal{B}(\ell^{2})$, for any $n$. After a lemma, we will establish a stronger result. Lemma 3.15.
For any $x\in\mathcal{B}(\ell^{2})$ and $\varepsilon>0$, there is $y$ such that $\|y-x\|<\varepsilon$ and $C^{*}(y)$ contains a rank one projection. Proof. It goes back to Weyl ([We]) that any self-adjoint operator can be perturbed by an arbitrarily small self-adjoint compact operator to become diagonal. Apply this to $x_{1}$, the real part of $x$, finding $k$ such that $\|k\|<\frac{\varepsilon}{2}$ and $x_{1}+k$ is diagonal. Now choose any eigenvalue $\lambda$ for $x_{1}+k$ and let $p$ be a rank one projection under $\chi_{\{\lambda\}}(x_{1}+k)$. The operator $$y=x+k+\frac{\varepsilon}{2}\left[\chi_{(\lambda-(\varepsilon/4),\lambda+(\varepsilon/4))}(x_{1}+k)-p\right]$$ has the property that $\chi_{(\lambda-(\varepsilon/4),\lambda+(\varepsilon/4))}(\text{Re }y)=p$, so that $p\in C^{*}(y)$. Furthermore $$\|y-x\|\leq\|k\|+\left\|\frac{\varepsilon}{2}\left[\chi_{(\lambda-(\varepsilon/4),\lambda+(\varepsilon/4))}(x_{1}+k)-p\right]\right\|<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon.\qed$$ Theorem 3.16. The set of divisible operators in $\mathcal{B}(\ell^{2})$ is nowhere dense in the norm topology. Proof. By Lemma 3.15 any open ball contains an element $y$ such that $C^{*}(y)$ contains a rank one projection. According to Theorems 3.8 and 3.9, $y$ is not in the closure of the $n$-divisible operators for any $n$. By Proposition 3.12, $y$ is not in the closure of all the divisible operators. ∎ What about norm density of the $n$-divisible operators in von Neumann algebras of types II and III? The results of Section 2 show that in some $\text{II}_{1}$ factors, the $n$-divisible operators are not even $\sigma$-strongly dense, but this is all we can say at this point. It would be interesting to decide the norm density of the $n$-divisible operators in $\mathcal{R}$.
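The isolation step in Lemma 3.15 can be watched concretely. The following numpy sketch is our illustration, not the paper's: the matrix `x1` and the value of `eps` are invented, `k` is taken to be $0$ (so Weyl's diagonalization step is vacuous), and the spectral projections are computed entrywise since everything is diagonal.

```python
import numpy as np

# Finite-dimensional illustration of the perturbation in Lemma 3.15 with
# k = 0 (x1 is already diagonal).  All matrices here are invented examples.
eps = 0.4
x1 = np.diag([0.0, 0.0, 0.05, 1.0])   # Re x, with the eigenvalue 0 repeated
lam = 0.0
p = np.diag([1.0, 0.0, 0.0, 0.0])     # a rank-one projection under chi_{0}(x1)

# chi over the interval (lam - eps/4, lam + eps/4), applied to x1 entrywise:
chi = np.diag((np.abs(np.diag(x1) - lam) < eps / 4).astype(float))

y = x1 + (eps / 2) * (chi - p)        # the operator of Lemma 3.15

# The eigenvalues caught by chi, other than the one under p, are pushed up by
# eps/2 and leave the interval; p becomes an isolated spectral projection.
chi_y = np.diag((np.abs(np.diag(y) - lam) < eps / 4).astype(float))
assert np.allclose(chi_y, p)               # p now lies in C*(y)
assert np.linalg.norm(y - x1, 2) < eps     # the perturbation is < eps
```

With a nonzero compact $k$ the same bookkeeping gives $\|y-x\|<\varepsilon$ exactly as in the proof.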
Similarly to Remark 2.9, one can apply a martingale theorem ([D-N, Theorem 8]) to any McDuff factor $(\mathcal{M},\tau)$ and conclude that any operator in $\mathcal{M}$ is the almost uniform limit of $n$-divisible operators. (This means that for any $\varepsilon>0$, there is a projection $p\in\mathcal{M}$ and $n$-divisible operators $\{x_{n}\}$ with $\tau(p)<\varepsilon$ and $\|(x-x_{n})p^{\perp}\|\to 0$.) For any von Neumann algebra, one may measure the size of the norm closure by an invariant analogous to (2.5): just replace $\|\cdot\|_{2}$ with the operator norm. All the previous comments (including the author’s ignorance) apply to this variation. Many approximation problems from operator theory are unexplored in the larger context of von Neumann algebras. Techniques and answers may lend insight into the local structure of the algebras themselves, as in Theorem 2.8, or even provide useful invariants. Here is a basic example related both to this paper and to von Neumann algebraic analogues of Voiculescu’s theorem. Say that $x\in\mathcal{M}$ is reducible if $W^{*}(x)^{\prime}\cap\mathcal{M}\neq\mathbb{C}$ – are the reducible operators norm dense in a factor of type II or III? 4. Convexity of $\sigma$-weakly closed unitary orbits In this section we study the possible convexity of $\overline{\mathcal{U}(x)}^{\sigma-w}$, where $x\in\mathcal{M}$. (We remind the reader that the $\sigma$-weak topology is the weak* topology on $\mathcal{M}$. Actually the Banach space weak topology is also covered by the results below; see Remark 4.4.)
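Before turning to the literature, it may help to see the simplest way convexity of a closed unitary orbit can fail. The two-by-two computation below is our own toy, not taken from the sources cited in this section; in $\mathbb{M}_{2}$ all the operator topologies coincide on bounded sets, so the (compact) orbit of a projection is already closed.

```python
import numpy as np

# In M_2 the unitary orbit of a rank-one projection p consists of all
# rank-one projections; it is compact, hence closed, but not convex.
p = np.diag([1.0, 0.0])
u = np.array([[0.0, 1.0], [1.0, 0.0]])   # the flip unitary
q = u @ p @ u.conj().T                   # another point of the orbit
mid = (p + q) / 2                        # the midpoint, which equals I/2

# Every element of the orbit has eigenvalues {0, 1}; the midpoint has
# eigenvalues {1/2, 1/2}, so it lies outside the closed orbit.
assert np.allclose(mid, np.eye(2) / 2)
assert np.allclose(np.linalg.eigvalsh(mid), [0.5, 0.5])
assert np.allclose(np.linalg.eigvalsh(p), [0.0, 1.0])
```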
For $\mathcal{M}=\mathcal{B}(\ell^{2})$, there are some descriptions of $\overline{\mathcal{U}(x)}^{\sigma-w}$ in the literature, probably the most notable being Hadwin’s characterization as the set of approximate compressions of $x$ ([H4, Theorem 4.4(3)]): (4.1) $$\overline{\mathcal{U}(x)}^{\sigma-w}=\overline{\{v^{*}xv\mid v^{*}v=1\}}^{\|\|}.$$ It also follows from [H4, Proposition 3.1(3) and Theorem 2.4] and [HL, Theorem 2(1)] that (4.2) $$\overline{\mathcal{U}(x)}^{\sigma-w}=\{\varphi(x)\mid\varphi:C^{*}(x)\to\mathcal{B}(\ell^{2})\text{ unital, completely positive, and completely rank-nonincreasing}\}.$$ (The map $\varphi$ is completely rank-nonincreasing if $$\text{id}_{n}\otimes\varphi:\mathbb{M}_{n}\otimes C^{*}(x)\to\mathbb{M}_{n}\otimes\mathcal{B}(\mathfrak{H})$$ is rank-nonincreasing for all finite $n$.) In another direction, Kutkut ([Ku1, Theorem 1.1]) showed that if $x$ is a contraction whose spectrum contains the unit circle, then $\overline{\mathcal{U}(x)}^{\sigma-w}$ is the closed unit ball of $\mathcal{B}(\ell^{2})$. He later extended this to certain operators with convex spectral sets ([Ku2]). Note that the closed unit disk is a spectral set for any contraction, by von Neumann’s inequality. In general von Neumann algebras most of the attention has focused on the closed convex hull $\overline{\text{conv}(\mathcal{U}(x))}^{\|\|}$. From among the substantial literature, we only mention two results here. Dixmier’s averaging theorem ([D]) establishes that $\overline{\text{conv}(\mathcal{U}(x))}^{\|\|}$ always intersects the center of $\mathcal{M}$. And assuming that $x$ is self-adjoint and $\mathcal{M}$ has separable predual, Hiai and Nakamura characterized $\overline{\text{conv}(\mathcal{U}(x))}^{\|\|}$ spectrally and proved that it equals $\overline{\text{conv}(\mathcal{U}(x))}^{\sigma-w}$ ([HN]).
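One mechanism underlying both (4.1) and Kutkut's theorem is that compressions of unitaries reach every contraction. A minimal numerical sketch of this (ours; it uses the classical Halmos dilation rather than anything from the references just cited, and the matrix `x` is a random example):

```python
import numpy as np

def psd_sqrt(a):
    """Square root of a positive semidefinite matrix by spectral calculus."""
    w, v = np.linalg.eigh(a)
    return v @ np.diag(np.sqrt(np.clip(w, 0, None))) @ v.conj().T

rng = np.random.default_rng(0)
m = 3
x = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
x /= 1.1 * np.linalg.norm(x, 2)         # an arbitrary strict contraction

eye = np.eye(m)
dx  = psd_sqrt(eye - x.conj().T @ x)    # defect operators of x
dxs = psd_sqrt(eye - x @ x.conj().T)

# Halmos dilation: a unitary on C^{2m} whose upper-left corner is x.
u = np.block([[x, dxs], [dx, -x.conj().T]])
v = np.vstack([eye, np.zeros((m, m))])  # isometry: C^m as the first block

assert np.allclose(u @ u.conj().T, np.eye(2 * m), atol=1e-8)  # u is unitary
assert np.allclose(v.conj().T @ u @ v, x)                      # x = v* u v
```

Here the compression is exact; (4.1) only asks for compressions up to norm-small error.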
So in some cases where we can verify the convexity of $\overline{\mathcal{U}(x)}^{\sigma-w}$, we may actually deduce $$\overline{\mathcal{U}(x)}^{\sigma-w}=\overline{\text{conv}(\mathcal{U}(x))}^{\sigma-w}=\overline{\text{conv}(\mathcal{U}(x))}^{\|\|}.$$ This means, for example, that one can do “Dixmier averaging” without any averaging… if one is content to approximate in the $\sigma$-weak topology. One might compare this with [S1, Corollary 6.6], where it is shown that $\overline{\mathcal{U}(x)}^{\sigma-s}$ (but not necessarily $\overline{\mathcal{U}(x)}^{\sigma-s^{*}}$) intersects the center whenever $\mathcal{M}$ is properly infinite. Outside of $\mathcal{B}(\mathfrak{H})$ the only descriptions we know of $\overline{\mathcal{U}(x)}^{\sigma-w}$ were obtained in recent work with Akemann ([AS]), and they apply exclusively to self-adjoint $x$. They do show that appropriate generalizations of (4.1) and (4.2), replacing “rank” by the equivalence class of the range projection, do not remain valid. Concerning convexity, they give Theorem 4.1. $($[AS]$)$ Let $x$ be a self-adjoint element of a factor $\mathcal{M}$. If $\mathcal{M}$ is type II or III, then $\overline{\mathcal{U}(x)}^{\sigma-w}$ is convex. If $\mathcal{M}$ is type I, $\overline{\mathcal{U}(x)}^{\sigma-w}$ is convex if and only if the spectrum and essential spectrum of $x$ have the same minimum and maximum. At present our only examples of nonconvex $\overline{\mathcal{U}(x)}^{\sigma-w}$ are in factors of type I. We would be interested to know if this can happen in other factors. The main goal of the section is to prove Theorem 4.2. Let $x$ belong to the norm closure of the $\aleph_{0}$-divisible operators in a von Neumann algebra $\mathcal{M}$. Then $\overline{\mathcal{U}(x)}^{\sigma-w}$ is convex. As mentioned in the Introduction, this result motivated our entire study of divisible operators.
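The algebraic engine of the proof of Theorem 4.2, averaging two unitary conjugates through a pair of isometries with complementary ranges, can be checked in a finite-dimensional caricature in which the isometries are modeled as embeddings of $\mathbb{C}^{m}$ onto the even and odd coordinates of $\mathbb{C}^{2m}$. The sketch below is ours, with arbitrary matrices and real orthogonals standing in for unitaries; it verifies the relations labeled (4.4) and (4.5) in the proof below.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3

# Model isometries v, w with complementary ranges: embeddings of C^m onto
# the even and odd coordinates of C^{2m}, so v*v = w*w = 1, vv* + ww* = 1.
v = np.zeros((2 * m, m)); v[0::2, :] = np.eye(m)
w = np.zeros((2 * m, m)); w[1::2, :] = np.eye(m)

x = rng.standard_normal((m, m))
xt = v @ x @ v.T + w @ x @ w.T          # a "2-divisible" model operator

# the analogue of (4.4):
assert np.allclose(v.T @ w, 0)
assert np.allclose(v.T @ xt @ w, 0)
assert np.allclose(v.T @ xt @ v, x) and np.allclose(w.T @ xt @ w, x)

# orthogonal matrices standing in for the unitaries u_1, u_2, and the
# partial isometries r, s of (4.5):
u1, _ = np.linalg.qr(rng.standard_normal((m, m)))
u2, _ = np.linalg.qr(rng.standard_normal((m, m)))
r = (v @ u1 @ v.T + v @ u2 @ w.T) / np.sqrt(2)
s = (v @ u1 @ v.T - v @ u2 @ w.T) / np.sqrt(2)

assert np.allclose(r @ r.T, v @ v.T) and np.allclose(s @ s.T, v @ v.T)
assert np.allclose(r.T @ r + s.T @ s, np.eye(2 * m))

# Compressing r xt r* back to C^m produces the average of two unitary
# conjugates, as in the computation closing the proof of Theorem 4.2.
avg = (u1 @ x @ u1.T + u2 @ x @ u2.T) / 2
assert np.allclose(v.T @ r @ xt @ r.T @ v, avg)
```

In the genuine proof $v$ and $w$ are endomorphisms of a single space, which is precisely where proper infiniteness enters; the finite model only captures the algebra.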
Its converse is not true: there are operators $x$ which are not norm limits of $\aleph_{0}$-divisible operators, yet $\overline{\mathcal{U}(x)}^{\sigma-w}$ is convex. (Use Theorems 3.9 and 4.1.) And the implication also fails when the norm topology is replaced by an operator topology. (Use Proposition 2.1 and Theorem 4.1.) Actually Theorem 4.2 is somewhat isolated, but it would have a very nice consequence if one could also show that the $\aleph_{0}$-divisible operators are norm dense in a type III factor. Lemma 4.3. Let $v$ and $x$ belong to a properly infinite von Neumann algebra $\mathcal{M}$, with $v$ an isometry. Then $v^{*}xv\in\overline{\mathcal{U}(x)}^{\sigma-w}$. Proof. Let $\{\varphi_{j}\}\subset\mathcal{M}^{+}_{*}$ be a finite subset. By repeatedly halving the identity, one can find a decreasing sequence of projections $\{p_{n}\}$ with $$p_{n}\sim 1,\;\forall n\qquad\text{and}\qquad\varphi_{j}(p_{n})\leq\frac{1}{n},\;\forall j,n.$$ For each $n$, $vp_{n}^{\perp}$ is a partial isometry with right support $p_{n}^{\perp}$ and left support $vp_{n}^{\perp}v^{*}$. Note that $$1-vp_{n}^{\perp}v^{*}=1-vv^{*}+vp_{n}v^{*}\geq vp_{n}v^{*}\sim p_{n}\sim 1,$$ so that $1-vp_{n}^{\perp}v^{*}\sim p_{n}$. Letting $w_{n}$ be a partial isometry with right support $p_{n}$ and left support $1-vp_{n}^{\perp}v^{*}$, define $u_{n}$ to be the unitary operator $vp_{n}^{\perp}+w_{n}$.
Thus $$u_{n}-v=w_{n}-vp_{n}.$$ Now we use Cauchy-Schwarz to calculate, for any $j$, $$\displaystyle|\varphi_{j}(u_{n}^{*}xu_{n}-v^{*}xv)|$$ $$\displaystyle=|\varphi_{j}(u_{n}^{*}xu_{n}-v^{*}xu_{n})+\varphi_{j}(v^{*}xu_{n}-v^{*}xv)|$$ $$\displaystyle\leq|\varphi_{j}((w_{n}-vp_{n})^{*}xu_{n})|+|\varphi_{j}(v^{*}x(w_{n}-vp_{n}))|$$ $$\displaystyle\leq\varphi_{j}(p_{n}(w_{n}-v)^{*}(w_{n}-v)p_{n})^{1/2}\varphi_{j}(u_{n}^{*}x^{*}xu_{n})^{1/2}$$ $$\displaystyle\quad+\varphi_{j}(v^{*}xx^{*}v)^{1/2}\varphi_{j}(p_{n}(w_{n}-v)^{*}(w_{n}-v)p_{n})^{1/2}$$ $$\displaystyle\leq\varphi_{j}(p_{n}(w_{n}-v)^{*}(w_{n}-v)p_{n})^{1/2}(2\|\varphi_{j}\|^{1/2}\|x\|)$$ $$\displaystyle\leq\varphi_{j}(4p_{n})^{1/2}(2\|\varphi_{j}\|^{1/2}\|x\|)\overset{n\to\infty}{\longrightarrow}0.\qed$$ Proof of Theorem 4.2. We first show (4.3) $$\frac{u_{1}xu_{1}^{*}+u_{2}xu_{2}^{*}}{2}\in\overline{\mathcal{U}(x)}^{\sigma-w},\qquad\forall u_{1},u_{2}\in\mathcal{U}(\mathcal{M}).$$ Start by assuming that $x$ is $\aleph_{0}$-divisible. Then $W^{*}(x)^{\prime}\cap\mathcal{M}$ contains $\mathcal{B}(\ell^{2})$, so it contains two isometries $v,w$ satisfying $vv^{*}+ww^{*}=1$. This implies all of the following: (4.4) $$v^{*}w=w^{*}v=0,\quad v^{*}xw=w^{*}xv=0,\quad v^{*}xv=w^{*}xw=x.$$ Set (4.5) $$r=\frac{vu_{1}v^{*}+vu_{2}w^{*}}{\sqrt{2}},\quad s=\frac{vu_{1}v^{*}-vu_{2}w^{*}}{\sqrt{2}}.$$ By computations using (4.4), one verifies that $$rr^{*}=ss^{*}=vv^{*},$$ so $r$ and $s$ are partial isometries, and moreover that $$r^{*}r+s^{*}s=1.$$ The complements of the left and right supports of $r$ are equivalent: $$1-rr^{*}=1-vv^{*}=ww^{*}\sim w^{*}w=1=v^{*}v\sim vv^{*}=ss^{*}\sim s^{*}s=1-r^{*}r.$$ This means that $r$ can be extended to a unitary $y$, i.e.
(4.6) $$r=rr^{*}y=vv^{*}y.$$ Using Lemma 4.3, we compute $$\displaystyle\overline{\mathcal{U}(x)}^{\sigma-w}$$ $$\displaystyle=\overline{\mathcal{U}(yxy^{*})}^{\sigma-w}\ni v^{*}yxy^{*}v$$ $$\displaystyle=v^{*}vv^{*}yxy^{*}vv^{*}v$$ $$\displaystyle\overset{(4.6)}{=}v^{*}rxr^{*}v$$ $$\displaystyle\overset{(4.5)}{=}\frac{u_{1}v^{*}xvu_{1}^{*}+u_{1}v^{*}xwu_{2}^{*}+u_{2}w^{*}xvu_{1}^{*}+u_{2}w^{*}xwu_{2}^{*}}{2}$$ $$\displaystyle\overset{(4.4)}{=}\frac{u_{1}xu_{1}^{*}+u_{2}xu_{2}^{*}}{2}.$$ Now we suppose $x$ to be a norm limit of $\aleph_{0}$-divisibles $\{x_{n}\}$, as in the statement of the theorem. For unitaries $u_{1},u_{2},y$ and a finite set $\{\varphi_{j}\}\subset\mathcal{M}^{+}_{*}$, $$\displaystyle\left|\varphi_{j}\left(\frac{u_{1}xu_{1}^{*}+u_{2}xu_{2}^{*}}{2}-yxy^{*}\right)\right|\leq$$ $$\displaystyle\left|\varphi_{j}\left(\frac{u_{1}(x-x_{n})u_{1}^{*}+u_{2}(x-x_{n})u_{2}^{*}}{2}\right)\right|$$ $$\displaystyle+\left|\varphi_{j}\left(\frac{u_{1}x_{n}u_{1}^{*}+u_{2}x_{n}u_{2}^{*}}{2}-yx_{n}y^{*}\right)\right|$$ $$\displaystyle+|\varphi_{j}(y(x_{n}-x)y^{*})|.$$ We can guarantee that this is small for all $\varphi_{j}$ by first choosing $n$ to bound the first and third terms, then choosing $y$ as in the first part of the proof to bound the second. This establishes (4.3). From (4.3) it follows that $\overline{\mathcal{U}(x)}^{\sigma-w}\supseteq\text{conv}(\mathcal{U}(x))$ (the same construction, run with $2^{k}$ isometries of orthogonal ranges, yields all dyadic convex combinations of unitary conjugates, and passing to limits gives the rest). Then $$\overline{\mathcal{U}(x)}^{\sigma-w}\supseteq\overline{\text{conv}(\mathcal{U}(x))}^{\sigma-w}\supseteq\overline{\mathcal{U}(x)}^{\sigma-w},$$ implying equality. It is an easy general fact that the closure of a convex set is convex, as long as the map $(\xi,\eta)\mapsto\frac{\xi+\eta}{2}$ is continuous in the relevant topology. Thus $\overline{\mathcal{U}(x)}^{\sigma-w}$ is convex, finishing the proof. ∎ Remark 4.4.
The preceding lemma and theorem are also true for the Banach space weak topology ($\sigma(\mathcal{M},\mathcal{M}^{*})$-topology); just choose the set $\{\varphi_{j}\}$ from $\mathcal{M}_{+}^{*}$. Appendix A Quotients of operators by cardinals We first explain what is meant here by “dividing an operator by a cardinal.” Given $x\in\mathcal{M}$, a quotient by $n$ is $y\in\mathcal{N}$ satisfying $$\text{\textcircled{$n$}}(y\in\mathcal{N})\cong(x\in\mathcal{M}).$$ The existence of a solution is equivalent to the $n$-divisibility of $x$. As mentioned in the Introduction, uniqueness only becomes meaningful once we agree to identify isomorphic operators, as follows. Note that $\cong$ is an equivalence relation on operators in von Neumann algebras, and write equivalence classes with brackets, e.g. $[x\in\mathcal{M}]$. Since amplification is well-defined on equivalence classes (Lemma 1.4(1)), we may also consider the equation (A.1) $$\text{\textcircled{$n$}}[y\in\mathcal{N}]=[x\in\mathcal{M}].$$ For $n$ finite, the solution to (A.1) is always unique (if it exists). The main goals of this appendix are to explain why this is true and to discuss several variations of interest. In everything that follows, operators may be replaced with *-homomorphisms of $C^{*}$-algebras. A.1. Initial comments 1. The first issue in (A.1) is really to identify the algebra $\mathcal{N}$ (up to isomorphism). We write $[\mathcal{M}]$ for the isomorphism class of $\mathcal{M}$, and $[\mathcal{M}]^{n}$ for $[\mathbb{M}_{n}\otimes\mathcal{M}]$. Then the algebra in question is a solution to (A.2) $$[\mathcal{N}]^{n}=[\mathcal{M}].$$ For factors on a separable Hilbert space, the study of equation (A.2) goes all the way back to Murray and von Neumann, who wrote it as $\overline{\textbf{N}}^{p}=\overline{\textbf{M}}$. Their results ([MvN, Section 2.6]) are subsumed in Lemma 1.2 and Proposition A.1(1). 2. 
Although we think of the maps $[\mathcal{M}]\mapsto[\mathcal{M}]^{n}$ and $[x]\mapsto\text{\textcircled{$n$}}[x]$ as multiplications, they do not arise by iterating some kind of sum operation. Indeed, if there were a sum “+” satisfying $[\mathcal{M}]\text{``+"}[\mathcal{M}]=[\mathcal{M}]^{2}$, what would $[\mathcal{M}]\text{``+"}[\mathcal{N}]$ be? In this sense algebras of the form $\mathcal{B}(\mathfrak{H})$ are very special, since one can take $[\mathcal{B}(\mathfrak{H}_{1})]\text{``+"}[\mathcal{B}(\mathfrak{H}_{2})]=[\mathcal{B}(\mathfrak{H}_{1}\oplus\mathfrak{H}_{2})]$. We now admit that the opening paragraph of this paper is somewhat disingenuous. At the level of operators, the situation is even worse. In general one cannot form the diagonal sum of a pair of classes, even from the same algebra: a pair $[x\in\mathcal{M}],[y\in\mathcal{M}]$ does not determine a well-defined class $[(\begin{smallmatrix}x&0\\ 0&y\end{smallmatrix})\in\mathbb{M}_{2}\otimes\mathcal{M}]$. As we explain elsewhere, this can be attributed to the existence of automorphisms which are not locally inner ([S3, Section 3]). Of course the usual direct products are defined on isomorphism classes of algebras and operators, i.e. $[x\in\mathcal{M}]\oplus[y\in\mathcal{N}]=[x\oplus y\in\mathcal{M}\oplus\mathcal{N}]$. By iteration, they give rise to the multiplications (A.3) $$[\mathcal{M}]\mapsto[\ell^{\infty}_{n}\overline{\otimes}\mathcal{M}],$$ (A.4) $$[x\in\mathcal{M}]\mapsto[1\otimes x\in\ell^{\infty}_{n}\overline{\otimes}\mathcal{M}].$$ One may further say that $(y\in\mathcal{N})$ is “centrally $n$-divisible” if it is isomorphic to an output of (A.4). But this property is not nicely characterized in $W^{*}(y)$ or its relative commutant, as the intertwiners which indicate multiplicity lie outside of $\mathcal{N}$.
The substitution of $\mathbb{M}_{n}$ for $\ell^{\infty}_{n}$ suggests that the maps $[\mathcal{M}]\mapsto[\mathcal{M}]^{n}$ and $[x]\mapsto\text{\textcircled{$n$}}[x]$ should be considered quantized multiplications. 3. In his 1955 book [Ka2], Kaplansky posed three “test problems” for abelian groups and suggested their possible merit for other mathematical structures with sum and isomorphism. (Only the first two problems made it into the second edition of the book.) The second test problem is this: if $a\oplus a\cong b\oplus b$, must $a\cong b$? Among the substantial literature on these problems, Kadison and Singer answered them affirmatively in the context of unitary equivalence for Hilbert space operators two years later. Their result says that in type I factors, $\text{\textcircled{2}}[a]=\text{\textcircled{2}}[b]$ implies $[a]=[b]$. In other words, the map $[x]\mapsto\text{\textcircled{2}}[x]$ has a (partially-defined) inverse. See Azoff ([Az]) for other operator theoretic results and references concerning the test problems. In particular [Az] answers the test problems affirmatively for the direct sum of von Neumann algebras, so that (A.3) also has an inverse when $n=2$. Of course any finite $n$ is also suitable; the intrepid reader may go on to show the existence of a “central” operator quotient by inverting (A.4).
Fix $$\pi:\mathbb{M}_{n}\otimes\mathcal{M}\overset{\sim}{\to}\mathbb{M}_{n}\otimes\mathcal{N},\qquad\pi(1\otimes x)=1\otimes y.$$ Here is a suitable chain of isomorphisms, with explanations afterward: $$x\in\mathcal{M}\cong e_{11}\otimes x\in(e_{11}\otimes 1_{\mathcal{M}})(\mathbb{M}_{n}\otimes\mathcal{M})(e_{11}\otimes 1_{\mathcal{M}})\cong\pi(e_{11}\otimes 1_{\mathcal{M}})(1\otimes y)\in\pi(e_{11}\otimes 1_{\mathcal{M}})(\mathbb{M}_{n}\otimes\mathcal{N})\pi(e_{11}\otimes 1_{\mathcal{M}})\cong(e_{11}\otimes 1_{\mathcal{N}})(1\otimes y)\in(e_{11}\otimes 1_{\mathcal{N}})(\mathbb{M}_{n}\otimes\mathcal{N})(e_{11}\otimes 1_{\mathcal{N}})\cong y\in\mathcal{N}.$$ The first and fourth isomorphisms are clear. The second isomorphism is an application of $\pi$, using $e_{11}\otimes x=(e_{11}\otimes 1_{\mathcal{M}})(1\otimes x)$. For the third, first note that $e_{11}\otimes 1_{\mathcal{M}}$ commutes with $1\otimes x$, so $\pi(e_{11}\otimes 1_{\mathcal{M}})$ commutes with $\pi(1\otimes x)=1\otimes y.$ Then both $\pi(e_{11}\otimes 1_{\mathcal{M}})$ and $e_{11}\otimes 1_{\mathcal{N}}$ are projections in $W^{*}(1\otimes y)^{\prime}\cap(\mathbb{M}_{n}\otimes\mathcal{N})$ which solve the equation (A.5) $$\underbrace{[p]+[p]+\dots+[p]}_{n\text{ times }}=[1]$$ in the dimension theory for $W^{*}(1\otimes y)^{\prime}\cap(\mathbb{M}_{n}\otimes\mathcal{N})$, and this implies that they are Murray-von Neumann equivalent in $W^{*}(1\otimes y)^{\prime}\cap(\mathbb{M}_{n}\otimes\mathcal{N})$. (The dimension theory for a von Neumann algebra $\mathcal{M}$ is the quotient $(\mathcal{P}(\mathcal{M})/\sim)$, where $\mathcal{P}(\mathcal{M})$ is the set of the projections in $\mathcal{M}$ and $\sim$ is Murray-von Neumann equivalence. Among its many features is a partially-defined addition for arbitrarily large sets of summands. See [S2, Section 2] for an overview.)
The third isomorphism can then be had by conjugating by a partial isometry in $W^{*}(1\otimes y)^{\prime}\cap(\mathbb{M}_{n}\otimes\mathcal{N})$ which goes from $\pi(e_{11}\otimes 1_{\mathcal{M}})$ to $e_{11}\otimes 1_{\mathcal{N}}$. ∎ Although it looks innocuous, Proposition A.1(1) does not hold for $C^{*}$-algebras! The first example was given in [Pl], and [Ko] contains a more systematic study. Proposition A.1 also fails for infinite $n$. For example, note that a projection in $\mathcal{B}(\ell^{2})$ with infinite rank and corank is an $\aleph_{0}$-multiple of any nontrivial projection on a separable (possibly finite-dimensional) Hilbert space. Just as for cardinals, division by an infinite quantity is problematic. We do, however, have the implications $$[\mathcal{N}]^{n}=[\mathcal{M}]\Rightarrow[\mathcal{M}]^{n}=[\mathcal{M}]\quad\text{and}\quad\text{\textcircled{$n$}}[y]=[x]\Rightarrow\text{\textcircled{$n$}}[x]=[x].$$ Their proofs are alike; for the second, assume $\text{\textcircled{$n$}}y\cong x$ and compute $$\text{\textcircled{$n$}}x\cong\text{\textcircled{$n$}}(\text{\textcircled{$n$}}y)\cong\text{\textcircled{$n$}}y\cong x.$$ This means that for $n$ infinite and $x$ $n$-divisible, the equation $\text{\textcircled{$n$}}[y]=[x]$ always has the solution $[y]=[x]$. Typically there are other solutions, but not always (for instance, $n=\aleph_{0}$ and $x$ the identity of a $\sigma$-finite type III factor). The property (A.6) $$[x]=\text{\textcircled{$n$}}[x]$$ may be thought of as a “self-similarity.” For $n$ infinite, (A.6) is no stronger than $n$-divisibility, as we just mentioned. For $n$ finite, by repeated substitution (A.6) entails that $x$ is $n^{k}$-divisible for any natural $k$, or equivalently, that $W^{*}(x)^{\prime}\cap\mathcal{M}$ lacks a finite type I summand. But the converse to this implication does not generally hold.
For example, the identity of a $\text{II}_{1}$ factor $\mathcal{M}$ satisfies (A.6) if and only if $\mathcal{M}\cong\mathbb{M}_{n}\otimes\mathcal{M}$, which is not always true.

A.3. Generalized amplifications of operators

Readers familiar with von Neumann algebras will not be surprised to hear that for some $[x\in\mathcal{M}]$, the map $[x]\mapsto\text{\textcircled{$n$}}[x]$ makes sense for non-integer values of $n$. So for example one may sometimes amplify $[x\in\mathcal{M}]$ by $\sqrt{2}$, thinking of this as the isomorphism class of the quantum direct sum of $\sqrt{2}$ copies of $x$. In fact, in the broadest context, the parameter may be chosen from the Murray-von Neumann equivalence classes of projections in amplifications of $W^{*}(x)^{\prime}\cap\mathcal{M}$. For a projection $p\in\mathbb{M}_{k}\otimes(W^{*}(x)^{\prime}\cap\mathcal{M})\subseteq\mathbb{M}_{k}\otimes\mathcal{M}$, we define the (generalized) amplification of $[x\in\mathcal{M}]$ by the dimension $[p]$ to be $$\text{\textcircled{$[p]$}}[x\in\mathcal{M}]\overset{\text{def}}{=}[p(1\otimes x)\in p(\mathbb{M}_{k}\otimes\mathcal{M})p].$$ The parameter $[p]$ looks dishearteningly non-numerical, but by dimension theory it can be identified with a cardinal-valued function on the spectrum of the center of $W^{*}(x)^{\prime}\cap\mathcal{M}$ ([To]). (On the spectrum of the type II summand, the function may also take values in the positive reals.) This allows us to unify division and multiplication, as we now illustrate with a simple example. Consider (A.1) under the assumption that $W^{*}(x)^{\prime}\cap\mathcal{M}$ and $n$ are finite, with the identity of $W^{*}(x)^{\prime}\cap\mathcal{M}$ $n$-divisible. Let $[p]\in(\mathcal{P}(W^{*}(x)^{\prime}\cap\mathcal{M})/\sim)$ be the unique solution to (A.5).
Then $[p]$ is characterized as the set of projections in $W^{*}(x)^{\prime}\cap\mathcal{M}$ whose image under the canonical dimension function (or center-valued trace) for $W^{*}(x)^{\prime}\cap\mathcal{M}$ is exactly $\frac{1}{n}$. On its domain the operation $\text{\textcircled{$[p]$}}$ is inverse to $\text{\textcircled{$n$}}$, so one considers it as “division by $n$,” solving (A.1) for $[y]$ by applying it to both sides. It is actually not too much trouble to set up an algebraic calculus for amplifications in which only dimensions are used. But in general the incorporation of cardinals requires some unwieldy extra bookkeeping, because dimension functions are not unital in infinite algebras (so that the map from cardinals to dimensions is many-to-one), and are not even canonical in $\text{II}_{\infty}$ algebras. We do not give the details here, but we point out that Ernest worked out a version of this theory for $\mathcal{B}(\ell^{2})$ ([E, Chapter 4]). He used no cardinals higher than $\aleph_{0}$, and he only considered dimensions with full central support. (Modulo the cardinality restriction, these correspond to the coupling functions for $W^{*}(x)^{\prime}\cap\mathcal{M}$.) This produces a useful subset of the amplifications of $x\in\mathcal{B}(\ell^{2})$: (A.7) $$\{y\in\mathcal{B}(\ell^{2})\mid\text{\textcircled{$\aleph_0$}}y\cong\text{\textcircled{$\aleph_0$}}x\}.$$ Ernest called the relation $\text{\textcircled{$\aleph_0$}}y\cong\text{\textcircled{$\aleph_0$}}x$ quasi-equivalence, so that (A.7) is the quasi-equivalence class of $x$. This relation might also be considered “stable equivalence” or even a sort of Morita equivalence for operators, as coupling functions are invariants of representations.
In $\mathcal{B}(\ell^{2})$ or other $\sigma$-finite von Neumann algebras we prefer to call (A.7) the genus of $x$, following terminology set up long ago by Murray and von Neumann for analogous equivalence classes of factors ([MvN, Chapter III]). So once again, they have the last word. References [AA] C. A. Akemann and J. Anderson, Lyapunov theorems for operator algebras, Mem. Amer. Math. Soc. 94 (1991), no. 458. [AS] C. A. Akemann and D. Sherman, The weak*-closed unitary orbit of a self-adjoint operator in a factor, in preparation. [A1] W. Arveson, An Invitation to $C^{*}$-Algebras, Graduate Texts in Mathematics, vol. 39, Springer-Verlag, New York, 1976. [A2] W. Arveson, Notes on extensions of $C^{*}$-algebras, Duke Math. J. 44 (1977), 329–355. [Az] E. Azoff, Test problems for operator algebras, Trans. Amer. Math. Soc. 347 (1995), 2989–3001. [AD] E. Azoff and C. Davis, On distances between unitary orbits of selfadjoint operators, Acta Sci. Math. (Szeged) 47 (1984), 419–439. [Bl] B. Blackadar, Operator Algebras: Theory of $C^{*}$-algebras and von Neumann Algebras, Springer-Verlag, Berlin, 2006. [B] D. Bisch, On the existence of central sequences in subfactors, Trans. Amer. Math. Soc. 321 (1990), 117–128. [BKR] B. Blackadar, A. Kumjian, and M. Rørdam, Approximately central matrix units and the structure of noncommutative tori, $K$-Theory 6 (1992), 267–284. [Bu] D. Bures, Abelian subalgebras of von Neumann algebras, Mem. Amer. Math. Soc. (1971), no. 110. [C1] A. Connes, Classification of injective factors: cases $\text{II}_{1}$, $\text{II}_{\infty}$, $\text{III}_{\lambda}$, $\lambda\neq 1$, Ann. of Math. (2) 104 (1976), 73–115. [C2] A. Connes, A factor of type $\text{II}_{1}$ with countable fundamental group, J. Operator Theory 4 (1980), 151–153. [CJ] A. Connes and V. Jones, Property $T$ for von Neumann algebras, Bull. London Math. Soc. 17 (1985), 57–62. [D-N] N. Dang-Ngoc, Pointwise convergence of martingales in von Neumann algebras, Israel J. Math. 
34 (1979), 273–280. [DH] H. Ding and D. Hadwin, Approximate equivalence in von Neumann algebras, Sci. China Ser. A 48 (2005), 239–247. [D] J. Dixmier, Les anneaux d’opérateurs de classe finie, Ann. Sci. École Norm. Sup. (3) 66 (1949), 209–261. [DL] J. Dixmier and E. C. Lance, Deux nouveaux facteurs de type $\text{II}_{1}$, Invent. Math. 7 (1969), 226–234. [Dy] H. A. Dye, The unitary structure in finite rings of operators, Duke Math. J. 20 (1953), 55–69. [Ef] E. Effros, The Borel space of von Neumann algebras on a separable Hilbert space, Pacific J. Math. 15 (1965), 1153–1164. [El] G. A. Elliott, On derivations of $AW^{*}$-algebras, Tôhoku Math. J. (2) 30 (1978), 263–276. [E] J. Ernest, Charting the operator terrain, Mem. Amer. Math. Soc. 6 (1976), no. 171. [FGL] J. Fang, L. Ge, and W. Li, Central sequence algebras of von Neumann algebras, Taiwanese J. Math. 10 (2006), 187–200. [Ga] P. Gajendragadkar, Norm of a derivation on a von Neumann algebra, Trans. Amer. Math. Soc. 170 (1972), 165–170. [GH] L. Ge and D. Hadwin, Ultraproducts of C*-algebras, in: Recent Advances in Operator Theory and Related Topics (Szeged, 1999), pp. 305–326, Oper. Theory Adv. Appl., vol. 127, Birkhäuser, Basel, 2001. [GP] L. Ge and S. Popa, On some decomposition properties for factors of type $\text{II}_{1}$, Duke Math. J. 94 (1998), 79–101. [GS] L. Ge and J. Shen, Generator problem for certain property $T$ factors, Proc. Natl. Acad. Sci. USA 99 (2002), 565–567. [G] T. Giordano, personal communication. [GrP] K. Grove and G. K. Pedersen, Diagonalizing matrices over $C(X)$, J. Funct. Anal. 59 (1984), 65–89. [HW] U. Haagerup and C. Winsløw, The Effros-Maréchal topology in the space of von Neumann algebras, Amer. J. Math. 120 (1998), 567–617. [H1] D. Hadwin, An operator-valued spectrum, Indiana Univ. Math. J. 26 (1977), 329–340. [H2] D. Hadwin, Continuous functions of operators: a functional calculus, Indiana Univ. Math. J. 27 (1978), 113–125. [H3] D. 
Hadwin, An asymptotic double commutant theorem for $C^{*}$-algebras, Trans. Amer. Math. Soc. 244 (1978), 273–297. [H4] D. Hadwin, Completely positive maps and approximate equivalence, Indiana Univ. Math. J. 36 (1987), 211–228. [H5] D. Hadwin, Free entropy and approximate equivalence in von Neumann algebras, in: Operator Algebras and Operator Theory (Shanghai, 1997), pp. 111-131, Contemp. Math., vol. 228, Amer. Math. Soc., Providence, 1998. [HKM] D. Hadwin, L. Kaonga, and B. Mathes, Noncommutative continuous functions, J. Korean Math. Soc. 40 (2003), 789–830. [HL] D. Hadwin and D. Larson, Completely rank-nonincreasing linear maps, J. Funct. Anal. 199 (2003), 210–227. [Ha1] P. R. Halmos, Introduction to Hilbert Space and the Theory of Spectral Multiplicity, Chelsea, New York, 1951. [Ha2] P. R. Halmos, Irreducible operators, Michigan Math. J. 15 (1968) 215–223. [Ha3] P. R. Halmos, Ten problems in Hilbert space, Bull. Amer. Math. Soc. 76 (1970), 887–933. [Hal] H. Halpern, Essential central range and selfadjoint commutators in properly infinite von Neumann algebras, Trans. Amer. Math. Soc. 228 (1977), 117–146. [HN] F. Hiai and Y. Nakamura, Closed convex hulls of unitary orbits in von Neumann algebras, Trans. Amer. Math. Soc. 323 (1991), 1–38. [K1] R. V. Kadison, Unitary invariants for representations of operator algebras, Ann. of Math. (2) 66 (1957), 304–379. [K2] R. V. Kadison, Diagonalizing matrices, Amer. J. Math. 106 (1984), 1451–1468. [KK] R. V. Kadison and D. Kastler, Perturbations of von Neumann algebras I: stability of type, Amer. J. Math. 94 (1972), 38–54. [KR] R. V. Kadison and J. R. Ringrose, Fundamentals of the Theory of Operator Algebras I, Graduate Studies in Mathematics, vol. 15, Amer. Math. Soc., Providence, 1997. [KS] R. V. Kadison and I. M. Singer, Three test problems in operator theory, Pacific J. Math. 7 (1957), 1101–1106. [Kaf1] V. Kaftal, On the theory of compact operators in von Neumann algebras I, Indiana Univ. Math. J. 26 (1977), 447–457. 
[Kaf2] V. Kaftal, Type decomposition for von Neumann algebra embeddings, J. Funct. Anal. 98 (1991), 169–193. [Ka1] I. Kaplansky, A theorem on rings of operators, Pacific J. Math. 1 (1951), 227–232. [Ka2] I. Kaplansky, Infinite Abelian Groups, The University of Michigan Press, Ann Arbor, 1955. [Ko] K. Kodaka, $C^{*}$-algebras that are isomorphic after tensoring and full projections, Proc. Edinb. Math. Soc. (2) 47 (2004), 659–668. [Ku1] M. Kutkut, Weak closure of the unitary orbit of contractions, Acta Math. Hungar. 46 (1985), 255–263. [Ku2] M. Kutkut, Weak closure of the unitary orbits of operators having a convex spectral set, J. Pure Appl. Sci. 18 (1985), 271–281. [MM] L. W. Marcoux and G. J. Murphy, Unitarily-invariant linear spaces in $C^{*}$-algebras, Proc. Amer. Math. Soc. 126 (1998), 3597–3605. [M] K. Matsumoto, Strongly stable factors, crossed products and property $\Gamma$, Tokyo J. Math. 11 (1988), 247–268. [McD] D. McDuff, Central sequences and the hyperfinite factor, Proc. London Math. Soc. (3) 21 (1970), 443–461. [MvN] F. J. Murray and J. von Neumann, On rings of operators IV, Ann. of Math. (2) 44 (1943), 716–808. [vN1] J. von Neumann, Zur Algebra der Funktionaloperatoren und Theorie der normalen Operatoren, Math. Ann. 102 (1929), 370–427. [vN2] J. von Neumann, Approximative properties of matrices of high finite order, Portugaliae Math. 3 (1942), 1–62. [Pl] J. Plastiras, $C^{*}$-algebras isomorphic after tensoring, Proc. Amer. Math. Soc. 66 (1977), 276–278. [P1] S. Popa, Notes on Cartan subalgebras in type $\text{II}_{1}$ factors, Math. Scand. 57 (1985), 171–188. [P2] S. Popa, Free-independent sequences in type $\text{II}_{1}$ factors and related problems, in: Recent Advances in Operator Algebras (Orléans, 1992), Astérisque no. 232 (1995), 187–202. [RR] H. Radjavi and P. Rosenthal, The set of irreducible operators is dense, Proc. Amer. Math. Soc. 21 (1969), 256. [Sa] S. Sakai, The Theory of $W^{*}$-Algebras, lecture notes, Yale University, 1962. [Sh] J. 
Shen, Singly generated $\text{II}_{1}$ factors, arXiv preprint math.OA/0511327. [S1] D. Sherman, Unitary orbits of normal operators in von Neumann algebras, J. Reine Angew. Math. 605 (2007), 95–132. [S2] D. Sherman, On the dimension theory of von Neumann algebras, Math. Scand. 101 (2007), 123–147. [S3] D. Sherman, Locally inner automorphisms of operator algebras, arXiv preprint math.OA/0609735. [S4] D. Sherman, Remarks on automorphisms of ultrapowers of $\text{II}_{1}$ factors, in preparation. [Sho] K. Shoda, Einige Sätze über Matrizen, Japan J. Math. 13 (1936), 361–365. [T1] M. Takesaki, Theory of Operator Algebras I, Springer-Verlag, Berlin, 1979. [T2] M. Takesaki, Theory of Operator Algebras III, Springer-Verlag, Berlin, 2002. [To] J. Tomiyama, Generalized dimension function for $W^{*}$-algebras of infinite type, Tôhoku Math. J. (2) 10 (1958), 121–129. [U] H. Umegaki, Conditional expectation in an operator algebra II, Tôhoku Math. J. (2) 8 (1956), 86–100. [V] D. Voiculescu, A non-commutative Weyl-von Neumann theorem, Rev. Roumaine Math. Pures Appl. 21 (1976), 97–113. [We] H. Weyl, Über beschränkte quadratische Formen, deren Differenz vollstetig ist, Rend. Circ. Mat. Palermo 27 (1909), 373–392. [W] F. B. Wright, A reduction for algebras of finite type, Ann. of Math. (2) 60 (1954), 560–570. [Z] L. Zsidó, The norm of a derivation in a $W^{*}$-algebra, Proc. Amer. Math. Soc. 38 (1973), 147–150.
Second generation planet formation in NN Serpentis?

M. Völschow (1), R. Banerjee (1), and F. V. Hessman (2)

(1) Hamburger Sternwarte, Universität Hamburg, Gojenbergsweg 112, 21029 Hamburg, Germany
(2) Institut für Astrophysik, Universität Göttingen, Friedrich-Hund-Platz 1, 37077 Göttingen, Germany

Offprint requests: [email protected]

Key words: planets and satellites: formation – planets and satellites: dynamical evolution and stability – planet-star interactions – stars: binaries: close – stars: AGB and post-AGB – stars: planetary systems

In this paper, we study the general impact of stellar mass-ejection events on planetary orbits in post-common envelope binaries with circumbinary planets like those around NN Serpentis. We discuss a set of simple equations that determine upper and lower limits for orbital expansion and investigate the effect of initial eccentricity. We deduce the range of possible semi-major axes and initial eccentricity values of the planets prior to the common-envelope event. In addition to spherically symmetric mass-ejection events, we consider planetary dynamics under the influence of an expanding disk. We suggest that, in order to have survived, the present planets in NN Ser must have had semi-major axes $\,{}^{>}_{\sim}\,10~{}\rm{au}$ and high eccentricities, which is in conflict with current observations. Consequently, we argue that these planets were not formed together with their hosting stellar system, but rather originated from the fraction of matter of the envelope that remained bound to the binary.
According to the cooling age of the white dwarf primary of $10^{6}~{}\rm{yr}$, the planets around NN Ser might be the youngest known so far and open up a wide range of further study of second generation planet formation.

1 Introduction

The last two decades have seen a revolution in our knowledge of the properties and types of extrasolar planetary systems. Seemingly exotic objects like hot Jupiters, super-Earths, pulsar planets, and circumbinary planets have dramatically expanded our understanding of planet formation. A particular challenge is the origin of circumbinary planets in post-common envelope binary (PCEB) systems (Perets, 2010): the predecessor systems are moderately close binaries with semi-major axes in the range of $\sim 1~{}\rm{au}$ which, like Kepler 47 (Orosz et al., 2012), can contain circumbinary planets. In PCEB systems, the common-envelope phase caused by the entry of the secondary star into the red giant envelope of the evolved primary redistributed the angular momentum budget of the stellar system and resulted in the formation of a close binary with orbital periods of just hours or days. Most of the primary’s former mass is ejected because of the transfer of angular momentum and energy to the envelope on a time scale of weeks or months (Ivanova et al., 2012). To date, a handful of PCEB systems have provided evidence for the presence of planets, despite their dramatic dynamical evolutionary paths (Horner et al., 2013). At the moment, the surest case is NN Ser, whose eclipse-timing variations can be explained by the presence of a planetary system consisting of two massive Jovian planets with roughly $7$ and $2\,{\rm~{}M}_{Jup}$, orbiting their hosting binary at distances of $3~{}\rm{au}$ and $5~{}\rm{au}$, respectively (Beuermann et al., 2010). The dynamical stability of the first fitted planetary configurations (Beuermann et al., 2010) was investigated with detailed N-body simulations by Funk et al. (2011) and Horner et al.
(2012b) who found that some were unstable, whereas others were dynamically feasible. In the first fit, the outer planet’s eccentricity was set to zero. Horner et al. (2012b) did another analysis of the observational data but also varied the outer planet’s eccentricity and obtained a new set of planetary parameters. In their study of the system’s stability, most of their fits turned out to be unstable after short periods of time, which initiated some controversy about the system’s actual architecture. Recent dynamical analysis of other candidates such as HW Virginis (Horner et al., 2012a) or HU Aquarii (Wittenmyer et al., 2012) also found high orbital instability in the proposed systems. Beyond pure orbital fitting and stability analysis, intrinsic effects of the binary have been investigated and might also explain the observed transit timing variations (see, e.g., Horner et al., 2012a; Wittenmyer et al., 2012). Recently, Beuermann et al. (2013) used new observations and an equally extensive dynamical study to show that the family of stable solutions can now be constrained to be around a 2:1 resonance and moderate eccentricities (0.22 and 0.14 for the inner and outer planets, respectively).¹ (¹Because the naming nomenclature for exoplanets has not yet been officially determined and several different nomenclatures appear in the literature, we use the names “outer” for the planet first detected and “inner” for the second planet detected; following the nomenclature proposed by Hessman et al. (2010), these would be equivalent to NN Ser (AB)b and NN Ser (AB)c, respectively.) We summarize the results of Beuermann et al. (2013) in Table 1. Even though the orbital structure of the planetary system in NN Ser has been determined, the question of its origin in such an exotic binary system is still unsolved. Other than the very unlikely case of planetary capture, two distinct scenarios are possible.
• The planets formed jointly with the binary and survived the common-envelope event (First Generation Formation). • The planets formed after the common-envelope event. Possible former planets did not survive (Second Generation Formation). The second case is particularly fascinating as it implies a second phase of planet formation initiated by the common-envelope event. Recent studies conducted by Kashi & Soker (2011) have shown that a fraction of 1 to 10 per cent of the envelope’s mass can stay bound to the binary, fall back on it, flatten, and form a circumbinary disc out of which planets can easily form again. If a first generation origin can be ruled out in the NN Ser system, this mechanism would have to be responsible for the present planets, opening up a whole new perspective on extrasolar planets.

2 Simple estimates

We consider a planet with mass $m_{p}$ in an orbit with initial period $P_{i}$, velocity $v_{i}$, eccentricity $e_{i}$, and semi-major axis $a_{i}$. In addition, we neglect all kinds of dissipative effects on the system’s energy and angular momentum budget. As a first approximation, stellar mass-ejection events can be modeled as a simple variation in the gravitational potential of the star, which affects the orbital parameters of any accompanying body in a way that it causes radial drift, changes in eccentricity, or an escape. A similar analysis was carried out by various authors; we refer the reader to Blaauw (1961), Alexander et al. (1976), or Perets (2010).

2.1 Initially circular orbits

Two analytic limiting cases for stellar mass ejection and its effect on the orbiting planet(s) exist, characterized by the timescale of mass loss $\tau$ and the orbital period of the planet: • $\tau\gg P_{i}$ (adiabatic mass loss) • $\tau\ll P_{i}$ (instantaneous mass loss) For both asymptotic regimes, we have summarized the analytic results in Table 2.
In the case of instantaneous mass ejection, a mass-loss factor $\mu\leq 1/2$ causes all of the planets to reach escape velocity and get lost, while for adiabatic mass loss no such boundary exists. What conclusions can be drawn from this for NN Ser? The former binary system itself had a separation of about $1.44~{}\rm{au}$ (Beuermann et al., 2010). Horner et al. (2012b) derived a simple stability criterion from extensive numerical simulations, enabling us to estimate a lower limit for stable planetary orbits in the former binary. The main result of the stability criterion is that configurations with close encounters closer than three Hill radii (which is $\sim\!1.1~{}\rm{au}$ for the secondary) are dynamically not feasible and require the planets to have semi-major axes of at least $\sim\!2.5~{}\rm{au}$. Beuermann et al. (2010) argue that planets in the former NN Ser system should even have had semi-major axes of at least $3.5~{}\rm{au}$, which is bigger than the typical extent of a $2~{}{\rm~{}M}_{\sun}$ red giant star and the evolving CE. These results indicate that the former planets were not engulfed in the evolving common envelope. In addition, the mass of the envelope was much bigger than the total mass of both stellar cores, indicating that the planets did not suffer any significant drift as a result of the orbital shrinking of the binary. Hence, the planets can be approximately treated as decoupled from the binary evolution. According to Beuermann et al. (2010), we have $\mu\!\approx\!0.3$, implying that the original planets would have been lost if mass loss took place on an instantaneous timescale. For adiabatic mass loss and assuming $e_{i}\!\approx\!0$ for both planets, the planets’ original semi-major axes would have expanded by a factor of $\sim\!3.2$, implying $a_{i}<0.3\cdot a_{f}$.
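The two limiting expansion factors for initially circular orbits reduce to closed forms; a minimal numerical sketch (our own, not the paper's code), with $\mu=M_{f}/M_{i}$ the retained mass fraction:

```python
def adiabatic_expansion(mu):
    """tau >> P_i: the orbit expands by a_f/a_i = 1/mu and stays circular."""
    return 1.0 / mu

def instantaneous_expansion(mu):
    """tau << P_i, initially circular orbit: a_f/a_i = mu/(2*mu - 1).
    For mu <= 1/2 the planet reaches escape velocity and is lost."""
    if mu <= 0.5:
        return float("inf")  # unbound orbit
    return mu / (2.0 * mu - 1.0)

mu = 0.3  # approximate retained mass fraction for NN Ser
print(adiabatic_expansion(mu))      # ~3.3, the "factor of ~3.2" regime
print(instantaneous_expansion(mu))  # inf: circular-orbit planets are lost
```

For $\mu\approx 0.3$ the adiabatic factor $1/\mu\approx 3.2$–$3.3$ reproduces the expansion quoted in the text, while the instantaneous case lies below the $\mu=1/2$ survival boundary.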
Therefore, neglecting any mutual gravitational perturbations, the present planets may have orbited the original binary at separations of $a<2~{}\rm{au}$ if other interactions between the planet and the CE can be ruled out as explained in this section. Such close orbits are not dynamically feasible in the former binary. On the other hand, considering an expansion factor of $\sim\!3.2$ for initially circular orbits and a lower limit of $\sim\!3.5~{}\rm{au}$ for stable orbits, we conclude that first generation planetary candidates should now be detected at distances of more than $\sim\!10~{}\rm{au}$, which is obviously greater than the present planets’ proposed semi-major axes.

2.2 Initially elliptical orbits

The orbital parameters of a planet are conserved if mass loss occurs adiabatically. However, for instantaneous mass-ejection they become time-dependent. For a given orbital configuration with semi-major axis $a_{i}$, eccentricity $e_{i}$, and a planet located at a distance $r_{i}$ at a polar angle $\phi_{i}$ at the moment of mass-ejection, the planet’s new eccentricity and semi-major axis are given by $$e_{f}=\frac{1}{\mu}\left(2-\frac{1-e_{i}^{2}}{1+e_{i}\cos\phi_{i}}\right)-1~{};~{}~{}a_{f}=\frac{a_{i}}{\mu}\cdot\frac{1-e_{i}^{2}}{1-e_{f}^{2}}~{}.$$ (1) Consequently, a particular mass-ejection scenario does result in planetary ejection if $$\frac{1}{2\mu}\cdot\left(2-\frac{1-e_{i}^{2}}{1+e_{i}\cos\phi_{i}}\right)=f(\mu,e_{i},\phi_{i})>1~{},$$ (2) which is equivalent to $e_{f}\geq 1$ in Eq. (1). Relation 2 can be directly translated into an escape probability $p_{esc}(\mu,e_{i})$, defined as the fraction of unstable configurations for a given mass-loss factor and given eccentricity, by evaluating $f(\mu,e_{i},\phi_{i})$ for different polar angles $\phi_{i}$ (cf. Fig. 1).
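The escape fraction $p_{esc}(\mu,e_{i})$ can be sketched numerically from Eq. (1) (our own sketch, not the paper's code): a planet is lost when $e_{f}\geq 1$, and each polar angle is weighted by the time the planet spends there, $\mathrm{d}t\propto r^{2}\,\mathrm{d}\phi$ by conservation of angular momentum.

```python
import numpy as np

def e_final(mu, e_i, phi):
    """Post-ejection eccentricity for instantaneous mass loss, Eq. (1)."""
    return (2.0 - (1.0 - e_i**2) / (1.0 + e_i * np.cos(phi))) / mu - 1.0

def escape_probability(mu, e_i, n=200_001):
    """Time-weighted fraction of ejection phases phi for which the planet
    becomes unbound (e_f >= 1); dt ~ r(phi)^2 dphi, r ~ 1/(1 + e cos phi)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n)
    weight = (1.0 + e_i * np.cos(phi)) ** -2
    unbound = e_final(mu, e_i, phi) >= 1.0
    return float(np.sum(weight * unbound) / np.sum(weight))

# mu ~ 0.3 (NN Ser): circular orbits are always lost, and a survival
# window near apastron opens only once e_i exceeds 1 - 2*mu = 0.4.
print(escape_probability(0.3, 0.0))  # 1.0
print(escape_probability(0.3, 0.6))  # < 1: survival possible near apastron
```

The circularization case quoted below Eq. (2) also drops out of `e_final`: for $e_{i}=1-\mu$ an ejection at apastron ($\phi=\pi$) yields $e_{f}=0$.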
In contrast to planets in circular orbits, a planet in an initially elliptical orbit can stay bound to the star even for $\mu<0.5$ if the mass-ejection event happens near apastron, i.e., if the planet moves in a sufficiently eccentric orbit. On the other hand, the planet can escape from the system if it happens to be located near periastron, even for $\mu>0.5$. Another consequence is that the orbit can be circularized for particular configurations: if the mass-ejection event takes place near the apastron, an initial eccentricity of $e_{i}=1-\mu$ will result in a circular orbit and an orbital expansion factor of $a_{f}/a_{i}=2-\mu$. What are the implications for NN Ser? For planets at a distance of a few au we have orbital periods of a few years and consequently $\tau\ll P_{i}$ for a typical CE event (Ivanova et al., 2012). Again, we can neglect drag effects or dissipative interactions between the envelope and the planets, as well as orbital drift due to the evolution of the binary, as explained in the last section. In the case of instantaneous mass loss, it becomes clear from Fig. 1 that a mass-loss factor as dramatic as $\mu\approx 0.3$ requires the planets to have very high initial eccentricities of $e_{i}\approx 0.4$ individually to ensure at least small survival probabilities. However, high original eccentricities for both planets make it even more difficult to obtain stable first-generation solutions for both planets.

3 Gas drag estimates

So far, we have derived analytic results of the orbital parameters for the two asymptotic regimes of slowly and rapidly changing gravitational potentials without any friction effects due to gas drag.
To model the gas ejection event and estimate the effect of gas drag, we chose a self-consistent, exponential density profile $$\rho(r,t)=\begin{cases}\rho_{0}\exp{[\beta(r_{gas}(t))\cdot r]}&0\leq r\leq r_% {gas}(t)\\ 0&r>r_{gas}(t)\end{cases}$$ (3) ($\beta>0$) and a linear velocity profile characterised by the outer rim velocity $v_{max}$ and the initial outer boundary $r_{max,0}$ of the expanding envelope, mainly based on results obtained by Taam & Ricker (2010) and Ricker & Taam (2012) and the Sedov profile for blast wave propagation. For a given mass, $\beta(r_{gas})$ is determined by the total amount of gas $M_{gas}$ via $$M_{gas}=4\pi\rho_{0}\int\limits_{0}^{r_{gas}(t)}r^{2}\exp{(\beta\cdot r)}% \mathrm{d}r~{}~{}~{}.$$ (4) Gas drag effects (see, e.g., Kokubo & Ida, 2000) result in the additional force $$\@vec{F}_{d}=\underbrace{\frac{1}{2}\pi R_{planet}^{2}c_{d}}_{:=c_{p}}\cdot% \rho(r_{p})\cdot|\@vec{u}|\cdot\@vec{u}~{}~{}~{},$$ (5) where $\@vec{u}=\@vec{v}_{gas}-\@vec{v}_{planet}$. A quick upper limit for $|\@vec{F}_{d}|/|\@vec{F}_{g}|$ can be obtained by considering the planet in an initially circular orbit and also assuming $\@vec{v}_{p}\perp\@vec{v}_{gas}$, as well as $|\@vec{v}_{p}|=\sqrt{GM/r_{p}}$ and $|\@vec{u}|^{2}=|\@vec{v}_{p}|^{2}+|\@vec{v}_{max}|^{2}$: $$\frac{|\@vec{F}_{d}|}{|\@vec{F}_{g}|}=\frac{c_{p}\rho(r)[\frac{GM}{r}+|\@vec{v% }_{max}|^{2}]}{\frac{GMm_{p}}{r^{2}}}=\rho_{0}\frac{c_{p}}{m_{p}}e^{\gamma}r_{% p}\left[1+\frac{|\@vec{v}_{max}|^{2}}{|\@vec{v}_{p}|^{2}}\right]~{}~{}~{}.$$ (6) For an initially close planet ($r_{p}\approx 2~{}\rm{au}$), we have $|\@vec{v}_{gas}|^{2}\ll|\@vec{v}_{p}|^{2}$ and $\beta\approx 10$. Using $m_{p}\approx 5~{}{\rm~{}M}_{Jup}$ for a massive Jovian planet, $\rho_{0}=10^{-15}~{}\rm{g}/\rm{cm}^{3}$ for the gas sphere and typical values for the specific drag coefficient like $c_{p}\approx 10^{12}~{}\rm{cm}^{2}$ (Kokubo & Ida, 2000) leads to $|\@vec{F}_{d}|/|\@vec{F}_{g}|\approx 10^{-4}$. 
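For illustration, Eq. (4) can be inverted numerically for the profile parameter $\beta$. The sketch below is ours: it assumes SciPy is available, works in cgs units, and uses the sample values $M_{gas}=1~{\rm M}_{\sun}$, $\rho_{0}=10^{-15}~\rm{g}\,\rm{cm}^{-3}$, $r_{gas}=2~\rm{au}$; read in units of $\rm{au}^{-1}$ it lands near the $\beta\approx 10$ quoted below, though that identification of units is our assumption.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

AU = 1.496e13      # cm
MSUN = 1.989e33    # g

def envelope_mass(beta, rho0, r_gas):
    """Eq. (4): mass of the profile rho0*exp(beta*r) out to r_gas.
    rho0 in g/cm^3, r_gas in au, beta in 1/au; returns grams."""
    integral, _ = quad(lambda r: r**2 * np.exp(beta * r), 0.0, r_gas)
    return 4.0 * np.pi * rho0 * integral * AU**3

def beta_from_mass(m_gas, rho0, r_gas):
    """Invert Eq. (4) for the profile parameter beta (beta > 0 assumed)."""
    lo, hi = 1e-8, 1.0
    while envelope_mass(hi, rho0, r_gas) < m_gas:   # grow bracket until it spans m_gas
        hi *= 2.0
    return brentq(lambda b: envelope_mass(b, rho0, r_gas) - m_gas, lo, hi)

beta = beta_from_mass(1.0 * MSUN, 1e-15, 2.0)
print(beta)   # of order 10 per au for these assumed values
```

Because the exponential rim dominates the mass budget, $\beta$ depends only logarithmically on $M_{gas}$, which is why order-of-magnitude estimates of the drag ratio (6) are insensitive to the exact envelope mass.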
For a distant planet ($r_{p}\approx 20~{}\rm{au}$) we have $\gamma\rightarrow 0$ and can set $|\@vec{u}|\approx|\@vec{v}_{max}|\approx 10~{}\rm{km}/\rm{s}$ so that $|\@vec{F}_{d}|/|\@vec{F}_{g}|\approx 10^{-10}$. These results imply that, for spherically symmetric ejection, geometric gas drag effects are not expected to cause any significant deviations from our solutions even in very massive ejection events. For the density profile discussed in this section, we also conducted a number of simulation runs with different gas velocities and $\mu=0.3$ where we placed a planet in an initially circular orbit between $2.0~{}\rm{au}$ and $7.0~{}\rm{au}$. For $v_{max}>10~{}\rm{km}/\rm{s}$, virtually all planets exceeded the escape velocity. For $1~{}\rm{km}/\rm{s}<v_{max}<10~{}\rm{km}/\rm{s}$, the planets’ semi-major axes expanded at least by a factor of $\sim 3$. As expected, the planets ended up with significantly eccentric orbits. Based on these results, we conclude that it appears to be impossible to produce circumbinary planets resembling those in NN Ser using a spherically symmetric density profile if the planets were not part of the CE and did not interact with it. The envelope densities, and hence the geometric drag forces of the expanding envelope which might have resulted in the retention of the original planets, are too small to cause any significant inward directed drift. 4 Disc-shaped mass-ejection Common-envelope ejection events are, admittedly, only roughly spherically symmetric, since the envelope is given the considerable angular momentum of the secondary star. In fact, hydrodynamical simulations of common-envelope events (see e.g., Ivanova et al., 2012; Taam & Ricker, 2010; Ricker & Taam, 2012) showed a significant amount of mass concentration in the orbital plane of the binary star.
Thus, we conducted a few simulations of planetary dynamics under the influence of a thin expanding disc (see Huré & Hersant (2011) for further information as well as a far more rigorous approach). For $z_{gas}\ll r_{gas}$, $\@vec{r}(r,z,\phi)\approx\@vec{r}(r,\phi)$ and the gravitational potential of the disc can be approximated as $$U(\@vec{r})\approx-2Gz_{gas}\cdot\int\limits_{0}^{r_{gas}}\int\limits_{0}^{2\pi}\frac{\rho(\@vec{r^{\prime}})}{|\@vec{r}-\@vec{r^{\prime}}|}r^{\prime}\mathrm{d}r^{\prime}\mathrm{d}\phi^{\prime}~{}~{}~{}.$$ (7) This integral can easily be approximated by that of a disc split into a number of annulus segments ($\Delta r$, $\Delta\phi$), where each segment’s mean density and position vector are evaluated at its midpoint. The gravitational potentials of thin exponential discs with $M=1~{}{\rm~{}M}_{\sun}$, $\rho_{0}=10^{-15}~{}\rm{g}{\rm~{}cm}^{-3}$, $z_{gas}=0.05~{}\rm{au}$, and different radii are plotted in Fig. 2. All disc potentials feature a prominent potential minimum located at the disc’s outer edge whose depth is predominantly determined by the disc’s extension. Because of the substantially lower volume of the disc, gas densities are expected to be significantly higher and gas drag effects might be an issue. The density profile parameter $\beta$ drops more slowly and the outer rim gas densities are higher. Estimating the effect of friction is complicated, as no analytic result exists for the potential of a finite disc with a non-constant density profile. For a moderately thin disc ($h\approx r_{max}/10$) with a radius of $2~{}\rm{au}$ and an exponential density profile with $\rho_{0}\sim 10^{-15}~{}\rm{g}{\rm~{}cm}^{-3}$, numerical results give an outer rim density of $\sim 3\cdot 10^{-6}~{}\rm{g}{\rm~{}cm}^{-3}$, which is only one or two orders of magnitude higher than the outer rim density of a comparable gas sphere. Consequently, geometric gas drag effects are still negligible.
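The annulus-segment quadrature described above can be sketched as follows. The grid resolution, the toy density profile, and the far-field comparison against a point mass are our own choices, meant only to show the discretization behaving sensibly.

```python
import numpy as np

G = 6.674e-8   # gravitational constant, cgs
AU = 1.496e13  # cm

def disc_potential(r_eval, rho, r_gas, z_gas, n_r=400, n_phi=400):
    """Midplane potential of a thin disc, Eq. (7): sum over annulus
    segments with density and position evaluated at each segment midpoint.
    The field point lies on the x-axis at radius r_eval (lengths in cm)."""
    dr, dphi = r_gas / n_r, 2.0 * np.pi / n_phi
    r_mid = (np.arange(n_r) + 0.5) * dr
    phi_mid = (np.arange(n_phi) + 0.5) * dphi
    rr, pp = np.meshgrid(r_mid, phi_mid, indexing="ij")
    dist = np.sqrt(r_eval**2 + rr**2 - 2.0 * r_eval * rr * np.cos(pp))
    return -2.0 * G * z_gas * np.sum(rho(rr) * rr * dr * dphi / dist)

def disc_mass(rho, r_gas, z_gas, n_r=400):
    """Total mass of the same discretized disc (full thickness 2*z_gas)."""
    dr = r_gas / n_r
    r_mid = (np.arange(n_r) + 0.5) * dr
    return 2.0 * z_gas * 2.0 * np.pi * np.sum(rho(r_mid) * r_mid * dr)

r_gas, z_gas = 2.0 * AU, 0.05 * AU
rho = lambda r: 1e-8 * np.exp(5.0 * r / r_gas)  # toy rim-weighted profile (assumed)
far = 50.0 * r_gas  # far from the disc it must look like a point mass
ratio = disc_potential(far, rho, r_gas, z_gas) / (-G * disc_mass(rho, r_gas, z_gas) / far)
print(ratio)  # close to 1.0
```

Inside the disc the same sum requires care near the field point (the $1/|\@vec{r}-\@vec{r}'|$ kernel is nearly singular), which is one reason the potential minimum at the rim is the robust feature of Fig. 2 rather than the exact interior values.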
When this model is applied to NN Ser, an initially close planet ($a<5~{}\rm{au}$) starts to drift radially inwards onto the expanding disc as the potential minimum approaches. Entering the disc, the planet is exposed to gas densities in the range of $\approx 10^{-6}~{}\rm{g}{\rm~{}cm}^{-3}$. While the disc continues its expansion, the planet slows down because of potential effects (see Fig. 2) and suffers from outward-directed gravitational forces inside the disc. As a consequence, the planet increases its radial distance from the disc’s center and is carried towards the disc’s edge by gravitational forces again. In fact, most of our simulations of planetary dynamics under the influence of a massive expanding disc resulted in the ejection of the planet. In this approach, cylindrical symmetry helps the planets to become unbound. 5 Conclusions and discussion We analyzed the general impact of mass loss on both circular and eccentric orbits. In NN Ser, simple geometric arguments and semi-analytic stability criteria suggest that dissipative interactions between the common envelope and the planets as well as orbital variations due to the evolution of the hosting binary can be neglected. Rather, our analytic work implies that the dramatic mass loss that led to the formation of the close binary system NN Ser should have resulted in a significant orbital expansion, if not loss, of the planets, especially for initially eccentric orbits. As a consequence, a first generation scenario demands that the original planets had highly elliptical orbits and that the present planets should have semi-major axes of more than $10~{}\rm{au}$. The latter is clearly in conflict with the current planets’ inferred orbital parameters. Furthermore, we examined the effect of cylindrical symmetry on the planetary dynamics and found that this scenario also promotes the planets’ ejection.
Considering all of these effects, we argue that the planets around NN Ser could not have been formed together with the hosting binary star. Hence, these planets should have formed out of the back-falling material of the CE (e.g., Kashi & Soker, 2011), i.e. they should be second generation planets (Perets, 2010). Given a cooling age of about $10^{6}~{}\rm{yr}$ (see, e.g., Beuermann et al., 2010) these planets are then the youngest planets known so far. Further efforts are essential in order to constrain orbital parameters of the planets, to get a better idea of the common envelope ejection phase, and to estimate possible planet-envelope interactions and ascertain the timescale for mean-motion resonance locking in this particular system. In any case, NN Ser should cast much light on this fascinating form of planetary evolution. Acknowledgements. We would like to thank Stefan Dreizler for valuable discussions. We would also like to thank the anonymous referee for useful advice. We gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft via Research Training Group (GrK) 1351 Extrasolar Planets and their Host Stars. References Alexander et al. (1976) Alexander, M. E., Chau, W. Y., & Henriksen, R. N. 1976, ApJ, 204, 879 Beuermann et al. (2013) Beuermann, K., Dreizler, S., & Hessman, F. V. 2013, ArXiv e-prints Beuermann et al. (2010) Beuermann, K., Hessman, F. V., Dreizler, S., et al. 2010, A&A, 521, L60 Blaauw (1961) Blaauw, A. 1961, Bull. Astron. Inst. Netherlands, 15, 265 Funk et al. (2011) Funk, B., Eggl, S., Gyergyovits, M., Schwarz, R., & Pilat-Lohinger, E. 2011, in EPSC-DPS Joint Meeting 2011, 1725 Hessman et al. (2010) Hessman, F. V., Dhillon, V. S., Winget, D. E., et al. 2010, ArXiv e-prints Horner et al. (2012a) Horner, J., Hinse, T. C., Wittenmyer, R. A., Marshall, J. P., & Tinney, C. G. 2012a, MNRAS, 427, 2812 Horner et al. (2012b) Horner, J., Wittenmyer, R. A., Hinse, T. C., & Tinney, C. G. 2012b, MNRAS, 425, 749 Horner et al. 
(2013) Horner, J., Wittenmyer, R. A., Tinney, C. G., et al. 2013, ArXiv e-prints Huré & Hersant (2011) Huré, J.-M. & Hersant, F. 2011, A&A, 531, A36 Ivanova et al. (2012) Ivanova, N., Justham, S., Chen, X., et al. 2012, ArXiv e-prints Kashi & Soker (2011) Kashi, A. & Soker, N. 2011, MNRAS, 417, 1466 Kokubo & Ida (2000) Kokubo, E. & Ida, S. 2000, in Astronomical Society of the Pacific Conference Series, Vol. 213, Bioastronomy 99, ed. G. Lemarchand & K. Meech, 51 Orosz et al. (2012) Orosz, J. A., Welsh, W. F., Carter, J. A., et al. 2012, Science, 337, 1511 Perets (2010) Perets, H. B. 2010, ArXiv e-prints Ricker & Taam (2012) Ricker, P. M. & Taam, R. E. 2012, ApJ, 746, 74 Taam & Ricker (2010) Taam, R. E. & Ricker, P. M. 2010, New A Rev., 54, 65 Wittenmyer et al. (2012) Wittenmyer, R. A., Horner, J., Marshall, J. P., Butters, O. W., & Tinney, C. G. 2012, MNRAS, 419, 3258
Zero-Energy Self-Similar Solutions Describing Singularity Formation In The Nonlinear Schrodinger Equation In Dimension $N=3$ William C. Troy Abstract In dimension $N=3$ the cubic nonlinear Schrodinger (NLS) equation has solutions which become singular, i.e. at a spatial point they blow up to infinity in finite time. In 1972 Zakharov famously investigated finite time singularity formation in the cubic nonlinear Schrodinger equation as a model for spatial collapse of Langmuir waves in plasma, the most abundant form of observed matter in the universe. Zakharov assumed that (NLS) blow up of solutions is self-similar and radially symmetric, and that singularity formation can be modeled by a solution of an associated self-similar, complex ordinary differential equation (ODE). A parameter $a>0$ appears in the ODE, and the dependent variable, $Q,$ satisfies $(Q(0),Q^{\prime}(0))=(Q_{0},0),$ where $Q_{0}>0.$ A fundamentally important step towards putting the Zakharov model on a firm mathematical footing is to determine, when $N=3,$ whether values $a>0$ and $Q_{0}>0$ exist such that $Q$ also satisfies the physically important ‘zero-energy’ integral constraint. Since 1972 this has remained an open problem. Here, we resolve this issue by proving that for every $a>0$ and $Q_{0}>0,$ $Q$ satisfies the ‘zero-energy’ integral constraint. AMS subject classifications. 34A12, 34C05, 34L30, 35J10, 78A60 Keywords. nonlinear Schrodinger equation, plasma, self-similar, profile equation, singular 1 Introduction The nonlinear Schrodinger system $$\displaystyle{\rm i}\psi_{t}+\Delta\psi+\psi|\psi|^{2}=0,~{}~{}t>0,$$ (1.1) $$\displaystyle\psi(x,0)=u_{0}(x),~{}~{}x\in R^{N}$$ (1.2) is often considered to be the simplest model of singularity formation in nonlinear dispersive systems [9, 15]. When dimension $N\in[2,4)$ there exists a wide class of initial conditions for which the solution of (1.1)-(1.2) forms a singularity at time $T>0,$ i.e.
as $t\to T^{-}$ the solution becomes infinite at a single spatial point where a growing and increasingly narrow peak forms [9, 16]. When $N=2$ problem (1.1)-(1.2) arises in modeling singularity formation in nonlinear optical media, and finite time blowup of a solution corresponds to an extreme increase in field amplitude due to ‘self-focusing’ [3, 7, 9]. In dimension $N=3$ problem (1.1)-(1.2) arises as the subsonic limit of the Zakharov model for Langmuir waves in plasma, and singularity formation is referred to as wave ‘collapse’ [9, 22, 23]. Plasma, often described as the fourth state of matter, consists mainly of charged particles (ions) and/or electrons, and is the most abundant form of observed matter in the universe [4]. On August 22, 1879 the existence of plasma was first reported by Sir William Crookes, who identified it as “radiant matter” in a lecture to the British Association for the Advancement of Science. In 1928 Langmuir introduced the word plasma in his studies of plasma waves, i.e. oscillations in the density of ‘ionized gas’ [11]. Over the general range $2<N<4,$ multiple investigations of (1.1)-(1.2) have led to new understandings of singularity formation both from the numerical and analytical point of view [1, 2, 5, 6, 9, 10, 12, 13, 14, 15, 17, 18, 19, 20, 23]. In particular, when $N=3,$ the 1972 and 1984 investigations by Zakharov [22, 23], the 1986 numerical study by McLaughlin et al [14], and the 1988 numerical investigations of LeMesurier et al [12, 13] and Landman et al [10], demonstrated that, for both symmetric and asymmetric initial data, singularity formation occurs in a spherically symmetric (i.e. $r=|x|$) and self-similar manner.
Thus, for $N\in(2,4),$ these authors modeled singularity formation at $r=0$ and time $t=T,$ by an exact solution of the form $$\displaystyle\psi={\frac{1}{{\sqrt{2a(T-t)}}}}\exp\left(i\tilde{\theta}+{\frac{i}{2a}}\ln\left({\frac{T}{T-t}}\right)\right)Q\left({\frac{r}{{\sqrt{2a(T-t)}}}}\right),$$ (1.3) where $a>0$ and $\tilde{\theta}$ (fixed phase shift) are constants. Let $\zeta={\frac{r}{{\sqrt{2a(T-t)}}}}.$ Then $\zeta\to\infty$ as $t\to T$ from below, and $Q(\zeta)$ solves the profile equation $$\displaystyle Q^{\prime\prime}+{\frac{N-1}{\zeta}}Q^{\prime}-Q+ia(Q+\zeta Q^{\prime})+Q|Q|^{2}=0,~{}~{}\zeta\geq 0,$$ (1.4) $$\displaystyle Q(0)=Q_{0},~{}Q^{\prime}(0)=0,$$ (1.5) where $2<N<4$ is spatial dimension, $Q_{0}$ is real, $$\displaystyle Q(\infty)=0,$$ (1.6) and energy is zero, i. e. $$\displaystyle H(Q)=\int_{0}^{\infty}\eta^{N-1}\left(|Q^{\prime}(\eta)|^{2}-{\frac{1}{2}}|Q(\eta)|^{4}\right)d\eta=0.$$ (1.7) Here, $Q_{0}$ and $a>0$ are constants. Zakharov [23] and Hastings and McLeod [8] point out that, because of symmetry and the fact that (1.4) is invariant under a rotation $Q\Rightarrow Qe^{i\theta},$ it is justified to assume that $Q_{0}$ is real and positive. Previous Results And Predictions. In 1988 LeMesurier et al [12, 13] investigated behavior of solutions when $N=3.$ They analyzed the rate at which $Q(\zeta)\to 0$ as $\zeta\to\infty$ for solutions satisfying (1.4)-(1.5)-(1.6)-(1.7). 
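The profile problem (1.4)-(1.5) is straightforward to integrate numerically. The sketch below is ours and assumes SciPy: the series start near $\zeta=0$, the integration range, and the parameter pair $(Q_{0},a)\approx(1.885,0.918)$ of Prediction (I) are illustrative choices, and the observed decay of $|Q|$ merely illustrates the behavior $Q(\infty)=0$.

```python
import numpy as np
from scipy.integrate import solve_ivp

N, a, Q0 = 3, 0.918, 1.885   # dimension and an illustrative (Q0, a) pair

def rhs(z, y):
    """Profile equation (1.4) with Q = y[0] + i*y[1], Q' = y[2] + i*y[3]."""
    Q, dQ = y[0] + 1j * y[1], y[2] + 1j * y[3]
    d2Q = -(N - 1) / z * dQ + Q - 1j * a * (Q + z * dQ) - Q * abs(Q) ** 2
    return [y[2], y[3], d2Q.real, d2Q.imag]

# Series start near zeta = 0: with Q'(0) = 0, balancing terms gives
# Q''(0) = Q0*(1 - i*a - Q0**2)/N, which removes the apparent (N-1)/zeta
# singularity of the first-derivative term.
z0 = 1e-6
c = Q0 * (1.0 - 1j * a - Q0**2) / N
Q_start = Q0 + 0.5 * c * z0**2
y0 = [Q_start.real, Q_start.imag, (c * z0).real, (c * z0).imag]

sol = solve_ivp(rhs, (z0, 30.0), y0, rtol=1e-8, atol=1e-10)
amp = np.hypot(sol.y[0], sol.y[1])
print(amp[0], amp[-1])   # the amplitude |Q| decays along the integration
```

Because both linearized modes $Q_{1},Q_{2}$ decay, the initial value problem is numerically stable in the outward direction, which is why such shooting computations are feasible on finite intervals.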
Linearizing (1.4) around the constant solution $Q=0$ gives $$\displaystyle Q^{\prime\prime}+{\frac{2}{\zeta}}Q^{\prime}-Q+ia(Q+\zeta Q^{\prime})=0,~{}~{}0<\zeta<\infty.$$ (1.8) They show that (1.8) has independent solutions $Q_{1}$ and $Q_{2}$ which satisfy $$\displaystyle Q_{1}\sim\zeta^{-1-{\frac{i}{a}}}~{}~{}{\rm and}~{}~{}Q_{2}\sim\zeta^{-2+{\frac{i}{a}}}e^{-ia{\frac{\zeta^{2}}{2}}}~{}{\rm as}~{}\zeta\to\infty.$$ (1.9) LeMesurier et al [12, 13] prove that a solution of (1.4)-(1.5) satisfies zero-energy condition (1.7) only if, for some $k\neq 0,$ $$\displaystyle Q(\zeta)\sim kQ_{1}(\zeta)~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (1.10) Their numerical investigation led to Prediction (I) [12, 13, 19] When $N=3$ a wide range of initial conditions exist such that the solution of (1.1)-(1.2) asymptotically approaches (as $t\to T^{-}$) a self-similar solution of (1.4)-(1.5) when $(Q_{0},a)\approx(1.885,0.918).$ The solution satisfies $Q(\infty)=0,$ the zero-energy condition (1.7), and its profile $|Q|$ is monotonically decreasing. In 1990 Wang [21] also investigated the behavior of solutions when $N=3.$ He proved that, for each $a>0$ and $Q_{0}>0,$ the solution of (1.4)-(1.5) satisfies $Q(\infty)=0.$ However, he did not analyze the number of oscillations in the profile, $|Q|,$ nor did he determine whether any solution satisfies the physically important zero-energy condition (1.7). In 1995 Kopell and Landman [9] proved existence of solutions of (1.4)-(1.5)-(1.6)-(1.7) when $N>2$ is exponentially close to $N=2.$ In their 1999 book, Sulem and Sulem [19] gave an extensive summary of numerical and theoretical results, and described physical relevance of solutions of problem (1.4)-(1.5)-(1.6)-(1.7). In 2000 Budd, Chen and Russell [2] obtained further results.
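The leading-order behavior of $Q_{1}$ in (1.9) can be checked symbolically: inserting $Q=\zeta^{-1-i/a}$ into (1.8) leaves only a residual of relative order $\zeta^{-2}$ as $\zeta\to\infty$. A SymPy sketch of this check (the use of SymPy is our choice):

```python
import sympy as sp

zeta, a = sp.symbols("zeta a", positive=True)
s = -1 - sp.I / a                      # exponent of Q_1 in (1.9)
Q = zeta**s

# Substitute into the linearized equation (1.8):
residual = sp.expand(
    sp.diff(Q, zeta, 2) + 2 / zeta * sp.diff(Q, zeta) - Q
    + sp.I * a * (Q + zeta * sp.diff(Q, zeta))
)
# All O(zeta**s) terms cancel; what remains is O(zeta**(s-2)), i.e. smaller
# by a relative factor zeta**(-2) as zeta -> infinity:
leading = sp.simplify(residual / zeta**(s - 2))
print(leading)   # a constant s*(s+1) = i/a - 1/a**2, independent of zeta
```

The cancellation of the $\zeta^{s}$ terms is exactly the condition $ia(1+s)=1$ that fixes the exponent $s=-1-i/a$.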
Their numerical study led to Prediction (II) [2] When $2<N<4$ there exists a countably infinite set of multi-bump solutions of (1.4)-(1.5)-(1.6)-(1.7), and $n_{a},$ the number of oscillations of the profile $|Q|,$ satisfies $n_{a}\to\infty$ as $a\to 0^{+}.$ Furthermore, their numerical experiments demonstrate the important role of multi-bump solutions in the formation of singularities in solutions of problem (1.1)-(1.2). Budd et al [2] also derived basic properties of solutions of (1.4)-(1.5). We will make use of three of their results. First, they showed that, for any $N\in(2,4),$ $Q_{0}>0$ and $a>0,$ the solution of (1.4)-(1.5) exists for all $\zeta\geq 0,$ and an $M>0$ exists such that $$\displaystyle|\zeta Q(\zeta)|\leq M~{}~{}{\rm and}~{}~{}|\zeta^{\alpha}Q^{\prime}(\zeta)|\leq M,~{}~{}\zeta\geq 0,$$ (1.11) where $0<\alpha<N-2$ if $2<N<3$, and $\alpha=1$ if $3\leq N<4.$ Second, they showed that, if $2<N<4,$ then a solution of (1.4)-(1.5)-(1.6) satisfies $$\displaystyle|\zeta Q^{\prime}+Q|^{2}+{\frac{1}{2}}\zeta^{2}|Q|^{4}-\zeta^{2}|Q|^{2}=|Q(0)|^{2}-\int_{0}^{\zeta}s|Q(s)|^{4}ds,~{}~{}\zeta\geq 0.$$ (1.12) Third, when $2<N<4$ they proved that a solution of (1.4)-(1.5)-(1.6) satisfies $$\displaystyle H(Q)=0\iff{\Bigg{|}}\zeta Q^{\prime}+\left(1+{\frac{i}{a}}\right)Q{\Bigg{|}}\to 0~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (1.13) Note that the first inequality in (1.11) implies that $Q(\infty)=0$ for all choices of $Q_{0}>0$ and $a>0,$ which is consistent with the 1990 Wang [21] result. Subsequently, in 2002 and 2003 Rottschafer and Kaper [17, 18] extended the Kopell et al [9] and Budd et al [2] results by proving existence of families of multi-bump solutions of (1.4)-(1.5)-(1.6)-(1.7) when $N>2$ is algebraically close to $N=2.$ Goals. In this paper we focus on dimension $N=3$ and analyze qualitative behavior of solutions of problem (1.4)-(1.5). 
Since the 1984-1988 pioneering investigations by Zakharov [23], McLaughlin et al [14], LeMesurier et al [12, 13] and Landman et al [10], two important theoretical problems have remained unresolved: Problem I. Do $Q_{0}>0$ and $a>0$ exist such that the solution of (1.4)-(1.5)-(1.6) satisfies zero-energy condition (1.7)? Problem II. If $Q_{0}>0$ and $a>0$ exist such that the solution of (1.4)-(1.5)-(1.6) satisfies (1.7), can we determine the number of oscillations (i.e. bumps) of $|Q|$? In order to put previous numerical predictions on a firm theoretical foundation, it is first necessary to determine the maximal range of values $Q_{0}>0$ and $a>0$ such that zero-energy condition (1.7) is satisfied. Thus, our main goal is to resolve Problem I. For this we prove Theorem 1.1 Let $N=3.$ For each $Q_{0}>0$ and $a>0$ the solution of (1.4)-(1.5) satisfies $$\displaystyle Q(\zeta)\neq 0~{}~{}\forall\zeta\geq 0~{}~{}{\rm and}~{}~{}H(Q)=0.$$ (1.14) Discussion (1) Previous numerical computations [2, 10, 12, 13, 14, 23] were done on finite intervals, hence it is not clear that they correspond to solutions of (1.4)-(1.5) which, for some $Q_{0}>0$ and $a>0,$ satisfy the physically important zero-energy condition (1.7) at $\zeta=\infty.$ Theorem 1.1 resolves this important issue since it is a global result which guarantees that zero-energy condition (1.7) holds for all choices $Q_{0}>0$ and $a>0.$ The next step in putting numerical predictions on a firm theoretical footing is to investigate Problem II and precisely determine shapes of profiles of solutions of (1.4)-(1.5)-(1.6)-(1.7) as $Q_{0}>0$ and $a>0$ vary. This will be the object of future studies. (2) The proof of Theorem 1.1 is given in Section 2. 2 Proof of Theorem 1.1 The first step of our proof of Theorem 1.1 is to put problem (1.4)-(1.5) into polar form.
For this we substitute $Q=\rho e^{i\theta}$ into (1.4)-(1.5), and obtain $$\displaystyle\rho^{\prime\prime}+{\frac{2}{\zeta}}\rho^{\prime}=\rho\left((\theta^{\prime})^{2}+a\zeta\theta^{\prime}+1-\rho^{2}\right),$$ (2.1) $$\displaystyle\theta^{\prime\prime}+{\frac{2}{\zeta}}\theta^{\prime}+2{\frac{\rho^{\prime}}{\rho}}\theta^{\prime}=-a{\frac{(\zeta\rho)^{\prime}}{\rho}},$$ (2.2) $$\displaystyle\rho(0)=\rho_{0}>0,~{}\rho^{\prime}(0)=0,~{}\theta(0)=0,~{}\theta^{\prime}(0)=0,$$ (2.3) where $\rho_{0}>0$ and $a>0.$ Combining the zero energy criterion (1.13) with (1.14) and the fact that $Q=\rho e^{i\theta},$ we conclude that the proof of Theorem 1.1 is complete if we show that $$\displaystyle\rho(\zeta)>0~{}~{}\forall\zeta\geq 0,~{}~{}\lim_{\zeta\to\infty}\rho(\zeta)=0,$$ (2.4) and $$\displaystyle\lim_{\zeta\to\infty}\left(\left(\left(\zeta\rho\right)^{\prime}\right)^{2}+\left(\zeta\rho\theta^{\prime}\right)^{2}+{\frac{2\zeta\rho^{2}\theta^{\prime}}{a}}+{\frac{\rho^{2}}{a^{2}}}\right)=0.$$ (2.5) Our goal in the remainder of this section is to prove that properties (2.4)-(2.5) hold. For this we develop seven auxiliary Lemmas in which we prove key qualitative properties of solutions which will allow us to prove (2.4)-(2.5). In particular, these technical results show that, for each $\rho_{0}>0$ and $a>0,$ there exists $M>0$ such that the solution of initial value problem (2.1)-(2.2)-(2.3) satisfies $$\displaystyle 0<\zeta\rho\leq M~{}~{}{\rm and}~{}~{}0\leq|\zeta\rho\theta^{\prime}|\leq M~{}~{}\forall\zeta\geq 0,~{}~{}\lim_{\zeta\to\infty}(\zeta\rho)^{\prime}=0~{}~{}{\rm and}~{}~{}\lim_{\zeta\to\infty}\theta^{\prime}=0.$$ (2.6) It is easily verified that properties (2.4)-(2.5) follow from (2.6).
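The passage to polar form can be verified symbolically: substituting $Q=\rho e^{i\theta}$ into (1.4) and separating real and imaginary parts must reproduce (2.1) and (2.2). A SymPy check of this algebra (SymPy assumed available):

```python
import sympy as sp

zeta, a = sp.symbols("zeta a", positive=True)
rho = sp.Function("rho")(zeta)
theta = sp.Function("theta")(zeta)
Q = rho * sp.exp(sp.I * theta)
dr, dth = sp.diff(rho, zeta), sp.diff(theta, zeta)

# Left-hand side of the profile equation (1.4) with N = 3 and |Q|^2 = rho^2:
lhs = (sp.diff(Q, zeta, 2) + 2 / zeta * sp.diff(Q, zeta) - Q
       + sp.I * a * (Q + zeta * sp.diff(Q, zeta)) + Q * rho**2)

# Real part, i.e. (2.1), and rho times the imaginary part, i.e. (2.2):
re_part = (sp.diff(rho, zeta, 2) + 2 / zeta * dr
           - rho * (dth**2 + a * zeta * dth + 1 - rho**2))
im_part = (rho * sp.diff(theta, zeta, 2) + 2 / zeta * rho * dth
           + 2 * dr * dth + a * (rho + zeta * dr))

check = sp.simplify(lhs - (re_part + sp.I * im_part) * sp.exp(sp.I * theta))
print(check)  # -> 0
```

Multiplying the imaginary part through by $\rho$ clears the $\rho'/\rho$ denominators of (2.2), which is harmless on any interval where $\rho>0$.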
Lemma 2.1 Let $\rho_{0}>0$ and $a>0.$ There is an $M>0$ such that the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle 0\leq\zeta\rho\leq M,~{}0\leq|\zeta\rho^{\prime}|\leq M~{}{\rm and}~{}~{}0\leq|\zeta\rho\theta^{\prime}|\leq M~{}~{}\forall\zeta\geq 0.$$ (2.7) Proof. It follows from (1.11), combined with the fact that $Q=\rho e^{i\theta},$ that $$\displaystyle 0\leq\zeta\rho\leq M~{}~{}{\rm and}~{}~{}0\leq\left(\zeta\rho^{\prime}\right)^{2}+\left(\zeta\rho\theta^{\prime}\right)^{2}\leq M^{2}~{}~{}\forall\zeta\geq 0.$$ (2.8) Property (2.7) follows immediately from (2.8). This completes the proof. Remark. The second property in (2.4) follows from first property in (2.8). Lemma 2.2 Let $\rho_{0}>0$ and $a>0.$ Then the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle\rho(\zeta)>0~{}~{}\forall\zeta\geq 0.$$ (2.9) Remark. It follows from (2.9) that $|Q(\zeta)|=\rho>0~{}~{}\forall\zeta\geq 0,$ hence $Q(\zeta)\neq 0~{}~{}\forall\zeta\geq 0.$ This proves the first property in (2.4), and also the first property in (1.14). Proof. 
Suppose, for contradiction, that $\bar{\zeta}>0$ exists such that $$\displaystyle\rho(\zeta)>0~{}~{}\forall\zeta\in[0,\bar{\zeta})~{}~{}{\rm and}~{}~{}\rho(\bar{\zeta})=0.$$ (2.10) The first step in obtaining a contradiction to (2.10) is to recall from (2.7) that $$\displaystyle|\zeta\rho\theta^{\prime}|\leq M~{}~{}\forall\zeta\in[0,\bar{\zeta}].$$ (2.11) Next, we write (2.2) as $$\displaystyle\left(\zeta^{2}\rho^{2}\theta^{\prime}\right)^{\prime}=-{\frac{a\zeta}{2}}\left(\left(\zeta\rho\right)^{2}\right)^{\prime}.$$ (2.12) Integrating (2.12) by parts gives $$\displaystyle\zeta^{2}\rho^{2}\left(\theta^{\prime}+{\frac{a\zeta}{2}}\right)={\frac{a}{2}}\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt.$$ (2.13) Now define the finite, positive value $$\displaystyle C={\frac{a}{2{\bar{\zeta}}^{2}}}\int_{0}^{\bar{\zeta}}\left(t\rho(t)\right)^{2}dt>0.$$ (2.14) We conclude from (2.13), (2.14) and the fact that $\rho(\zeta)\to 0^{+}$ as $\zeta\to{\bar{\zeta}}^{-},$ that $$\displaystyle\theta^{\prime}\sim{\frac{C}{\rho^{2}}}~{}~{}{\rm as}~{}~{}\zeta\to{\bar{\zeta}}^{-}.$$ (2.15) It follows from (2.15) that $|\zeta\rho\theta^{\prime}|\to\infty$ as $\zeta\to{\bar{\zeta}}^{-},$ which contradicts (2.11). This completes the proof of Lemma 2.2. Lemma 2.3 Let $\rho_{0}>0$ and $a>0.$ Then the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt\to\infty~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (2.16) Proof. 
We assume, for contradiction, that $\bar{\rho}_{0}>0$ and $\bar{a}>0$ exist such that the solution of (2.1)-(2.2)-(2.3) corresponding to $(\rho_{0},a)=(\bar{\rho}_{0},\bar{a})$ satisfies $$\displaystyle 0<D=\int_{0}^{\infty}\left(t\rho(t)\right)^{2}dt<\infty.$$ (2.17) The first step in obtaining a contradiction to (2.17) is to write equation (2.13) as $$\displaystyle\zeta\rho\left(\zeta\rho\theta^{\prime}\right)+{\frac{a}{2}}\zeta\left(\zeta\rho\right)^{2}={\frac{a}{2}}\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt.$$ (2.18) It follows from (2.7) that $M>0$ exists such that the first term in (2.18) satisfies $$\displaystyle 0\leq|\zeta\rho\left(\zeta\rho\theta^{\prime}\right)|\leq M^{2}~{}~{}\forall\zeta\geq 0.$$ (2.19) Next, we claim that $$\displaystyle\zeta\rho(\zeta)\to 0~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (2.20) If (2.20) is false, there exist $\delta\in(0,M)$ and a positive, increasing, unbounded sequence $\left(\zeta_{N}\right)$ such that $$\displaystyle 0<\delta<\zeta_{N}\rho(\zeta_{N})\leq M~{}~{}\forall N\geq 1.$$ (2.21) Along the sequence $\left(\zeta_{N}\right)$ equation (2.18) becomes $$\displaystyle\zeta_{N}\rho(\zeta_{N})\left(\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\right)+{\frac{a}{2}}\zeta_{N}\left(\zeta_{N}\rho(\zeta_{N})\right)^{2}={\frac{a}{2}}\int_{0}^{\zeta_{N}}\left(t\rho(t)\right)^{2}dt.$$ (2.22) Combining the bounds in (2.7) and (2.21) with the left side of (2.22) gives $$\displaystyle\zeta_{N}\rho(\zeta_{N})\left(\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\right)+{\frac{a}{2}}\zeta_{N}\left(\zeta_{N}\rho(\zeta_{N})\right)^{2}\geq{\frac{a}{2}}\zeta_{N}\delta^{2}-M^{2}~{}~{}\forall N\geq 1.$$ (2.23) It follows from (2.23) and the fact that $\zeta_{N}\to\infty$ as $N\to\infty$ that $$\displaystyle\lim_{N\to\infty}\left(\zeta_{N}\rho(\zeta_{N})\left(\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\right)+{\frac{a}{2}}\zeta_{N}\left(\zeta_{N}\rho(\zeta_{N})\right)^{2}\right)=\infty.$$ (2.24) However, the right side of
(2.22) remains bounded as $N\to\infty,$ since, by (2.17), $$\displaystyle 0<\lim_{N\to\infty}\left({\frac{a}{2}}\int_{0}^{\zeta_{N}}\left(t\rho(t)\right)^{2}dt\right)={\frac{aD}{2}}<\infty.$$ (2.25) We conclude from (2.22), (2.24) and (2.25) that the left side of (2.22) becomes unbounded as $N\to\infty,$ whereas the right side remains bounded as $N\to\infty,$ a contradiction. Thus, property (2.20) holds, as claimed. It now follows from (2.20), and the fact that $0\leq|\zeta\rho\theta^{\prime}|\leq M~{}~{}\forall\zeta\geq 0$ (i.e. see (2.7)), that the first term in (2.18) satisfies $$\displaystyle\zeta\rho\left(\zeta\rho\theta^{\prime}\right)\to 0~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (2.26) We conclude from (2.17), (2.18) and (2.26) that $$\displaystyle\zeta\left(\zeta\rho\right)^{2}\to D~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (2.27) It follows from (2.27) and the assumption $D>0,$ that $\zeta_{1}>0,$ $k_{1}>0$ exist such that $$\displaystyle\left(\zeta\rho\right)^{2}\geq{\frac{k_{1}}{\zeta}}~{}~{}\forall\zeta\geq\zeta_{1}.$$ (2.28) We conclude from (2.28) that $$\displaystyle\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt\geq\int_{\zeta_{1}}^{\zeta}{\frac{k_{1}}{t}}dt=k_{1}\left(\ln(\zeta)-\ln(\zeta_{1})\right)~{}~{}\forall\zeta\geq\zeta_{1}.$$ (2.29) It follows from (2.29) that $\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt\to\infty$ as $\zeta\to\infty,$ contradicting (2.17). Thus, we conclude that (2.16) holds as claimed. This completes the proof of Lemma 2.3. Lemma 2.4 Let $\rho_{0}>0$ and $a>0.$ Then the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle\lim_{\zeta\to\infty}\zeta\left(\zeta\rho\right)^{2}=\infty,~{}~{}\lim_{\zeta\to\infty}{\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}=0~{}~{}{\rm and}~{}~{}\lim_{\zeta\to\infty}\left(\zeta\rho\right)^{\prime}=0.$$ (2.30) Remark. The first two limits in (2.30) will be used to prove the third limit, and the third limit proves the fourth property in (2.6).
Properties (2.30) will play an essential role in completing the proof that $\lim_{\zeta\to\infty}\theta^{\prime}=0,$ the crucial fifth property in (2.6). Proof. First, it follows from (2.16) and (2.18) that $$\displaystyle\lim_{\zeta\to\infty}\left(\zeta\rho\left(\zeta\rho\theta^{\prime}\right)+{\frac{a}{2}}\zeta\left(\zeta\rho\right)^{2}\right)=\lim_{\zeta\to\infty}\left({\frac{a}{2}}\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt\right)=\infty.$$ (2.31) Recall from (2.19) that $0\leq|\zeta\rho\left(\zeta\rho\theta^{\prime}\right)|\leq M^{2}~{}~{}\forall\zeta\geq 0.$ This and (2.31) imply that $$\displaystyle\lim_{\zeta\to\infty}\zeta\left(\zeta\rho\right)^{2}=\infty,$$ (2.32) which proves the first property in (2.30). Next, divide (2.18) by ${\frac{a}{2}}\zeta\left(\zeta\rho\right)^{2}$ and get $$\displaystyle{\frac{2}{a}}{\frac{\zeta\rho\left(\zeta\rho\theta^{\prime}\right)}{\left(\zeta\left(\zeta\rho\right)^{2}\right)}}+1={\frac{\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt}{\left(\zeta\left(\zeta\rho\right)^{2}\right)}},~{}~{}\zeta>0.$$ (2.33) It follows from (2.32) and the bound $0\leq|\zeta\rho\left(\zeta\rho\theta^{\prime}\right)|\leq M^{2}~{}~{}\forall\zeta\geq 0$ that $$\displaystyle\lim_{\zeta\to\infty}{\frac{2}{a}}{\frac{\zeta\rho\left(\zeta\rho\theta^{\prime}\right)}{\left(\zeta\left(\zeta\rho\right)^{2}\right)}}=0.$$ (2.34) Taking the limit as $\zeta\to\infty$ of both sides of (2.33), and using (2.34), gives $$\displaystyle 1=\lim_{\zeta\to\infty}{\frac{\int_{0}^{\zeta}\left(t\rho(t)\right)^{2}dt}{\left(\zeta\left(\zeta\rho\right)^{2}\right)}}.$$ (2.35) It follows from (2.16), (2.32), (2.35) and L’Hopital’s rule that $$\displaystyle 1=\lim_{\zeta\to\infty}{\frac{\left(\zeta\rho(\zeta)\right)^{2}}{\left(\zeta\rho(\zeta)\right)^{2}+2\zeta\left(\zeta\rho(\zeta)\right)\left(\zeta\rho(\zeta)\right)^{\prime}}}=\lim_{\zeta\to\infty}{\frac{1}{1+2\left(\zeta\rho(\zeta)\right)^{\prime}/\rho}},$$ (2.36) hence
$$\displaystyle\lim_{\zeta\to\infty}{\frac{\left(\zeta\rho(\zeta)\right)^{\prime}}{\rho}}=0,$$ (2.37) which proves the second property in (2.30). Finally, we conclude from (2.37) and the bound $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0$ that $\zeta^{*}>0$ exists such that $$\displaystyle|\left(\zeta\rho(\zeta)\right)^{\prime}|\leq\rho\leq{\frac{M}{\zeta}}~{}~{}\forall\zeta\geq\zeta^{*}.$$ (2.38) It follows from (2.38) that $\left(\zeta\rho(\zeta)\right)^{\prime}\to 0$ as $\zeta\to\infty,$ which proves the third property in (2.30). This completes the proof of Lemma 2.4. The remainder of this section is devoted to proving the fifth property in (2.6), i.e. for every $\rho_{0}>0$ and $a>0,$ the solution of (2.1)-(2.2)-(2.3) satisfies $\lim_{\zeta\to\infty}\theta^{\prime}=0.$ In the next three Lemmas we prove this property by using the results proved above, and we also make extensive use of the functional $$\displaystyle E=\left(\left(\zeta\rho\right)^{\prime}\right)^{2}+\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)+{\frac{1}{2}}\zeta^{2}\rho^{4},$$ (2.39) which satisfies, because $\rho>0~{}~{}\forall\zeta>0,$ $$\displaystyle E(0)=\rho_{0}^{2}~{}~{}{\rm and}~{}~{}E^{\prime}=-\zeta\rho^{4}<0~{}~{}\forall\zeta>0.$$ (2.40) An integration gives $$\displaystyle\left(\left(\zeta\rho\right)^{\prime}\right)^{2}+\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)+{\frac{1}{2}}\zeta^{2}\rho^{4}=\rho_{0}^{2}-\int_{0}^{\zeta}t\rho(t)^{4}dt~{}~{}\forall\zeta\geq 0.$$ (2.41) Remark. Equation (2.41) is the same as equation (1.12) derived by Budd et al [2]. The key properties of the functional $E$ are proved in the next two technical Lemmas. Lemma 2.5 Let $\rho_{0}>0$ and $a>0.$ Then the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle-\infty<E(\infty)\leq 0.$$ (2.42) Proof. 
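The monotonicity property (2.40) is a mechanical computation: differentiating (2.39) and eliminating $\rho^{\prime\prime}$ and $\theta^{\prime\prime}$ via (2.1)-(2.2) leaves exactly $-\zeta\rho^{4}$. A SymPy check of this algebra (SymPy assumed available):

```python
import sympy as sp

zeta, a = sp.symbols("zeta a", positive=True)
rho = sp.Function("rho")(zeta)
theta = sp.Function("theta")(zeta)
dr, dth = sp.diff(rho, zeta), sp.diff(theta, zeta)

# Functional E of (2.39):
E = ((sp.diff(zeta * rho, zeta))**2 + (zeta * rho)**2 * (dth**2 - 1)
     + sp.Rational(1, 2) * zeta**2 * rho**4)

# rho'' and theta'' expressed through the polar system (2.1)-(2.2):
rho2 = rho * (dth**2 + a * zeta * dth + 1 - rho**2) - 2 / zeta * dr
theta2 = -2 / zeta * dth - 2 * dr / rho * dth - a * (rho + zeta * dr) / rho

dE = E.diff(zeta).subs({rho.diff(zeta, 2): rho2, theta.diff(zeta, 2): theta2})
res = sp.simplify(dE + zeta * rho**4)
print(res)  # -> 0, i.e. E' = -zeta*rho**4 as claimed in (2.40)
```

All the $a$-dependent terms generated by (2.1) and (2.2) cancel pairwise, which is what makes $E$ a Lyapunov-type functional independent of the phase rotation.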
We conclude from (2.40), Lemma 2.2 and the bound $0<\zeta\rho\leq M~{}\forall\zeta\geq 0$ that $$\displaystyle E(\infty)=\rho_{0}^{2}-\int_{0}^{\infty}t\rho^{4}(t)dt>-\infty.$$ (2.43) It remains to prove that $$\displaystyle E(\infty)\leq 0.$$ (2.44) Suppose, for contradiction, that $\bar{\rho}_{0}>0$ and $\bar{a}>0$ exist such that, when $(\rho_{0},a)=(\bar{\rho}_{0},\bar{a}),$ the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle E(\infty)=\bar{\lambda}>0.$$ (2.45) The first step in obtaining a contradiction to (2.45) is to write (2.39) as $$\displaystyle\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)=E(\zeta)-\left(\left(\zeta\rho\right)^{\prime}\right)^{2}-{\frac{1}{2}}\zeta^{2}\rho^{4},~{}~{}\zeta\geq 0.$$ (2.46) From the third property in (2.30) and the fact that $0\leq\zeta\rho\leq M$ we conclude that $$\displaystyle\lim_{\zeta\to\infty}\left(\left(\zeta\rho\right)^{\prime}\right)^{2}=0~{}~{}{\rm and}~{}~{}\lim_{\zeta\to\infty}{\frac{1}{2}}\zeta^{2}\rho^{4}=0.$$ (2.47) It follows from (2.45), (2.46) and (2.47) that $\lim_{\zeta\to\infty}\left(\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)\right)={\bar{\lambda}}>0.$ Thus, there exists $\bar{\zeta}>0$ such that $$\displaystyle\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)\geq{\frac{{\bar{\lambda}}}{2}}~{}~{}\forall\zeta\geq{\bar{\zeta}}.$$ (2.48) We conclude from (2.48) that $\left(\zeta\rho\right)^{2}(\theta^{\prime})^{2}\geq{\frac{{\bar{\lambda}}}{2}}~{}~{}\forall\zeta\geq{\bar{\zeta}}.$ Since $\zeta\rho\theta^{\prime}$ is continuous, either $$\displaystyle\zeta\rho\theta^{\prime}\geq\left({\frac{{\bar{\lambda}}}{2}}\right)^{1/2}~{}~{}\forall\zeta\geq{\bar{\zeta}}~{}~{}{\rm or}~{}~{}\zeta\rho\theta^{\prime}\leq-\left({\frac{{\bar{\lambda}}}{2}}\right)^{1/2}~{}~{}\forall\zeta\geq{\bar{\zeta}}.$$ (2.49) Also, since $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0,$ then $\bar{\zeta}$ can be chosen large enough so that $$\displaystyle 
1-\rho^{2}>0~{}~{}\forall\zeta\geq{\bar{\zeta}}.$$ (2.50) Now write (2.2) as $$\displaystyle\left(\zeta\rho\right)^{\prime\prime}=\zeta\rho\left((\theta^{\prime})^{2}+a\zeta\theta^{\prime}+1-\rho^{2}\right).$$ (2.51) Suppose that $\zeta\rho\theta^{\prime}\geq\left({\frac{{\bar{\lambda}}}{2}}\right)^{1/2}~{}~{}\forall\zeta\geq{\bar{\zeta}}.$ Then it follows from (2.50) and (2.51) that $$\displaystyle\left(\zeta\rho\right)^{\prime\prime}\geq a\zeta\left({\frac{{\bar{\lambda}}}{2}}\right)^{1/2}~{}~{}\forall\zeta\geq{\bar{\zeta}}.$$ (2.52) An integration from $\bar{\zeta}$ to $\zeta$ gives $$\displaystyle\left(\zeta\rho\right)^{\prime}\geq\left(\zeta\rho\right)^{\prime}|_{\zeta=\bar{\zeta}}+a\left({\frac{{\bar{\lambda}}}{2}}\right)^{1/2}\left({\frac{\zeta^{2}}{2}}-{\frac{{\bar{\zeta}}^{2}}{2}}\right)~{}~{}\forall\zeta\geq{\bar{\zeta}}.$$ (2.53) It follows from (2.53) that $\left(\zeta\rho\right)^{\prime}\to\infty$ as $\zeta\to\infty,$ contradicting the third property in (2.30). Thus, the first possibility in (2.49) cannot occur. It remains to assume, for contradiction, that the second possibility in (2.49) occurs, i.e. 
$$\displaystyle\zeta\rho\theta^{\prime}\leq-\left({\frac{{\bar{\lambda}}}{2}}\right)^{1/2}~{}~{}\forall\zeta\geq{\bar{\zeta}}.$$ (2.54) First, we conclude from (2.51) and the bound $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0,$ that $$\displaystyle\left(\zeta\rho\right)^{\prime\prime}\leq\zeta\rho\theta^{\prime}\left(\theta^{\prime}+a\zeta\right)+\zeta\rho\leq\zeta\rho\theta^{\prime}\left(\theta^{\prime}+a\zeta\right)+M~{}~{}\forall\zeta\geq 0.$$ (2.55) Also, from (2.13) it follows that $$\displaystyle\theta^{\prime}+{\frac{a\zeta}{2}}\geq 0~{}~{}\forall\zeta\geq 0.$$ (2.56) Combining this property with (2.54) and (2.55), we obtain $$\displaystyle\left(\zeta\rho\right)^{\prime\prime}\leq\zeta\rho\theta^{\prime}\left(\theta^{\prime}+a\zeta\right)+M\leq-\left({\frac{{\bar{\lambda}}}{2}}\right)^{1/2}{\frac{a\zeta}{2}}+M~{}~{}\forall\zeta\geq{\bar{\zeta}}.$$ (2.57) Integrating from $\bar{\zeta}$ to $\zeta,$ we conclude that $\left(\zeta\rho\right)^{\prime}\to-\infty$ as $\zeta\to\infty,$ contradicting the third property in (2.30). Since supposition (2.45) has led to a contradiction in both cases, we conclude that (2.45) cannot occur, hence $E(\infty)\leq 0.$ This completes the proof of Lemma 2.5. In order to complete the proof of Theorem 1.1 we first need to eliminate the possibility that a solution of (2.1)-(2.2)-(2.3) satisfies $E(\infty)=0.$ We do this in Lemma 2.6 Let $\rho_{0}>0$ and $a>0.$ Then the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle E(\infty)<0.$$ (2.58) Proof. Suppose, for contradiction, that $\bar{\rho}_{0}>0,$ $\bar{a}>0$ exist such that property (2.58) does not hold when $(\rho_{0},a)=(\bar{\rho}_{0},\bar{a}).$ This supposition and Lemma 2.5 imply that $$\displaystyle E(\infty)=0.$$ (2.59) The first step in obtaining a contradiction to (2.59) is to show that $$\displaystyle\lim_{\zeta\to\infty}\left(\zeta\rho\theta^{\prime}\right)^{2}=0.$$ (2.60) Before proving property (2.60) we show how it leads to a contradiction of (2.59).
First, recall that the third property in (2.30) is $$\displaystyle\lim_{\zeta\to\infty}\left(\left(\zeta\rho\right)^{\prime}\right)^{2}=0.$$ (2.61) Substituting (2.60) and (2.61) into the left side of equation (2.5), and using the bound $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0,$ we conclude that the zero-energy condition (2.5) is satisfied, i.e. $$\displaystyle\lim_{\zeta\to\infty}\left(\left(\left(\zeta\rho\right)^{\prime}\right)^{2}+\left(\rho^{2}\left(\zeta^{2}(\theta^{\prime})^{2}+{\frac{2\zeta\theta^{\prime}}{a}}+{\frac{1}{a^{2}}}\right)\right)^{2}\right)^{\frac{1}{2}}=0.$$ (2.62) It follows from (1.13) and (2.62) that $H(Q)=0$ where the energy $H(Q)$ is defined in (1.7). In turn, as we pointed out in Section 1, LeMesurier et al [12, 13] proved that when $H(Q)=0$ there exists $k\neq 0$ such that $$\displaystyle Q(\zeta)\sim k\zeta^{-1}e^{-i\ln(\zeta)/a}~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (2.63) From (2.63), and the fact that $Q=\rho e^{i\theta},$ it follows that $$\displaystyle|\zeta Q|=\zeta\rho\to|k|~{}~{}{\rm as}~{}~{}\zeta\to\infty.$$ (2.64) Substituting (2.60), (2.61) and (2.64) into the left side of (2.46), we conclude that $$\displaystyle\lim_{\zeta\to\infty}\left(\left(\left(\zeta\rho\right)^{\prime}\right)^{2}+\left(\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)\right)\right)=-|k|^{2}.$$ (2.65) However, because of (2.59) and property (2.47), the right side of (2.46) tends to zero as $\zeta\to\infty,$ a contradiction. Thus, it remains to prove (2.60).
We assume, for contradiction, that property (2.60) does not hold, hence $$\displaystyle\limsup_{\zeta\to\infty}\left(\zeta\rho\theta^{\prime}\right)^{2}>0.$$ (2.66) From (2.66) and the fact that $|\zeta\rho\theta^{\prime}|\leq M~{}~{}\forall\zeta\geq 0,$ we conclude that a value $\delta>0$ exists, and also a positive, increasing, unbounded sequence $\left(\zeta_{N}\right)$ exists such that $$\displaystyle 4\delta^{2}\leq\left(\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\right)^{2}\leq M^{2}~{}~{}\forall N\geq 1,$$ (2.67) and therefore, by considering subsequences if necessary, either $$\displaystyle-M\leq\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\leq-2\delta~{}~{}\forall N\geq 1~{}~{}{\rm or}~{}~{}2\delta\leq\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\leq M~{}~{}\forall N\geq 1.$$ (2.68) Thus, the proof of Lemma 2.6 is complete if we eliminate each of the cases given in (2.68). In order to eliminate these two cases, we first need to derive upper and lower bounds on $\zeta_{N}\rho(\zeta_{N})$ when $N\gg 1.$ For this we evaluate (2.46) at $\zeta=\zeta_{N}$ and get $$\displaystyle\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right){\Big{|}}_{\zeta=\zeta_{N}}=E(\zeta_{N})-\left(\left(\zeta\rho\right)^{\prime}\right)^{2}{\Big{|}}_{\zeta=\zeta_{N}}-{\frac{1}{2}}\zeta_{N}^{2}\rho(\zeta_{N})^{4}~{}~{}\forall N\geq 1.$$ (2.69) From (2.47) and (2.59) it follows that the right side of (2.69) tends to zero as $N\to\infty.$ Thus, the left side of (2.69) satisfies $$\displaystyle\lim_{N\to\infty}\left(\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)\right){\Big{|}}_{\zeta=\zeta_{N}}=0.$$ (2.70) We conclude from (2.67), (2.70), and the bound $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0,$ that $$\displaystyle\delta\leq\zeta_{N}\rho(\zeta_{N})\leq M~{}~{}{\rm when}~{}~{}N\gg 1.$$ (2.71) Also, it follows from the fact that $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0$ and property (2.30) in Lemma 2.4 that ${\hat{\zeta}}>0$ exists such
that $$\displaystyle 0<\rho^{2}(\zeta)<1~{}~{}\forall\zeta>\hat{\zeta},$$ (2.72) $$\displaystyle-{\frac{\delta}{2}}\leq\left(\zeta\rho\right)^{\prime}\leq{\frac{\delta}{2}}~{}~{}{\rm and}~{}~{}-{\frac{\delta^{2}}{aM^{2}}}\leq{\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}\leq{\frac{\delta^{2}}{aM^{2}}}~{}~{}\forall\zeta>{\hat{\zeta}}.$$ (2.73) We assume, without loss of generality, that $\left(\zeta_{N}\right)$ is chosen so that $\zeta_{N}>{\hat{\zeta}}~{}~{}\forall N\geq 1,$ and also that (2.71) holds $\forall N\geq 1.$ We now consider the two cases in (2.68). Suppose, first of all, that $$\displaystyle 2\delta\leq\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\leq M~{}~{}\forall N\geq 1.$$ (2.74) Next, we derive a lower bound for $\zeta\rho\theta^{\prime}.$ For this write (2.2) as $$\displaystyle\left(\zeta^{2}\rho^{2}\theta^{\prime}\right)^{\prime}=-a\left(\zeta\rho\right)^{2}{\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}.$$ (2.75) From (2.75), the bound ${\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}\leq{\frac{\delta^{2}}{aM^{2}}}$ in (2.73), and the bound $0\leq\zeta\rho\leq M$ we get $$\displaystyle\left(\zeta^{2}\rho^{2}\theta^{\prime}\right)^{\prime}\geq-\delta^{2}~{}~{}\forall\zeta>\hat{\zeta}.$$ (2.76) It follows from (2.71), (2.74) and an integration of (2.76) that, for each $N\geq 1,$ $$\displaystyle\zeta^{2}\rho^{2}\theta^{\prime}\geq\zeta_{N}^{2}\rho^{2}(\zeta_{N})\theta^{\prime}(\zeta_{N})-\delta^{2}(\zeta-\zeta_{N})\geq 2\delta^{2}-\delta^{2}=\delta^{2},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1.$$ (2.77) Next, write (2.2) as $$\displaystyle(\zeta\rho)^{\prime\prime}=\zeta\rho\left((\theta^{\prime})^{2}+a\zeta\theta^{\prime}+1-\rho^{2}\right).$$ (2.78) It follows from (2.72), (2.77), the fact that $0<\zeta\rho\leq M~{}~{}\forall\zeta>0$ and (2.78) that the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle(\zeta\rho)^{\prime\prime}\geq a\zeta{\frac{\left(\zeta\rho
\right)^{2}\theta^{\prime}}{\zeta\rho}}\geq{\frac{a\delta^{2}\zeta}{M}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}N\geq 1$$ (2.79) An integration of (2.79) from $\zeta_{N}$ to $\zeta$ gives $$\displaystyle(\zeta\rho)^{\prime}\geq(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}}+{\frac{a\delta^{2}}{M}}\left({\frac{\zeta^{2}}{2}}-{\frac{\zeta_{N}^{2}}{2}}\right),~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}N\geq 1.$$ (2.80) Substituting $\zeta=\zeta_{N}+1$ into (2.80), and using the property $\lim_{\zeta\to\infty}(\zeta\rho)^{\prime}=0,$ we obtain $$\displaystyle(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}+1}\geq(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}}+{\frac{a\delta^{2}}{2M}}\geq{\frac{a\delta^{2}}{4M}}>0~{}~{}{\rm when}~{}~{}N\gg 1,$$ (2.81) contradicting the fact that $\lim_{\zeta\to\infty}(\zeta\rho)^{\prime}=0.$ Thus, property (2.74) cannot hold. It remains to assume, for contradiction, that the first possibility in (2.68) occurs, i.e. $$\displaystyle-M\leq\zeta_{N}\rho(\zeta_{N})\theta^{\prime}(\zeta_{N})\leq-2\delta~{}~{}\forall N\geq 1.$$ (2.82) The first step in obtaining a contradiction to (2.82) is to assume that $\hat{\zeta}$ is chosen to satisfy (2.72) and (2.73), and also $$\displaystyle\hat{\zeta}>{\frac{4M^{2}}{a\delta^{2}}}.$$ (2.83) Next, we combine (2.75) with the lower bound ${\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}\geq-{\frac{\delta^{2}}{aM^{2}}}$ in (2.73), and the fact that $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0,$ and obtain $$\displaystyle\left(\zeta^{2}\rho^{2}\theta^{\prime}\right)^{\prime}\leq\delta^{2}~{}~{}\forall\zeta>\hat{\zeta}.$$ (2.84) It follows from (2.71), (2.82) and an integration of (2.84) that, for each $N\geq 1,$ $$\displaystyle\zeta^{2}\rho^{2}\theta^{\prime}\leq\zeta_{N}^{2}\rho^{2}(\zeta_{N})\theta^{\prime}(\zeta_{N})+\delta^{2}(\zeta-\zeta_{N})\leq-2\delta^{2}+\delta^{2}=-\delta^{2},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1.$$ (2.85) Thus, since $0<\zeta\rho\leq M~{}~{}\forall\zeta>0,$ it follows from 
(2.85) that $$\displaystyle\zeta\rho\theta^{\prime}\leq-{\frac{\delta^{2}}{\zeta\rho}}\leq-{\frac{\delta^{2}}{M}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}N\geq 1.$$ (2.86) To make use of these properties, we write equation (2.1) as $$\displaystyle(\zeta\rho)^{\prime\prime}=\zeta\rho\theta^{\prime}\left(\theta^{\prime}+a\zeta\right)+\zeta\rho-\zeta\rho^{3}.$$ (2.87) Combining (2.56), (2.83), (2.86) and the fact that $0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0,$ with equation (2.87), we obtain $$\displaystyle(\zeta\rho)^{\prime\prime}\leq-{\frac{\delta^{2}}{M}}\left({\frac{a\zeta}{2}}\right)+M\leq-{\frac{\delta^{2}a\zeta}{4M}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,N\geq 1.$$ (2.88) Integrating (2.88) from $\zeta_{N}$ to $\zeta$ gives $$\displaystyle(\zeta\rho)^{\prime}\leq(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}}-{\frac{\delta^{2}a}{4M}}\left({\frac{\zeta^{2}}{2}}-{\frac{\zeta_{N}^{2}}{2}}\right),~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,N\geq 1.$$ (2.89) Setting $\zeta=\zeta_{N}+1$ in (2.89), and using the fact that $(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}}\to 0$ as $N\to\infty,$ we obtain $$\displaystyle(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}+1}\leq(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}}-{\frac{\delta^{2}a}{4M}}<-{\frac{\delta^{2}a}{8M}}<0~{}~{}{\rm when}~{}~{}N\gg 1,$$ (2.90) contradicting the fact that $(\zeta\rho)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}+1}\to 0$ as $N\to\infty.$ We conclude that (2.82) does not hold. Both cases in (2.68) have now been eliminated, which proves (2.60) and hence yields the contradiction to (2.59) obtained above; therefore $E(\infty)<0,$ as claimed. This completes the proof of Lemma 2.6. Our final result is Lemma 2.7 Let $\rho_{0}>0$ and $a>0.$ Then the solution of (2.1)-(2.2)-(2.3) satisfies $$\displaystyle\lim_{\zeta\to\infty}\theta^{\prime}(\zeta)=0~{}~{}{\rm and}~{}~{}~{}~{}\lim_{\zeta\to\infty}\zeta\rho(\zeta)={\sqrt{-E(\infty)}}.$$ (2.91) Remarks. (i) Proving the first property in (2.91) completes the proof of (2.6), which in turn completes the proof of Theorem 1.1.
(ii) Much of the proof uses the same basic approach as in the proof of Lemma 2.6. Due to the importance of Lemma 2.7 we give complete details. Proof. First, recall from (2.7) in Lemma 2.1 that $M>0$ exists such that $$\displaystyle 0\leq\zeta\rho\leq M~{}~{}\forall\zeta\geq 0.$$ (2.92) We also recall from (2.39)-(2.40) that $E$ satisfies $E(0)=\rho_{0}^{2}~{}~{}{\rm and}~{}~{}E^{\prime}<0~{}~{}\forall\zeta>0.$ These properties and Lemma 2.6 imply that $\zeta_{0}>0$ exists such that $$\displaystyle E^{\prime}(\zeta)<0~{}\forall\zeta\in(0,\zeta_{0})~{}~{}{\rm and}~{}~{}E(\zeta_{0})=0,$$ (2.93) $$\displaystyle E<0~{}~{}{\rm and}~{}~{}E^{\prime}<0~{}~{}\forall\zeta>\zeta_{0}~{}~{}{\rm and}~{}~{}-\infty<E(\infty)<0.$$ (2.94) We conclude from (2.94) that $\lambda_{0}>0$ exists such that $$\displaystyle E(\zeta)=\left(\left(\zeta\rho\right)^{\prime}\right)^{2}+\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1+{\frac{\rho^{2}}{2}}\right)<-\lambda_{0}<0~{}~{}\forall\zeta>\zeta_{0}+1.$$ (2.95) It follows from (2.95) that $$\displaystyle\left(\theta^{\prime}(\zeta)\right)^{2}<1~{}~{}\forall\zeta>\zeta_{0}+1.$$ (2.96) Also, we conclude from (2.92), (2.95) and (2.96) that $\liminf_{\zeta\to\infty}\zeta\rho(\zeta)>0.$ From this property and (2.92) it follows that $m\in(0,M]$ exists such that $$\displaystyle 0<m\leq\zeta\rho\leq M~{}~{}\forall\zeta>\zeta_{0}+1.$$ (2.97) Our next goal is to make use of these properties to prove that $$\displaystyle\lim_{\zeta\to\infty}\left(\theta^{\prime}(\zeta)\right)^{2}=0.$$ (2.98) We assume, for contradiction, that (2.98) does not hold, hence $\limsup_{\zeta\to\infty}\left(\theta^{\prime}(\zeta)\right)^{2}>0.$ This property and the fact that $0\leq\left(\theta^{\prime}(\zeta)\right)^{2}<1$ imply that $\delta\in(0,1)$ exists, and also a positive, increasing, unbounded sequence $\left(\zeta_{N}\right)$ exists such that $$\displaystyle\delta^{2}\leq\left(\theta^{\prime}(\zeta_{N})\right)^{2}<1~{}~{}\forall
N\geq 1.$$ (2.99) Therefore, by considering subsequences if necessary, either $$\displaystyle-1<\theta^{\prime}(\zeta_{N})\leq-\delta~{}~{}\forall N\geq 1~{}~{}{\rm or}~{}~{}\delta\leq\theta^{\prime}(\zeta_{N})<1~{}\forall N\geq 1.$$ (2.100) The proof of (2.98) is complete if we obtain a contradiction to each case in (2.100). For this we again use three basic properties of solutions. First, because of (2.92) and the property $\lim_{\zeta\to\infty}\zeta_{N}=\infty,$ we can assume that $\zeta_{1}>\zeta_{0}+1$ is large enough so that $$\displaystyle 0<\rho(\zeta)<1~{}~{}\forall\zeta\geq\zeta_{1}.$$ (2.101) It follows from (2.30) in Lemma 2.4 and the property $\lim_{\zeta\to\infty}\zeta_{N}=\infty,$ that we can also assume that $\zeta_{1}>\zeta_{0}+1$ is large enough so that $$\displaystyle-{\frac{\delta}{2a}}\left({\frac{m}{M}}\right)^{4}<{\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}<{\frac{\delta}{2a}}\left({\frac{m}{M}}\right)^{4}~{}~{}\forall\zeta\geq\zeta_{1},$$ (2.102) $$\displaystyle-{\frac{a\delta m^{3}}{4M^{2}}}<\left(\zeta\rho\right)^{\prime}<{\frac{a\delta m^{3}}{4M^{2}}}~{}~{}\forall\zeta\geq\zeta_{1},$$ (2.103) Suppose, now, that the second case in (2.100) occurs, i.e. 
$$\displaystyle 0<\delta\leq\theta^{\prime}(\zeta_{N})<1~{}\forall N\geq 1.$$ (2.104) Again, we make use of the equation $$\displaystyle\left(\zeta^{2}\rho^{2}\theta^{\prime}\right)^{\prime}=-a\left(\zeta\rho\right)^{2}{\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}.$$ (2.105) Substituting the upper bounds $(\zeta\rho)^{2}\leq M^{2}$ and ${\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}<{\frac{\delta}{2a}}\left({\frac{m}{M}}\right)^{4}$ into (2.105) gives $$\displaystyle\left(\zeta^{2}\rho^{2}\theta^{\prime}\right)^{\prime}\geq-{\frac{\delta m^{4}}{2M^{2}}},~{}~{}\zeta\geq\zeta_{1}.$$ (2.106) Integrating (2.106) from $\zeta_{N}$ to $\zeta,$ and using $\theta^{\prime}(\zeta_{N})\geq\delta,$ we get $$\displaystyle\zeta^{2}\rho^{2}\theta^{\prime}\geq\zeta_{N}^{2}\rho^{2}(\zeta_{N})\delta-{\frac{\delta m^{4}}{2M^{2}}}(\zeta-\zeta_{N}),~{}~{}\zeta\geq\zeta_{N},~{}~{}N\geq 1.$$ (2.107) Dividing (2.107) by $\zeta^{2}\rho^{2},$ and using the fact that $m^{2}\leq\zeta^{2}\rho^{2}\leq M^{2},$ we obtain $$\displaystyle\theta^{\prime}\geq{\frac{\zeta_{N}^{2}\rho^{2}(\zeta_{N})}{\zeta^{2}\rho^{2}}}\delta-{\frac{\delta m^{4}}{2\zeta^{2}\rho^{2}M^{2}}}\geq{\frac{\delta m^{2}}{2M^{2}}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\geq 1.$$ (2.108) Next, we again make use of the equation $$\displaystyle\left(\zeta\rho\right)^{\prime\prime}=\zeta\rho\left((\theta^{\prime})^{2}+a\zeta\theta^{\prime}+1-\rho^{2}\right).$$ (2.109) Recall from (2.97) that $m\leq\zeta\rho\leq M$ when $\zeta\geq\zeta_{0}+1$, and that $\zeta_{N}\gg 1$ when $N\gg 1.$ Combining these properties with (2.101), (2.108) and equation (2.109), we obtain $$\displaystyle\left(\zeta\rho\right)^{\prime\prime}\geq\zeta\rho\left(a\zeta\theta^{\prime}\right)\geq{\frac{a\delta m^{3}}{2M^{2}}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\gg 1.$$ (2.110) Integrating (2.110) from $\zeta_{N}$ to $\zeta,$ and making use of the lower bound in (2.103), we get $$\displaystyle\left(\zeta\rho\right)^{\prime}\geq-{\frac{a\delta
m^{3}}{4M^{2}}}+{\frac{a\delta m^{3}}{2M^{2}}}\left(\zeta-\zeta_{N}\right),~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\gg 1.$$ (2.111) Thus, $$\displaystyle\left(\zeta\rho\right)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}+1}\geq{\frac{a\delta m^{3}}{4M^{2}}}>0,~{}~{}N\gg 1,$$ (2.112) contradicting the fact that $\lim_{\zeta\to\infty}\left(\zeta\rho\right)^{\prime}=0.$ Thus, (2.104) cannot hold, eliminating the second case in (2.100). Next, suppose that the first case in (2.100) occurs, i.e. $$\displaystyle-1<\theta^{\prime}(\zeta_{N})\leq-\delta~{}\forall N\geq 1.$$ (2.113) Substituting $(\zeta\rho)^{2}\leq M^{2}$ and ${\frac{\left(\zeta\rho\right)^{\prime}}{\rho}}>-{\frac{\delta}{2a}}\left({\frac{m}{M}}\right)^{4}$ into (2.105) gives $$\displaystyle\left(\zeta^{2}\rho^{2}\theta^{\prime}\right)^{\prime}\leq{\frac{\delta m^{4}}{2M^{2}}},~{}~{}\zeta\geq\zeta_{1}.$$ (2.114) Integrating (2.114) from $\zeta_{N}$ to $\zeta,$ and using $\theta^{\prime}(\zeta_{N})\leq-\delta,$ we get $$\displaystyle\zeta^{2}\rho^{2}\theta^{\prime}\leq-\zeta_{N}^{2}\rho^{2}(\zeta_{N})\delta+{\frac{\delta m^{4}}{2M^{2}}}(\zeta-\zeta_{N}),~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\gg 1.$$ (2.115) Divide (2.115) by $\zeta^{2}\rho^{2},$ make use of the bound $m\leq\zeta\rho\leq M$ and obtain $$\displaystyle\theta^{\prime}\leq-{\frac{m^{2}\delta}{M^{2}}}+{\frac{\delta m^{2}}{2M^{2}}}(\zeta-\zeta_{N})\leq-{\frac{m^{2}\delta}{2M^{2}}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\geq 1.$$ (2.116) Next, recall from (2.96) that $(\theta^{\prime})^{2}<1~{}~{}\forall\zeta>\zeta_{0}.$ From this property, (2.116) and the fact that $\zeta_{N}\to\infty$ as $N\to\infty,$ we conclude that $$\displaystyle(\theta^{\prime})^{2}+a\zeta\theta^{\prime}+1\leq-{\frac{3am^{2}\delta}{8M^{2}}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\gg 1.$$ (2.117) Combining (2.109) with (2.117) and the lower bound $\zeta\rho\geq m$ when $\zeta\geq\zeta_{0}+1$ gives
$$\displaystyle\left(\zeta\rho\right)^{\prime\prime}\leq-{\frac{3a\delta m^{3}}{8M^{2}}},~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\gg 1.$$ (2.118) It follows from an integration of (2.118) and the upper bound in (2.103) that $$\displaystyle\left(\zeta\rho\right)^{\prime}\leq{\frac{a\delta m^{3}}{4M^{2}}}-{\frac{3a\delta m^{3}}{8M^{2}}}(\zeta-\zeta_{N}),~{}~{}\zeta_{N}\leq\zeta\leq\zeta_{N}+1,~{}~{}N\gg 1.$$ (2.119) Finally, we conclude from (2.119) that $$\displaystyle\left(\zeta\rho\right)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}+1}\leq-{\frac{a\delta m^{3}}{8M^{2}}}<0,~{}~{}N\gg 1,$$ (2.120) contradicting the property $\lim_{N\to\infty}\left(\zeta\rho\right)^{\prime}{\Big{|}}_{\zeta=\zeta_{N}+1}=0.$ This completes the proof that $\lim_{\zeta\to\infty}\left(\theta^{\prime}(\zeta)\right)^{2}=0.$ Thus, $\lim_{\zeta\to\infty}\theta^{\prime}(\zeta)=0,$ and the first property in (2.91) is proved. It remains to prove the second property in (2.91), namely $$\displaystyle\lim_{\zeta\to\infty}\zeta\rho(\zeta)={\sqrt{-E(\infty)}}.$$ (2.121) First, we write (2.39) as $$\displaystyle\left(\zeta\rho\right)^{2}\left((\theta^{\prime})^{2}-1\right)=E-{\frac{1}{2}}\zeta^{2}\rho^{4}-\left(\left(\zeta\rho\right)^{\prime}\right)^{2}.$$ (2.122) Since $\left(\theta^{\prime}\right)^{2}<1~{}~{}{\rm and}~{}~{}E<0~{}~{}{\rm when}~{}~{}\zeta>\zeta_{0},$ we can divide (2.122) by $(\theta^{\prime})^{2}-1$ and get $$\displaystyle\left(\zeta\rho\right)^{2}={\frac{E-{\frac{1}{2}}\zeta^{2}\rho^{4}-\left(\left(\zeta\rho\right)^{\prime}\right)^{2}}{(\theta^{\prime})^{2}-1}}.$$ (2.123) We conclude from the third property in (2.30), (2.47), (2.98) and (2.123) that $$\displaystyle\lim_{\zeta\to\infty}\left(\zeta\rho(\zeta)\right)^{2}=-E(\infty).$$ (2.124) Property (2.121) follows from (2.124) and the fact that $\zeta\rho>0~{}~{}\forall\zeta>0.$ Competing interests and funding. There are no competing interests. References [1] G. V. Akrivis, V. A. Dougalis, O. A. Karakashian, W. R.
McKinney, Galerkin-finite element methods for the nonlinear Schrödinger equation, Hellenic research in Mathematics and Informatics ’92, Hellenic Math. Soc. (Athens, 1992), 421-442 [2] C. J. Budd, S. Chen and R. D. Russell, New self-similar solutions of the nonlinear Schrödinger equation with moving mesh computations, J. Comp. Phys. 152 (1999), 756-789 [3] R. Y. Chiao, E. Garmire and C. H. Townes, Self-trapping of optical beams, Phys. Rev. Lett. 13 (1964), 479-482 [4] P. K. Lu and L. XinPei, Low temperature plasma technology: methods and applications, CRC Press (2013) [5] G. Fibich, Adiabatic law for self-focusing of optical beams, Optics Letters 21 (1996), 1735-1737 [6] G. Fibich and G. C. Papanicolaou, Self-focusing in the perturbed and unperturbed nonlinear Schrödinger equation in critical dimension, SIAM J. Appl. Math. 60 (1), (1999), 183-240 [7] A. Hasegawa, Optical solitons in fibers, Springer-Verlag, Berlin (1989) [8] S. P. Hastings and J. B. McLeod, Classical Methods in Ordinary Differential Equations With Applications to Boundary Value Problems, Amer. Math. Soc. (2012) [9] N. Kopell and M. Landman, Spatial structure of the focusing singularity of the nonlinear Schrödinger equation: a geometrical analysis, SIAM J. Appl. Math. 55 (1995), 1297-1323 [10] M. J. Landman, G. C. Papanicolaou, C. Sulem and P. Sulem, Rate of blowup for solutions of the nonlinear Schrödinger equation at critical dimension, Phys. Rev. A 38 (1988), 3837-3843 [11] I. Langmuir, Oscillations in ionized gases, Proc. Nat. Acad. Sci. 14 (1928), 627-637 [12] B. LeMesurier, G. C. Papanicolaou, C. Sulem and P. Sulem, Focusing and multi-focusing solutions of the nonlinear Schrödinger equation, Physica D 31 (1988), 78-102 [13] B. LeMesurier, G. C. Papanicolaou, C. Sulem and P. Sulem, Local structure of the self-focusing nonlinearity of the nonlinear Schrödinger equation, Physica D 32 (1988), 210-220 [14] D. W. McLaughlin, G. C. Papanicolaou, C. Sulem and P.
Sulem, Focusing singularity of the cubic nonlinear Schrödinger equation, Phys. Rev. A 34 (1986), 1200-1210 [15] A. C. Newell, Solitons in Mathematics and Physics, CBMS Applied Mathematical Series 48, SIAM, Philadelphia, PA (1985) [16] J. J. Rasmussen and K. Rypdal, Blowup in nonlinear Schrödinger equations. 1. A general review and 2. Similarity structure of the blowup singularity, Physica Scripta 33 (1986), 481-504 [17] V. Rottschafer and T. J. Kaper, Blowup in the nonlinear Schrödinger equation near critical dimension, J. Math. Anal. Appl. 268 (2002), 517-549 [18] V. Rottschafer and T. J. Kaper, Geometric theory for multi-bump self-similar blowup solutions of the cubic nonlinear Schrödinger equation, Nonlinearity 16 (3) (2003), 929-962 [19] C. Sulem and P. Sulem, The Nonlinear Schrödinger Equation, Springer, New York (1999) [20] Y. Tourigny and J. M. Sanz-Serna, The numerical study of blowup with application to a nonlinear Schrödinger equation, J. Comp. Phys. 102 (1992), 407-416 [21] X. P. Wang, On singular solutions of the nonlinear Schrödinger and Zakharov equations, Ph.D. Thesis, New York University, 1990 [22] V. E. Zakharov, Collapse of Langmuir waves, Sov. Phys. JETP 35, no. 5 (1972), 908-914 [23] V. E. Zakharov, Collapse and Self-focusing of Langmuir waves, Handbook of Plasma Physics, Vol. 2, M. N. Rosenbluth and R. Z. Sagdeev, eds., Elsevier, Amsterdam (1984)
Dynamical Mean Field Theory of the Antiferromagnetic Metal to Antiferromagnetic Insulator Transition R. Chitra and G. Kotliar Serin Physics Laboratory, Rutgers University, Piscataway, NJ 08855, USA (January 10, 2021) Abstract We study the antiferromagnetic metal to antiferromagnetic insulator transition using dynamical mean field theory and exact diagonalization methods. We find two qualitatively different behaviors depending on the degree of magnetic correlations. For strong correlations combined with magnetic frustration, the transition can be described in terms of a renormalized Slater theory, with a continuous gap closure driven by the magnetism but strongly renormalized by correlations. For weak magnetic correlations, the transition is weakly first order. PACS numbers: 71.30.+h The correlation driven metal insulator transition (MIT), or Mott transition, is one of the central problems of condensed matter physics. For generic band structures, it occurs at a finite value of the interaction strength and constitutes a genuinely nonperturbative phenomenon. Recently, a great deal of progress has been made using the Dynamical Mean Field approach (DMFT), a method which becomes exact in the well defined limit of infinite lattice coordination [1, 2]. However, all studies so far have been confined to the paramagnetic metal (PM) to paramagnetic insulator (PI) transition. In this letter, we study the transition from the antiferromagnetic metal (AM) to an antiferromagnetic insulator (AI) within DMFT. The motivation for this work is twofold. Experimentally, the interaction or pressure driven MIT in $V_{2}O_{3}$ [3, 4] and $NiS_{2-x}Se_{x}$ [5, 4] takes place between magnetically ordered states. While we are still quite far from a realistic modeling of these oxides (in this work we only consider commensurate magnetic order and ignore orbital degeneracy and realistic band structure), we would like to take into account the effects of magnetic long range order present in these systems.
On the theoretical side, we would like to understand how magnetic correlations, which control the scale at which the spin entropy is quenched, affect the MIT. We consider the Hamiltonian, $$H=t_{1}\sum_{nn}c^{\dagger}_{i}c_{j}+t_{2}\sum_{nnn}c^{\dagger}_{i}c_{j}+U\sum_{i}n_{i\uparrow}n_{i\downarrow}-J\sum_{nn}{\bf S}_{i}\cdot{\bf S}_{j}$$ (1) where $t_{1}$ and $t_{2}$ are nearest and next-nearest neighbor hoppings respectively, and $U$ is the on-site Coulomb repulsion. $J$ is a ferromagnetic spin-spin interaction that we add to the model in order to independently control the strength of the antiferromagnetic correlations of the system [8]. We parametrise the hoppings as $t_{1}^{2}=(1-\alpha)t^{2}$ and $t_{2}^{2}=\alpha t^{2}$, where $\alpha$ is the degree of frustration. $t_{2}$ breaks the perfect nesting characteristic of the bipartite lattice [6, 7] and allows for a direct AM to AI transition. Within the dynamical mean field approximation, in the presence of magnetic order, the single particle Green’s function on the $A$ and $B$ Néel sublattices has the following form $$G_{\sigma}^{-1}=\pmatrix{i\omega_{n}+\mu_{\sigma}-{\tilde{\epsilon}}_{k}-\Sigma_{A\sigma}&-\epsilon_{k}\cr-\epsilon_{k}&i\omega_{n}+\mu_{-\sigma}-{\tilde{\epsilon}}_{k}-\Sigma_{B\sigma}\cr}$$ (2) Here $\sigma=1,2$ refer to the spin up and down states, $m$ is the staggered magnetisation, and $\mu_{\sigma}=(\mu-{U\over 2}+(-1)^{\sigma}Jm)$. The $\Sigma$ are the self-energies, which are momentum independent within this approximation. The local Green’s functions $G_{ii\sigma}$ depend on the sublattices (A and B) and obey $G_{A1}=G_{B2}\equiv G_{1}$, $G_{A2}=G_{B1}\equiv G_{2}$. Similarly, $\Sigma_{A1}=\Sigma_{B2}\equiv\Sigma_{1}$ and $\Sigma_{A2}=\Sigma_{B1}\equiv\Sigma_{2}$.
Within DMFT, these local self-energies and Green’s functions are obtained from an Anderson impurity model $$\displaystyle H_{imp}=\sum_{l\sigma}\epsilon_{l\sigma}d^{\dagger}_{l\sigma}d_{l\sigma}+\sum_{l\sigma}V_{l\sigma}(d^{\dagger}_{l\sigma}f_{\sigma}+h.c.)-\sum_{\sigma}\mu_{\sigma}f^{\dagger}_{\sigma}f_{\sigma}+Un_{f\uparrow}n_{f\downarrow}$$ describing an impurity electron $f$ hybridising with a bath of conduction electrons $d$ via the spin-dependent hybridisation functions $$\Delta_{\sigma}(i\omega_{n})=\sum_{l}{{V_{l\sigma}^{2}}\over{i\omega_{n}-\epsilon_{l\sigma}}}$$ (4) where $\omega_{n}$ are the Matsubara frequencies. The $\Delta_{\sigma}$ obey self consistency conditions which can be expressed in terms of $G_{1}$, $G_{2}$ and the non-interacting density of states [2]. We restrict ourselves to the Bethe lattice in the following. The non-interacting density of states on the Bethe lattice is given by $D(\epsilon)={2\over{\pi D^{2}}}\sqrt{D^{2}-\epsilon^{2}}$ with $D={\sqrt{2}}t$ being the half bandwidth. The self-consistency conditions now have a simple form $$\Delta_{\sigma}(i\omega_{n})={{t_{1}^{2}}\over 2}G_{-\sigma}(i\omega_{n})+{{t_{2}^{2}}\over 2}G_{\sigma}(i\omega_{n})$$ (5) We consider only the half-filled case, which on the Bethe lattice is achieved when $\mu$ is chosen to be $U/2$ due to a special particle-hole symmetry of the impurity problem: $d^{\dagger}_{l\sigma}\to d_{l-\sigma}$, $d_{l\sigma}\to d^{\dagger}_{l-\sigma}$, $f_{\sigma}\to f^{\dagger}_{-\sigma}$, $f^{\dagger}_{\sigma}\to f_{-\sigma}$ and $(\epsilon,V)_{l\sigma}\to-(\epsilon,V)_{l-\sigma}$, provided $\mu=U/2$.
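The self-consistency condition (5) can be checked numerically in its simplest limit. The sketch below is not part of the paper (the function name and parameter choices are our own): it takes the paramagnetic $U=0$ case, where the impurity "solver" is just $G(i\omega_{n})=1/(i\omega_{n}-\Delta)$ and, since $t_{1}^{2}+t_{2}^{2}=t^{2}$, condition (5) collapses to $\Delta=(t^{2}/2)G$ independently of the frustration $\alpha$. The fixed point is the Green's function of the semicircular density of states with half bandwidth $D=\sqrt{2}t$.

```python
import numpy as np

def bethe_selfconsistency(t=1.0, n_matsubara=256, beta=50.0, n_iter=500):
    """Iterate the U = 0, paramagnetic limit of the self-consistency (5).

    Illustrative sketch only: with U = 0 the impurity "solver" is just
    G(iw_n) = 1/(iw_n - Delta), and t1^2 + t2^2 = t^2 reduces (5) to
    Delta = (t^2/2) G for both spin species.
    """
    D = np.sqrt(2.0) * t                                  # half bandwidth
    wn = (2 * np.arange(n_matsubara) + 1) * np.pi / beta  # fermionic Matsubara frequencies
    iw = 1j * wn
    G = 1.0 / iw                                          # initial guess
    for _ in range(n_iter):
        Delta = 0.5 * t**2 * G                            # condition (5) at U = 0
        G = 1.0 / (iw - Delta)                            # trivial impurity "solver"
    return wn, G, D

if __name__ == "__main__":
    wn, G, D = bethe_selfconsistency()
    # Analytic Green's function of the semicircular DOS on the imaginary axis:
    G_exact = (2.0 / D**2) * 1j * (wn - np.sqrt(wn**2 + D**2))
    print("max deviation from analytic result:", np.max(np.abs(G - G_exact)))
```

The iteration converges to $G(i\omega_{n})=\frac{2}{D^{2}}\,i\left(\omega_{n}-\sqrt{\omega_{n}^{2}+D^{2}}\right)$, the Hilbert transform of the semicircle. The interacting antiferromagnetic problem studied in the text replaces the trivial solver by the Lanczos impurity solver and uses both $G_{\sigma}$ and $G_{-\sigma}$ terms in (5).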
To study these equations we use the algorithm of Ref. [9], which uses the zero temperature Lanczos method to compute the Green’s functions of the impurity model above and iterates until the self-consistency given by (7) is achieved. Though the results presented below were obtained with the bath represented by $5$ sites, several of them were checked for $N=7$ to confirm the behavior seen for $N=5$. The typical number of iterations needed for robust convergence varied between 50 and 65. We found substantially different behavior in the cases of strong and weak correlations (the latter induced by a quenching of antiferromagnetism, mimicked in this paper by ferromagnetic interactions), which we discuss separately. 1. Strong Magnetic Correlations For small $U$, the system is in the paramagnetic phase. As $U$ increases beyond a critical $U_{cm}$, antiferromagnetic moments develop and the up and down spectral functions $\rho_{\sigma}(\omega)=(-1/\pi){\rm{Im}}G_{\sigma}(\omega)$ are no longer equal. The spin up spectral function has more spectral weight in the upper Hubbard band than in the lower Hubbard band, and vice versa for the spin down spectral function. The low frequency peak in $\rho_{\sigma}(\omega)$ is no longer centered around $\omega=0$ but is split into two peaks centered around some $\pm\omega_{0}$, with a minimum at $\omega=0$. This can be attributed to the fact that the effective staggered magnetic field generated when antiferromagnetism develops splits the quasiparticle bands. As $U$ is increased further, the density of states at $\omega=0$ decreases until a critical $U_{MIT}$ where $\rho_{\sigma}(\omega=0)=0$ and the system becomes insulating. For $U\geq U_{MIT}$, a gap $\Delta$, which grows with $U$, opens continuously. A plot of the staggered magnetization $m$ versus $U$ is shown in Fig. 1. $U$ is in units of $t$ in all the graphs. 
$m$ increases monotonically with $U$ and does not exhibit any special feature at the MIT. Note that though the magnetic moment is quite large at the MIT and increases with increasing $t_{2}/t_{1}$, it saturates only when one is well into the insulating phase. In the presence of magnetic long range order, the self energy does not show any anomalous behavior across the MIT. This should be contrasted with the paramagnetic case, where ${\rm Im}\Sigma$ diverges as $(i\omega_{n})^{-1}$ at the MIT. To obtain an insight into the nature of the MIT, we use the following low energy parametrization of $\Sigma_{1}$ and $\Sigma_{2}$ in conjunction with the numerical results $$\Sigma_{1}(i\omega_{n})=h+\left(1-{1\over{z}}\right)i\omega_{n},\qquad\Sigma_{2}(i\omega_{n})=-h+\left(1-{1\over{z}}\right)i\omega_{n}$$ (6) where $h$ is the staggered field generated by the interactions in the antiferromagnetic phase. $z$ goes to zero at the PM-PI transition, but in the presence of magnetism it is non-monotonic and remains non-zero even in the insulating phase (cf. Fig. 1). 
In terms of the local Green’s functions, $\Sigma_{\sigma}$ is determined by the following expression on the Bethe lattice $${G}_{\sigma}^{-1}+\Sigma_{\sigma}=i\omega_{n}+\mu_{\sigma}-{{t_{1}^{2}}\over 2}G_{-\sigma}-{{t_{2}^{2}}\over 2}G_{\sigma}$$ (7) Rewriting (7), we obtain $$1=(i\omega_{n}+{\rm Im}\Sigma_{1}+{\rm Re}\Sigma_{1})G_{1}-{{t_{1}^{2}}\over 2}G_{1}G_{2}-{{t_{2}^{2}}\over 2}G_{1}^{2},\qquad 1=(i\omega_{n}+{\rm Im}\Sigma_{2}+{\rm Re}\Sigma_{2})G_{2}-{{t_{1}^{2}}\over 2}G_{1}G_{2}-{{t_{2}^{2}}\over 2}G_{2}^{2}$$ (8) Using (6) in (8) and continuing to real frequencies, we obtain the low frequency spectral functions $$\rho_{1,2}(\omega)=\rho(0)\left[1\pm{{\omega}\over{hz}}\right]$$ (9) where $\rho(0)$ is the density of states at the Fermi level, given by $$\rho(\omega=0)={2\over{\pi D^{2}}}\sqrt{D^{2}-\left({h\over{\alpha}}\right)^{2}}$$ (10) Note that $\rho(0)$ does not depend on $z$. Since $h$ increases monotonically with $U$, $\rho(0)$ decreases with increasing $U$ in the magnetically ordered phase. This yields a critical value $h_{c}$ where $\rho(0)\to 0$ and a gap opens continuously. This is the point at which the MIT occurs. We therefore see that the MIT in this case is of a very different nature, and there are no Kondo resonances which disappear discontinuously at the MIT. Our results indicate that in the vicinity of $U_{MIT}$, $h$ increases linearly with $U$, implying that $\rho(0)$ vanishes as $(U_{MIT}-U)^{\frac{1}{2}}$ as we approach the transition from the metallic side. We now turn to the behavior of the coefficient $\gamma$ of the linear term of the specific heat. The basic physics controlling the behavior of the specific heat in the antiferromagnetic phase is the competition between the increase of the effective mass $m^{*}$ and the decrease of the density of states $\rho(0)$. 
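Equation (10) makes the continuous gap opening explicit: $\rho(0)$ shrinks with the staggered field $h$ and vanishes at $h_{c}=\alpha D$, with a square-root approach to zero. A short numerical check (the sample values of $\alpha$ and $h$ are ours, chosen only for illustration):

```python
import numpy as np

def dos_fermi(h, alpha, D):
    """Density of states at the Fermi level, Eq. (10)."""
    arg = D ** 2 - (h / alpha) ** 2
    return (2.0 / (np.pi * D ** 2)) * np.sqrt(arg) if arg > 0 else 0.0

D, alpha = np.sqrt(2.0), 0.05
h_c = alpha * D                                   # staggered field at which the gap opens
assert np.isclose(dos_fermi(0.0, alpha, D), 2.0 / (np.pi * D))  # free-band value
assert dos_fermi(h_c, alpha, D) < 1e-6                          # rho(0) -> 0 at the MIT
# rho(0) falls monotonically with h, vanishing as sqrt(h_c - h) near the MIT;
# since h grows linearly with U there, this is the (U_MIT - U)^(1/2) law of the text
vals = [dos_fermi(h, alpha, D) for h in np.linspace(0.0, h_c, 5)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```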
Though $m^{*}\propto[1-{{\partial\Sigma(i\omega_{n})}\over{\partial(i\omega_{n})}}]=z^{-1}$ initially increases, its increase is cut off by the staggered magnetization (cf. Fig. 1). When the staggered magnetization $m$ is large, the decrease in $\rho(0)$ is the dominant effect, whereas the first effect dominates when the magnetism is weak. The latter is true in the case of the paramagnetic MIT, where $m^{*}$ and hence $\gamma$ are both $\propto z^{-1}$ and diverge as $z\to 0$ at the transition. This divergence is related to the fact that there is a residual entropy in the insulating phase which results from the spin degeneracy at every site. However, such a degeneracy is lost in the case where the insulator is also an antiferromagnet. In fact, using Eqs. (6,9,10) we find $${\gamma\over{\gamma_{0}}}=\frac{1}{z}{\sqrt{D^{2}-{{h^{2}}\over{\alpha^{2}}}}}$$ (11) where $\gamma_{0}={{\pi k_{B}^{2}}\over{3t}}$ is the specific heat coefficient of the non-interacting problem. Since $z$ remains nonzero, $\gamma\to 0$ as $(U_{MIT}-U)^{\frac{1}{2}}$ at the MIT. As anticipated, we see in Fig. 2 that $\gamma$ increases with $U$ for small $m$ and decreases for larger moments. We can generalize the above to a lattice with a realistic dispersion $E_{\bf k}=\epsilon({\bf k})+{\tilde{\epsilon}}({\bf k})$, where $\epsilon$ and $\tilde{\epsilon}$ are the contributions from the $t_{1}$ and $t_{2}$ hoppings. The Fermi surface of the non-interacting system is defined by $E_{{\bf k}_{F}}=0$. When the interaction $U$ is turned on, the dispersions of the quasiparticles change and are given by the poles of the Green’s function given by Eq. (2). From the preceding analysis, we can say that the main effect of interactions is to modify the band dispersion in the following manner $$E_{\bf k}=z[{\tilde{\epsilon}}_{\bf k}\pm\sqrt{\epsilon_{\bf k}^{2}+h^{2}}]$$ (12) In the non-interacting case ($h=0$ and $z=1$) the two bands overlap and there is no gap at the Fermi surface. 
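The renormalized dispersion (12) can be probed on a toy one-dimensional band to see when a full gap opens. Here $\epsilon_{k}=-2t_{1}\cos k$ and $\tilde{\epsilon}_{k}=-2t_{2}\cos 2k$ are illustrative choices of ours, not the dispersions of the text:

```python
import numpy as np

def bands(k, z, h, t1, t2):
    """Quasiparticle bands of Eq. (12): E_k = z[eps~_k +/- sqrt(eps_k^2 + h^2)]."""
    eps = -2.0 * t1 * np.cos(k)         # illustrative t1 (inter-sublattice) part
    eps_t = -2.0 * t2 * np.cos(2 * k)   # illustrative t2 (intra-sublattice) part
    root = np.sqrt(eps ** 2 + h ** 2)
    return z * (eps_t - root), z * (eps_t + root)

k = np.linspace(-np.pi, np.pi, 2001)
t1, t2 = np.sqrt(0.95), np.sqrt(0.05)

lower0, upper0 = bands(k, z=1.0, h=0.0, t1=t1, t2=t2)
gap0 = upper0.min() - lower0.max()
assert gap0 < 1e-6                      # h = 0, z = 1: the bands touch, no gap

lower, upper = bands(k, z=0.5, h=3.0, t1=t1, t2=t2)
gap = upper.min() - lower.max()
assert gap > 0                          # large staggered field: an indirect gap opens
```

As $h$ grows the bands are pushed apart until the indirect overlap closes, which is the band-like picture of the MIT described above.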
For small $U$ in the paramagnetic phase there is no gap in the spectrum, but the bands are slightly renormalized by the factor $z$. When antiferromagnetic moments set in, the non-zero $h$ changes the band curvature and the renormalized bands start moving away from each other, reducing the Fermi surface area. The manner in which the Fermi surface area decreases is described by the density of states $\rho(0)$. For small $h$ there are regions where the bands overlap and there is no gap in the system. At a critical value $h=h_{c}$ a gap opens up in the spectrum as the Fermi surface shrinks to zero, signalling the MIT. 2. Weak Magnetic Correlations: $J\neq 0$ The previous scenario was characterized by relatively weak electron correlations: the quasiparticle residue $z$ near the transition was at most $0.65$, indicating that a relatively high fraction of the spectral weight remains coherent. This can be understood in simple qualitative terms: when the magnetic frustration is weak, the relatively large magnetic exchange produces a large magnetization, which in turn reduces the double occupancy and hence the correlations. All the spin entropy is quenched by the spin ordering, and the presence of a hopping within the same sublattice favors coherent quasiparticle propagation. To access the strongly correlated regime, where most of the spectral function is incoherent, we turn on a nonzero $J$, which reduces $h$ and $m$ and hence increases $U_{MIT}$, as suggested by (10). We studied the case $t_{1}^{2}=0.95t^{2}$ and $t_{2}^{2}=0.05t^{2}$ with $t_{2}/t_{1}=0.22$. For $J=0$, this system shows the usual band MIT discussed in the previous section, in the vicinity of $U=2t$. By choosing a $J(U)$ (cf. inset of Fig. 3) such that the magnetic moment remains very small over a large range of $U$, as shown in Fig. 3, we move the MIT to $U=3.8t$. In the antiferromagnetic metallic phase, a Kondo-like resonance persists in the spectral function up to $U=3.8t$ and disappears suddenly at the transition. 
This resonance at small $\omega$ splits into two peaks when the magnetic moment becomes appreciable, as was seen earlier. However, unlike in the paramagnetic case, the height of this resonance is not pinned at the value of the non-interacting density of states but decreases with increasing $U$. We also find that the slope $z$ decreases to values much smaller than was seen in the band transition picture, as shown in Fig. 3. $z$ decreases until the moment becomes sufficiently large as to stop its reduction. In the insulator, which still resembles the band insulator in that there is a pseudogap, the self-energy is still analytic and one can still define a $z$. We find that there is a jump in $z$ and $m$ at the transition, in addition to the discontinuous disappearance of the resonance mentioned earlier, reminiscent of a first order transition. On re-tracing the insulating solution as a function of decreasing $U$, we find that there is coexistence. The insulator survives down to $U=3.4t$, where there is a MIT of the pure band kind and the solution goes over to one which is a metallic antiferromagnet but with a much higher value of the moment $m$ and a much larger $z$. This metallic solution disappears around $U=2.4t$. We have measured the double occupancies in both these solutions and find that the metal with the smaller moment has the higher double occupancy. This, in conjunction with the ferromagnetic coupling $J$, ensures that the metal with the lower moment and higher double occupancy is the lowest energy solution. In this weak magnetic correlation regime, we find that the specific heat coefficient $\gamma$ is strongly enhanced, cf. Fig. 4. This scenario is reminiscent of the behavior seen in $V_{2}O_{3}$ [3]. Note that at the MIT there is a finite gap which is much smaller than the Hubbard gap. Coherent quasiparticle peaks also survive, albeit with very small weight. These coherent peaks are asymmetric about $\omega=0$ in both $G_{1}$ and $G_{2}$. 
Close to $U=4.1t$ these peaks vanish completely and one is left only with the incoherent Hubbard band structures. In the coexistent solution, we see that as $U$ is reduced the gap closes continuously and the weight of these two coherent peaks increases continuously too. To summarize, the character of the AM-AI transition is very different from that of the PM-PI transition. We found two clearly distinct scenarios, depending on the strength of magnetic correlations and frustration in the system. In the limit of strong magnetic correlations, the transition takes place as a renormalized Slater transition. That is, up to a multiplicative factor which remains finite at the transition, a gap opens continuously, and $\gamma$ vanishes continuously as the MIT is approached from the metallic side. On the other hand, when the development of antiferromagnetic moments in the metallic phase is suppressed, a new scenario emerges: the gap from the insulating side remains finite at the MIT, and a substantial increase of the specific heat is observed when the MIT is approached from the metallic side. Our results indicate that the transition is weakly first order. These two scenarios are remarkably similar to the observations reported in $NiS_{2-x}Se_{x}$ [5, 10] and $V_{2-y}O_{3}$ [3] respectively, and could be developed further to interpret other experimental observations [4]. References [1] W. Metzner and D. Vollhardt, Phys. Rev. Lett. 62, 324 (1989). [2] A. Georges et al., Rev. Mod. Phys. 68, 13 (1996) and references therein. [3] S. Carter et al., Phys. Rev. B 48, 16841 (1993). [4] M. Imada, A. Fujimori and T. Tokura, Metal Insulator Transitions, to appear in Rev. Mod. Phys. [5] F. Gautier et al., Phys. Lett. 53A, 31 (1975); S. Sudo, J. Magn. Magn. Mater. 114, 57 (1992). [6] M. Rozenberg, G. Kotliar and X. Y. Zhang, Phys. Rev. B 49, 10181 (1994). [7] D. Duffy and A. Moreo, Phys. Rev. B 55, 676 (1997). [8] A. Georges and L. Laloux, Mod. Phys. Lett. B 11, 913 (1997). [9] M. Caffarel and W. Krauth, Phys. 
Rev. Lett. 72, 1545 (1994). [10] S. Miyasaka et al., preprint.
Complexity of propositional Linear-time Temporal Logic with finitely many variables ††thanks: After this work was completed, we became aware that similar results had previously been obtained in [4]. We would like to thank Stéphane Demri for drawing our attention to this fact. Mikhail Rybakov Tver State University and University of the Witwatersrand, Johannesburg Dmitry Shkatov University of the Witwatersrand, Johannesburg Abstract We prove that the model-checking and satisfiability problems for propositional Linear-time Temporal Logic with a single variable are PSPACE-complete. 1 Introduction The propositional Linear-time Temporal Logic LTL, proposed in [9], is historically the first temporal logic to have been used in formal specification and verification of (parallel) non-terminating computer programs [7], such as (components of) operating systems. It has stood the test of time, despite a dizzying variety of temporal logics that have since been introduced for the purpose [3]. The task of verifying that a program conforms to a specification can be carried out by checking whether an LTL formula expressing the specification is satisfied in the structure modelling the execution paths of the program. This corresponds to the model checking problem for LTL: given a formula and a model, check if the formula is satisfied by a specified path of states in the model. The related task of verifying that a specification of a program is consistent—and, thus, can be satisfied by some program—corresponds to the satisfiability problem for LTL: given a formula, check whether there is a model and a path satisfying the formula. Therefore, the complexity of both satisfiability and model checking is of crucial interest when it comes to applications of LTL to formal specification and verification. It has been shown in [10] that both satisfiability and model checking for LTL are PSPACE-complete. 
It might have been hoped that the complexity of satisfiability, as well as of model checking, may be reduced if we consider a language with only a finite number of propositional variables, which is sufficient for most applications. Indeed, examples are known of logics whose satisfiability problem goes down from “intractable” to “tractable” once we place a limit on the number of propositional variables allowed in the language: thus, satisfiability for the classical propositional logic and the modal logic S5 goes down from NP-complete to polynomial-time once we limit the number of propositional variables by an (arbitrary) finite number. Similarly, satisfiability for the intuitionistic propositional logic goes down from PSPACE-complete to polynomial-time if we allow only a single propositional variable in the language (as follows from [8]). By contrast, for most “natural” modal and temporal logics, even a single variable is sufficient to generate a fragment whose satisfiability is as hard as satisfiability for the entire logic. The first results to this effect were proven in [1] and [11]. A general method of proving such results for PSPACE-complete logics was proposed in [5]; even though [5] considers only a handful of logics, the method can be generalised to large classes of logics, often in the language without propositional variables [6, 2]. This method, however, is not applicable to LTL, as it relies on the ability to construct models with an arbitrary branching factor, which contradicts the semantics of LTL, where formulas are evaluated with respect to paths, which are strict linear orderings on the set of states of the model. As the present paper shows, it is, however, possible to prove that satisfiability for LTL in the language containing only one propositional variable is PSPACE-complete, by modifying the construction used in [10] for proving PSPACE-completeness of the entire logic. 
The same holds true for the model checking problem: even if we are interested in formulas containing at most one variable, the model checking problem remains PSPACE-complete. Therefore, the sheer restriction on the number of propositional variables allowed in the language does not make either satisfiability or model checking for LTL tractable. 2 Syntax and semantics The language of LTL contains an infinite set of propositional variables $\textit{{Var}}=\{p_{1},p_{2},\ldots\}$, the Boolean constant $\perp$ (“falsehood”), the Boolean connective $\rightarrow$ (“if …, then …”), and the temporal operators $\bigcirc$ (“next”) and $\mathcal{U}$ (“until”). The formulas are defined by the following BNF expression: $$\varphi:=p\mid\ \perp\ \mid(\varphi\rightarrow\varphi)\mid\bigcirc\varphi\mid(\varphi\,\mathcal{U}\,\varphi),$$ where $p$ ranges over Var. We also define $\top:=({\perp}\rightarrow{\perp})$, $\neg\varphi:=(\varphi\rightarrow{\perp})$, $(\varphi\wedge\psi):=\neg(\varphi\rightarrow\neg\psi)$, $\Diamond\varphi:=(\top\,\mathcal{U}\,\varphi)$ and $\Box\varphi:=\neg\Diamond\neg\varphi$. We adopt the usual conventions about omitting parentheses. 
For every formula $\varphi$ and every $n\geqslant 0$, we inductively define the formula $\bigcirc^{n}\varphi$ as follows: $\bigcirc^{0}\varphi:=\varphi$, $\bigcirc^{n+1}\varphi:=\bigcirc\bigcirc^{n}\varphi$. Formulas are evaluated in Kripke models (often referred to as “transition systems”). A Kripke model is a tuple $\mathfrak{M}=(\mathcal{S},\longmapsto,V)$, where $\mathcal{S}$ is a non-empty set (of states), $\longmapsto$ is a binary (transition) relation on $\mathcal{S}$ that is serial (i.e., for every $s\in\mathcal{S}$, there exists $s^{\prime}\in\mathcal{S}$ such that $s\longmapsto s^{\prime}$), and $V$ is a (valuation) function $V:\textit{{Var}}\rightarrow 2^{\mathcal{S}}$. An infinite sequence $s_{0},s_{1},\ldots$ of states of $\mathfrak{M}$ such that $s_{i}\longmapsto s_{i+1}$, for every $i\geqslant 0$, is called a path. Given a path $\pi$ and some $i\geqslant 0$, we denote by $\pi[i]$ the $i$th element of $\pi$ and by $\pi[i,\infty]$ the suffix of $\pi$ beginning with its $i$th element. Formulas are evaluated with respect to paths. 
The satisfaction relation between models $\mathfrak{M}$, paths $\pi$, and formulas $\varphi$ is inductively defined as follows: • $\mathfrak{M},\pi\models p_{i}$ $\leftrightharpoons$ $\pi[0]\in V(p_{i})$; • $\mathfrak{M},\pi\models\bot$ never holds; • $\mathfrak{M},\pi\models(\varphi_{1}\rightarrow\varphi_{2})$ $\leftrightharpoons$ $\mathfrak{M},\pi\models\varphi_{1}$ implies $\mathfrak{M},\pi\models\varphi_{2}$; • $\mathfrak{M},\pi\models\bigcirc\varphi_{1}$ $\leftrightharpoons$ $\mathfrak{M},\pi[1,\infty]\models\varphi_{1}$; • $\mathfrak{M},\pi\models\varphi_{1}\,\mathcal{U}\,\varphi_{2}$ $\leftrightharpoons$ $\mathfrak{M},\pi[i,\infty]\models\varphi_{2}$ for some $i\geqslant 0$ and $\mathfrak{M},\pi[j,\infty]\models\varphi_{1}$ for every $j$ with $0\leqslant j<i$. A formula is satisfiable if it is satisfied by some path of some model. A formula is valid if it is satisfied by every path of every model. We now state the two computational problems considered in the following section. The satisfiability problem for LTL: given a formula $\varphi$, determine whether there exist a model $\mathfrak{M}$ and a path $\pi$ in $\mathfrak{M}$ such that $\mathfrak{M},\pi\models\varphi$. The model-checking problem for LTL: given a formula $\varphi$, a model $\mathfrak{M}$, and a path $\pi$ in $\mathfrak{M}$, determine whether $\mathfrak{M},\pi\models\varphi$. Clearly, a formula $\varphi$ is valid if, and only if, $\neg\varphi$ is not satisfiable; thus any deterministic algorithm that solves the satisfiability problem also solves the validity problem, and vice versa. By the variable-free fragment of LTL we mean the set of LTL formulas containing no propositional variables. 3 Complexity of satisfiability and model-checking for finite-variable fragments It is well-known that both model-checking and satisfiability for LTL are PSPACE-complete [10] if we consider arbitrary formulas. 
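The satisfaction clauses can be turned directly into a checker for ultimately periodic ("lasso") paths, where every suffix coincides with one of finitely many positions, so the until clause becomes a least fixpoint over a finite graph. The tuple encoding of formulas and the helper names below are our own, used only for illustration:

```python
def eval_ltl(phi, prefix, loop):
    """Positions where phi holds on the lasso path prefix + loop^omega.
    States are sets of true variable names; formulas are nested tuples:
    ('p',) variable, ('bot',), ('->', a, b), ('X', a), ('U', a, b)."""
    states = prefix + loop
    n = len(states)
    succ = lambda i: i + 1 if i < n - 1 else len(prefix)   # wrap back into the loop
    op = phi[0]
    if op == 'bot':
        return set()
    if op == '->':
        a = eval_ltl(phi[1], prefix, loop)
        b = eval_ltl(phi[2], prefix, loop)
        return (set(range(n)) - a) | b
    if op == 'X':
        a = eval_ltl(phi[1], prefix, loop)
        return {i for i in range(n) if succ(i) in a}
    if op == 'U':
        a = eval_ltl(phi[1], prefix, loop)
        sat = set(eval_ltl(phi[2], prefix, loop))  # seed: positions where b holds
        for _ in range(n):                         # n rounds reach the least fixpoint
            sat |= {i for i in a if succ(i) in sat}
        return sat
    return {i for i in range(n) if op in states[i]}        # propositional variable

# Derived operators and a sanity check on the path where p is false once, then true forever
top = ('->', ('bot',), ('bot',))
def neg(a): return ('->', a, ('bot',))
def diamond(a): return ('U', top, a)
def box(a): return neg(diamond(neg(a)))

path = ([set()], [{'p'}])                                  # prefix, loop
assert 0 in eval_ltl(diamond(('p',)), *path)               # Diamond p holds initially
assert 0 not in eval_ltl(box(('p',)), *path)               # Box p fails at position 0
assert 1 in eval_ltl(box(('p',)), *path)                   # but holds from position 1 on
```

This naive checker takes time polynomial in the path and formula size; the hardness results below concern paths represented succinctly by a model, not explicit lassos.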
In this section, we consider the complexity of both problems for finite-variable fragments of LTL. We begin by noting that, for the variable-free fragment, both problems are polynomially decidable. Indeed, it is easy to check that every variable-free LTL formula is equivalent to either $\perp$ or $\top$ (for example, ${\top}\,\mathcal{U}\,{\top}$ is equivalent to $\top$ and ${\top}\,\mathcal{U}\,{\perp}$ is equivalent to $\perp$). Thus, to check for satisfiability of a variable-free formula $\varphi$, all we need to do is to recursively replace each subformula of $\varphi$ by either $\perp$ or $\top$, which is linear in the size of $\varphi$. Likewise for model-checking. We next show that both model checking and satisfiability for the single-variable fragment of LTL are PSPACE-complete. As the upper bound immediately follows from the results in [10], we only need to prove PSPACE-hardness of both problems. This is done by appropriately modifying the construction from [10], where an arbitrary problem “$x\in A?$” solvable by polynomially space-bounded (deterministic) Turing machines is reduced to model-checking for LTL. (The authors of [10] then reduce the model-checking problem for LTL to the satisfiability problem for LTL.) We modify the construction from [10] so that we simultaneously reduce the problem “$x\in A?$” to both model checking and satisfiability for LTL using formulas containing only one variable. Let $M=(Q,\Sigma,q_{0},q_{1},a_{0},a_{1},\delta)$ be a (deterministic) Turing machine, where $Q$ is the set of states, $\Sigma$ is the alphabet, $q_{0}$ is the starting state, $q_{1}$ is the final state, $a_{0}$ is the blank symbol, $a_{1}$ is the symbol marking the leftmost cell, and $\delta$ is the machine’s program. We adopt the convention that $M$ gives a positive answer if, at the end of the computation, the tape is blank save for $a_{0}$ written in the leftmost cell. 
We assume, for technical reasons, that $\delta$ contains an instruction to the effect that the “yes” configuration yields itself. Given an input of length $n$, we assume that the amount of space $M$ uses is $S(n)$, for some polynomial $S$. We now construct a model $\mathfrak{M}$, a path $\pi$ in $\mathfrak{M}$, and a formula $\psi$—of a single variable, $p$—such that $x\in A$ if, and only if, $\mathfrak{M},\pi\models\psi$. It will also be the case that $x\in A$ if, and only if, $\psi$ is LTL-valid. The model $\mathfrak{M}$ intuitively corresponds, in the way described below, to the computation of $M$ on input $x$. First, we need the ability to model natural numbers within a certain range, say $1$ through $k$. To that end, we use models based on the frame $\mathfrak{F}_{k}$, depicted in Figure 1, which is a line made up of $k$ states. By making $p$ true exactly at the $i$th state of $\mathfrak{F}_{k}$, where $1\leqslant i\leqslant k$, we obtain a model representing the natural number $i$. We denote the model representing the number $m$ by $\mathfrak{N}_{m}$. We next use the models $\mathfrak{N}_{m}$ to build a model representing all possible contents of a single cell of $M$. Let $|Q|=n_{1}$ and $|\Sigma|=n_{2}$. As each cell of $M$ may contain either a symbol from $\Sigma$ or a sequence $qa$, where $q\in Q$ and $a\in\Sigma$, indicating that $M$ is scanning the present cell, on which $a$ is written, there are $n_{2}\times(n_{1}+1)$ possibilities for the contents of a single cell of $M$. Let $k=n_{2}\times(n_{1}+1)$; clearly, $k$ is independent of the size of the input $x$. To model the contents of a single cell, we use the models $\mathfrak{N}_{1}$ through $\mathfrak{N}_{k}$ to build a model $\mathfrak{C}$, depicted in Figure 2, where small boxes represent the models $\mathfrak{N}_{1}$ through $\mathfrak{N}_{k}$. 
In Figure 2, an arrow from $s_{0}$ to a box corresponding to model $\mathfrak{N}_{m}$ represents a transition from $s_{0}$ to the first state of $\mathfrak{N}_{m}$, and an arrow from a box corresponding to model $\mathfrak{N}_{m}$ to $s_{1}$ represents a transition from the last state of $\mathfrak{N}_{m}$ to $s_{1}$. On the states in $\mathfrak{N}_{m}$ ($1\leqslant m\leqslant k$), the evaluation of $p$ in $\mathfrak{C}$ agrees with the evaluation of $p$ in $\mathfrak{N}_{m}$; in addition, $p$ is false both at $s_{0}$ and $s_{1}$. Let the length of $x$ be $n$. We next use $S(n)$ copies of $\mathfrak{C}$ to represent a single configuration of $M$. This is done with the model $\mathfrak{M}$, depicted in Figure 3. In $\mathfrak{M}$, a chain made up of $S(n)$ copies of $\mathfrak{C}$ is preceded by a model $\mathfrak{B}$ marking the beginning of a configuration; the use of $\mathfrak{B}$ allows us to separate configurations from each other. All that is required of the shape of $\mathfrak{B}$ is for it to contain a pattern of states (with an evaluation) that does not occur elsewhere in $\mathfrak{M}$; thus, we may use the frame $\mathfrak{F}_{3}$ and define the evaluation to make $p$ true at every one of its states. This completes the construction of the model $\mathfrak{M}$. One might think of $\mathfrak{M}$ as consisting of “cycles,” each cycle representing a single configuration of $M$ in the following way: to obtain a particular configuration of $M$, pick a path from the first state of $\mathfrak{B}$ to the last state of the last copy of $\mathfrak{C}$ that traverses the model $\mathfrak{N}_{i}$ within the $j$th copy of $\mathfrak{C}$ exactly when the $j$th cell of the tape of $M$ contains the $i$th “symbol” from the alphabet $\Sigma\,\cup\,Q\times\Sigma$. We now describe how to build a formula $\psi$ whose satisfaction we want to check with respect to an infinite path beginning with the first state of $\mathfrak{B}$. 
It is rather straightforward to write out the following formulas: • A formula $\psi_{start}$ describing the initial configuration of $M$ on $x$; • A formula $\psi_{positive}$ describing the configuration of $M$ corresponding to the positive answer. The length of both $\psi_{start}$ and $\psi_{positive}$ is proportional to $k\times S(n)$. Next, we can write out a formula $\psi_{\delta}$ describing the program $\delta$ of $M$. This can be done by starting with formulas of the form $\bigcirc^{j}\sigma$, where $j$ is the number of states in a path leading from the first state of $\mathfrak{B}$ to the last state of the last copy of $\mathfrak{C}$ in a single “cycle” in $\mathfrak{M}$, to describe the change in the contents of the cells from one configuration to the next, and then, for each instruction $I$ from $\delta$, writing a formula $\alpha(I)$ of the form $\bigwedge_{i=0}^{S(n)}\Box\chi$, where $\chi$ describes the changes occurring in each cell of $M$. Clearly, the length of each $\alpha(I)$ is proportional to $k\times S(n)$. Then, $\psi_{\delta}=\bigwedge_{I\in\delta}\alpha(I)$. As the number of instructions in $\delta$ is independent of the length of the input, the length of $\psi_{\delta}$ is proportional to $c\times S(n)$, for some constant $c$. Lastly, we define $$\psi=\psi_{start}\wedge\Box\psi_{\delta}\rightarrow\Diamond\psi_{positive}.$$ One can then show, by induction on the length of the computation of $M$ on $x$, that $M(x)=yes$ if, and only if, $\psi$ is satisfied in $\mathfrak{M}$ by an infinite path corresponding, in the way described above, to the computation of $M$ on $x$. This gives us the following: Theorem 3.1 The model-checking problem for LTL formulas with at most one variable is PSPACE-complete. 
Likewise, we can show that $M(x)=yes$ if, and only if, $\psi$ is satisfiable, which gives us the following: Theorem 3.2 The satisfiability problem for LTL formulas with at most one variable is PSPACE-complete. 4 Conclusion We have shown that both model-checking and satisfiability for the single-variable fragment of the propositional Linear-time Temporal Logic LTL are PSPACE-complete and therefore have the same complexity as the corresponding problems for the entire logic. Therefore, restricting the language of LTL to contain only a finite number of propositional variables does not yield fragments—as is the case with the classical propositional logic, for example—that are computationally more tractable than the entire logic. This negative result should be borne in mind when searching for tractable fragments of LTL, a task of continuing relevance in the field of formal verification of systems. References [1] Patrick Blackburn and Edith Spaan. A modal perspective on the computational complexity of attribute value grammar. Journal of Logic, Language, and Information, 2:129–169, 1993. [2] Alexander Chagrov and Mikhail Rybakov. How many variables does one need to prove PSPACE-hardness of modal logics? In Advances in Modal Logic, volume 4, pages 71–82, 2003. [3] Stéphane Demri, Valentin Goranko, and Martin Lange. Temporal Logics in Computer Science. Cambridge University Press, 2016. [4] Stéphane Demri and Philippe Schnoebelen. The complexity of propositional linear temporal logics in simple cases. Information and Computation, 174:84–103, 2002. [5] Joseph Y. Halpern. The effect of bounding the number of primitive propositions and the depth of nesting on the complexity of modal logic. Artificial Intelligence, 75(2):361–372, 1995. [6] Edith Hemaspaandra. The complexity of poor man’s logic. Journal of Logic and Computation, 11(4):609–622, 2001. [7] Michael Huth and Mark Ryan. Logic in Computer Science: Modelling and Reasoning about Systems. 
Cambridge University Press, 2nd edition, 2004. [8] Iwao Nishimura. On formulas of one variable in intuitionistic propositional calculus. Journal of Symbolic Logic, 25(4):327–331, 1960. [9] Amir Pnueli. The temporal logic of programs. In Proceedings of the 18th IEEE Symposium on Foundations of Computer Science, pages 46–67, 1977. [10] A. Prasad Sistla and Edmund M. Clarke. The complexity of propositional linear temporal logics. Journal of the ACM, 32(3):733–749, 1985. [11] Vítěslav Švejdar. The decision problem of provability logic with only one atom. Archive for Mathematical Logic, 42(8):763–768, 2003.
A General Framework for Fair Allocation with Matroid Rank Valuations Vignesh Viswanathan and Yair Zick University of Massachusetts, Amherst {vviswanathan, yzick}@umass.edu Abstract We study the problem of fairly allocating a set of indivisible goods among agents with matroid rank valuations. We present a simple framework that efficiently computes an allocation maximizing any fairness objective satisfying some mild assumptions. Along with maximizing a fairness objective, the framework is guaranteed to run in polynomial time, maximize utilitarian social welfare, and ensure strategyproofness. We show how our framework can be used to achieve four different fairness objectives: (a) Prioritized Lorenz dominance, (b) Maxmin fairness, (c) Weighted leximin, and (d) Max weighted Nash welfare. In particular, our framework provides the first polynomial time algorithms to compute weighted leximin and max weighted Nash welfare allocations for matroid rank valuations. 1 Introduction Consider the problem of assigning class seats to university students. Students have preferences over the classes they want to take; however, there is a limited number of seats available. The assignment must satisfy several additional constraints; for example, students may not take classes with conflicting schedules, and universities often impose limits on the maximal number of classes (or course credits) one can take in a semester. In addition, students often have priorities: seniors should be afforded more leeway than juniors on course selection, so that they can graduate on time; similarly, degree majors have priority in selecting classes from their home department. Course allocation can be naturally modeled as an instance of a fair allocation problem: students are agents who have a utility for receiving bundles of items (course seats). Our objective is to efficiently compute an allocation (an assignment of seats to students) that satisfies certain justice criteria. 
For example, one might be interested in computing an efficient allocation — in the course allocation setting, this would be an allocation that maximizes the number of course seats assigned to students who are willing and able to take them. Alternatively, one might want to find an envy-free assignment — one where every student prefers their assigned set of classes to that of any other student. Finding good allocations is computationally intractable under general agent utilities (Bouveret et al., 2016). However, student preferences can be modeled as a well-structured utility function: student preferences over classes are submodular; in economic jargon, they have decreasing returns to scale. In addition, assuming students (or rather, university administrators) are only interested in taking the maximal number of classes relevant to them, their gain from taking an additional class is either $0$ or $1$. This class of preferences is known as binary submodular valuations (Benabbou et al., 2021). Recently, Viswanathan and Zick (2022) introduce the Yankee Swap algorithm for computing fair and efficient allocations. The algorithm starts with all items unassigned, and at every round picks an agent to play. The agent can choose to either pick an unassigned item to add to its bundle, or to steal an item from another agent. If they choose to steal, then they initiate a transfer path, where each agent steals an item from another, until the final agent takes an unassigned item. This continues until no agents can take unassigned items they want, or no items are left. If agents have binary submodular valuations, then Yankee Swap is guaranteed to output a Lorenz dominating allocation. As shown by Babaioff et al. (2021a), Lorenz dominating allocations satisfy a broad range of fairness and efficiency criteria: they are leximin fair, maximize utilitarian social welfare and Nash welfare, are envy-free up to one item, and guarantee every agent at least half of their maximin share.
In addition, they are truthful: no agent has an incentive to misreport their preferences over items. While Babaioff et al. (2021a) present a poly-time algorithm for computing Lorenz dominating allocations, Yankee Swap is significantly simpler to implement, and has a faster running time. Due to its simplicity, the Yankee Swap algorithm can be easily modified to achieve other fairness objectives. In this paper, we answer the question: "What fairness objectives can be achieved by modifying Yankee Swap?" As it turns out, the answer to this question is: surprisingly many. 1.1 Our Contribution In this work, we show that a minor modification to the Yankee Swap algorithm — the order in which it lets agents play — allows it to compute allocations satisfying a broad range of justice criteria. In particular, this allows us to compute a weighted variant of leximin allocations. In addition, Yankee Swap can compute allocations maximizing the weighted Nash welfare (Chakraborty et al., 2021a). We also show that Yankee Swap can be used to compute other fairness objectives like Lorenz dominance and maxmin fair share. Furthermore, the allocations that Yankee Swap outputs always maximize social welfare, and are always truthful. More generally, we identify certain necessary conditions a justice criterion must satisfy in order for Yankee Swap to compute it. In more detail, a justice criterion $\psi$ describes an order over item allocations; for example, under the Nash welfare criterion an allocation $X$ is better than $Y$ if the product of agent utilities under $X$ is greater than under $Y$. In order for Yankee Swap to work, a justice criterion must satisfy Pareto dominance — if $Y$ Pareto dominates $X$ then $\psi(Y)\geq\psi(X)$ — and it must admit a gain function $\phi$. Intuitively, a gain function measures the benefit of letting an agent $i$ choose an item in a given round of Yankee Swap.
When these two simple conditions are met, the justice criterion $\psi$ can be maximized by Yankee Swap. 1.2 Related Work We study fair allocation when agents have matroid rank valuations. Benabbou et al. (2021) first show that it is possible to compute a welfare maximizing envy-free up to one good (EF1) allocation in polynomial time. Babaioff et al. (2021a) extend this result and show that it is possible to compute Lorenz dominating allocations in polynomial time; Lorenz dominance is a stronger notion of fairness than leximin and it implies a host of other fairness properties. Recently, Viswanathan and Zick (2022) show that Lorenz dominating allocations can be computed using a simple Yankee Swap based mechanism. Barman and Verma (2021) show that it is possible to compute an allocation which guarantees each agent their maxmin share. Our results use some technical lemmas from their work. Fair allocation with asymmetric agents is a well-studied problem. Several weighted fairness metrics have been proposed and studied in the literature. Farhadi et al. (2019) introduce and study the weighted maxmin share. Chakraborty et al. (2021a) introduce the notion of weighted envy freeness and propose algorithms to compute allocations which are weighted envy free up to one good. This notion was generalized by Chakraborty et al. (2022). Aziz et al. (2020) introduce and study the notion of weighted proportionality. Babaioff et al. (2021b) propose and study additional extensions of the maxmin share to the weighted setting. Other works study the computation of existing weighted fairness measures. Chakraborty et al. (2021b) study the properties of weighted fairness existing picking sequences satisfy. Garg et al. (2021) study the computation of allocations which maximize the weighted Nash welfare. Li et al. (2022) compute weighted proportional allocations for chores. Aziz et al. (2019) study the weighted maxmin share in the presence of indivisible chores.
The work arguably closest to ours is Suksompong and Teh (2022). Suksompong and Teh (2022) propose an algorithm to compute a max weighted Nash welfare allocation when agents have binary additive valuations. Their algorithm uses a path transfer based argument similar to ours (Algorithm 1). Our result builds on that of Suksompong and Teh (2022) in two ways. First, our result holds for a more general class of valuations. Second, our framework can compute several other fairness properties in addition to maximizing the weighted Nash welfare. 2 Preliminaries We use $[t]$ to denote the set $\{1,2,\dots,t\}$. For ease of readability, we replace $A\cup\{g\}$ and $A\setminus\{g\}$ with $A+g$ and $A-g$ respectively. We have a set of $n$ agents $N=[n]$ and a set of $m$ goods $G=\{g_{1},g_{2},\dots,g_{m}\}$. Each agent $i\in N$ has a valuation function $v_{i}:2^{G}\rightarrow\mathbb{R}^{+}$; $v_{i}(S)$ specifies the value agent $i$ has for the set of goods $S\subseteq G$. Each agent $i$ also has a positive weight $w_{i}\in\mathbb{R}^{+}$ that corresponds to their entitlement; we do not place any other constraint on the entitlement other than positivity. We use $\Delta_{i}(S,g)=v_{i}(S+g)-v_{i}(S)$ to denote the marginal gain of adding the good $g$ to the bundle $S$ for the agent $i$. Throughout the paper, we assume each $v_{i}$ is a matroid rank function (MRF). A function $v_{i}$ is a matroid rank function if (a) $v_{i}(\emptyset)=0$, (b) for any $g\in G$ and $S\subseteq G$, we have $\Delta_{i}(S,g)\in\{0,1\}$, and (c) for any $S\subseteq T\subseteq G$ and a good $g$, we have $\Delta_{i}(S,g)\geq\Delta_{i}(T,g)$. These functions are also referred to as binary submodular valuations; we use the two terms interchangeably. An allocation $X$ is a partition of the set of goods into $n+1$ sets $(X_{0},X_{1},\dots,X_{n})$ where each agent $i\in N$ gets the bundle $X_{i}$ and $X_{0}$ consists of the unallocated goods.
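To make the definition concrete, consider a capped coverage valuation $v_{i}(S)=\min(|S\cap T_{i}|,c_{i})$: agent $i$ is interested only in the goods in $T_{i}$ and gets no benefit beyond $c_{i}$ of them (e.g., a student who wants at most $c_{i}$ of the courses in $T_{i}$). Such a valuation is a matroid rank function. The sketch below is our own illustration (the helpers `make_valuation` and `is_mrf` are not from the paper); it brute-force checks properties (a)–(c) on a small ground set.

```python
from itertools import chain, combinations

def powerset(ground):
    """All subsets of a small ground set, as tuples."""
    return chain.from_iterable(combinations(ground, r) for r in range(len(ground) + 1))

def make_valuation(relevant, cap):
    """Capped coverage valuation v(S) = min(|S ∩ relevant|, cap) -- an MRF."""
    return lambda S: min(len(set(S) & relevant), cap)

def is_mrf(v, ground):
    """Brute-force check of the three MRF properties (feasible only for tiny G)."""
    if v(frozenset()) != 0:                      # (a) normalization
        return False
    for S in map(frozenset, powerset(ground)):
        for g in ground - S:
            if v(S | {g}) - v(S) not in (0, 1):  # (b) binary marginal gains
                return False
            for T in map(frozenset, powerset(ground)):
                # (c) submodularity: Delta(S, g) >= Delta(T, g) whenever S ⊆ T
                if S <= T and g not in T and v(S | {g}) - v(S) < v(T | {g}) - v(T):
                    return False
    return True

goods = frozenset({"g1", "g2", "g3", "g4"})
v = make_valuation(relevant={"g1", "g2", "g3"}, cap=2)
print(is_mrf(v, goods))  # -> True
```

For instance, $v(\{g_1,g_2,g_3\})=2$ here: the third relevant good has marginal gain $0$ because the cap binds, which is exactly the "0 or 1 gain" structure described in the introduction.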
An allocation $X$ is said to be non-redundant if for all $i\in N$, we have $v_{i}(X_{i})=|X_{i}|$. For any allocation $X$, $v_{i}(X_{i})$ is referred to as the utility or value of agent $i$ under the allocation $X$. For ease of analysis, we sometimes treat $0$ as an agent with valuation function $v_{0}(S)=|S|$ and bundle $X_{0}$. None of our fairness measures take the agent $0$ into account. By our choice of $v_{0}$, any allocation which is non-redundant for the set of agents $N$ is also non-redundant for the set of agents $N+0$. We have the following simple, useful result about non-redundant allocations. Variants of this result have been shown by Benabbou et al. (2021) and Viswanathan and Zick (2022). Lemma 2.1 (Benabbou et al. (2021), Viswanathan and Zick (2022)). Let $X$ be an allocation. There exists a non-redundant allocation $X^{\prime}$ such that $v_{i}(X^{\prime}_{i})=v_{i}(X_{i})$ for all $i\in N$. Proof. For each agent $i$, create $X^{\prime}_{i}$ starting at $\emptyset$ and add goods one by one from $X_{i}$ as long as the good added provides a marginal gain of $1$; stop when no good in $X_{i}\setminus X^{\prime}_{i}$ can provide a marginal gain of $1$ to $X^{\prime}_{i}$. Allocate all leftover goods to $X_{0}$. It is easy to see that $X^{\prime}$ is non-redundant; we only added goods when they had a marginal gain of $1$. It remains to show that $v_{i}(X^{\prime}_{i})=v_{i}(X_{i})$. Let $X_{i}\setminus X^{\prime}_{i}=\{g_{1},g_{2},\dots,g_{k}\}$. Let $Z^{j}=\{g_{1},g_{2},\dots,g_{j-1}\}$ ($Z^{1}=\emptyset$). By the definition of $X^{\prime}_{i}$, we have $\Delta_{i}(X^{\prime}_{i},g_{j})=0$ for all $j\in[k]$. By the submodular property, this implies that $\Delta_{i}(X^{\prime}_{i}\cup Z^{j},g_{j})=0$ as well for all $j\in[k]$. We have $$\displaystyle v_{i}(X_{i})=v_{i}(X^{\prime}_{i})+\sum_{j=1}^{k}\Delta_{i}(X^{\prime}_{i}\cup Z^{j},g_{j})=v_{i}(X^{\prime}_{i})$$ This completes the proof.
∎ We define the exchange graph of a non-redundant allocation $X$ (denoted by $\mathcal{G}(X)$) as a directed graph defined over the set of goods $G$. An edge exists from good $g\in X_{i}$ to another good $g^{\prime}$ if $v_{i}(X_{i}-g+g^{\prime})=v_{i}(X_{i})$. Intuitively, this means that from the perspective of agent $i$ (who owns $g$), $g$ can be replaced with $g^{\prime}$ with no decrease to agent $i$’s utility. The exchange graph is a useful representation since it can be used to compute valid transfers of goods between agents. Let $P=(g_{1},g_{2},\dots,g_{t})$ be a path in the exchange graph for the allocation $X$. We define a transfer of goods along the path $P$ in the allocation $X$ as the operation where $g_{t}$ is given to the agent who has $g_{t-1}$, $g_{t-1}$ is given to the agent who has $g_{t-2}$ and so on till finally $g_{1}$ is discarded. This transfer is called path augmentation; the bundle $X_{i}$ after path augmentation with the path $P$ is denoted by $X_{i}\Lambda P$ and defined as $X_{i}\Lambda P=(X_{i}-g_{t})\oplus\{g_{j},g_{j+1}:g_{j}\in X_{i}\}$ where $\oplus$ denotes the symmetric set difference operation. While the conventional notation for path augmentation uses $\Delta$ (Barman and Verma, 2021; Schrijver, 2003), we replace it with $\Lambda$ to avoid confusion with the other definition of $\Delta$ as the marginal gain of adding an item to a bundle. For any non-redundant allocation $X$ and agent $i$, we define $F_{i}(X)=\{g\in G:\Delta_{i}(X_{i},g)=1\}$ as the set of goods which give agent $i$ a marginal gain of $1$. For any agent $i$, let $P=(g_{1},\dots,g_{t})$ be the shortest path from $F_{i}(X)$ to $X_{j}$ for some $j\neq i$. Then path augmentation with the path $P$ and giving $g_{1}$ to $i$ results in an allocation where $i$’s value for their bundle goes up by $1$, $j$’s value for their bundle goes down by $1$ and all the other agents do not see any change in value. 
This is formalized below and exists both in Viswanathan and Zick (2022, Lemma 5) and Barman and Verma (2021, Lemma 1). Lemma 2.2 (Viswanathan and Zick (2022); Barman and Verma (2021)). Let $X$ be a non-redundant allocation. Let $P=(g_{1},\dots,g_{t})$ be the shortest path from $F_{i}(X)$ to $X_{j}$ for some $i\in N+0$ and $j\in N+0-i$. Then, we have for all $k\in N-i-j$, $v_{k}(X_{k}\Lambda P)=v_{k}(X_{k})$, $v_{i}(X_{i}\Lambda P+g_{1})=v_{i}(X_{i})+1$ and $v_{j}(X_{j}\Lambda P)=v_{j}(X_{j})-1$. Furthermore, the new allocation is non-redundant. This lemma is particularly useful when the path ends at some good in $X_{0}$; then transferring goods along the path creates an allocation where no agent loses any value but one agent strictly gains in value. We say there is a path from some agent $i$ to some agent $j$ in an allocation $X$ if there is a path from $F_{i}(X)$ to $X_{j}$ in the exchange graph $\mathcal{G}(X)$. Viswanathan and Zick (2022, Theorem 3) establish a sufficient condition for a path to exist; we present it below. Lemma 2.3 (Viswanathan and Zick (2022)). Let $X$ and $Y$ be two non-redundant allocations. Let $S^{-}$ be the set of agents $k\in N+0$ such that $|X_{k}|<|Y_{k}|$, $S^{=}$ be the set of agents $k\in N+0$ such that $|X_{k}|=|Y_{k}|$ and $S^{+}$ be the set of agents $k\in N+0$ such that $|X_{k}|>|Y_{k}|$. Then, for any $i\in S^{-}$, there is a path from $F_{i}(X)$ to $X_{j}$ in the exchange graph $\mathcal{G}(X)$ where $j\in S^{+}$. We define the utility vector of an allocation $X$ as $\vec{u}^{X}=(v_{1}(X_{1}),v_{2}(X_{2}),\dots,v_{n}(X_{n}))$. In general, a fairness objective (denoted by $\psi$) can be viewed as a function that maps the utility vector of an allocation to a totally ordered space $\mathcal{Y}$. The goal is to compute an allocation with a utility vector that maximizes $\psi$. We sometimes abuse notation and use $\psi(X)$ to denote $\psi(\vec{u}^{X})$.
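The exchange-graph machinery can be sketched directly from the definitions. The code below is a minimal illustration of our own (helper names such as `build_exchange_graph` and `augment` are not from the paper): a dummy agent $0$ holds the unallocated goods with $v_{0}(S)=|S|$, a shortest path is found by BFS, and augmentation hands each $g_{j}$ over while its owner receives $g_{j+1}$, in the spirit of Lemma 2.2.

```python
from collections import deque

def build_exchange_graph(goods, bundles, valuations):
    """Edge g -> g' iff the owner i of g has v_i(X_i - g + g') = v_i(X_i).
    `bundles` maps every agent, including a dummy agent 0 holding the
    unallocated goods with v_0(S) = |S|, to its set of goods."""
    owner = {g: i for i, B in bundles.items() for g in B}
    edges = {g: [] for g in goods}
    for g in goods:
        i, Bi = owner[g], bundles[owner[g]]
        for gp in goods:
            if gp not in Bi and valuations[i]((Bi - {g}) | {gp}) == valuations[i](Bi):
                edges[g].append(gp)
    return edges

def shortest_path(edges, sources, targets):
    """BFS for a shortest path from any good in `sources` to any in `targets`."""
    queue, parent = deque(sources), {s: None for s in sources}
    while queue:
        g = queue.popleft()
        if g in targets:
            path = []
            while g is not None:
                path.append(g)
                g = parent[g]
            return path[::-1]
        for gp in edges[g]:
            if gp not in parent:
                parent[gp] = g
                queue.append(gp)
    return None

def augment(bundles, path, receiver):
    """Path augmentation: each owner of g_j gives g_j away and receives g_{j+1};
    the last good leaves its old bundle and g_1 goes to `receiver`."""
    owner = {g: i for i, B in bundles.items() for g in B}
    new = {i: set(B) for i, B in bundles.items()}
    for gj, gnext in zip(path, path[1:]):
        new[owner[gj]].discard(gj)
        new[owner[gj]].add(gnext)
    new[owner[path[-1]]].discard(path[-1])
    new[receiver].add(path[0])
    return new

# Tiny demo: agent 2 wants good a, which agent 1 holds; b is unallocated.
goods = {"a", "b"}
vals = {0: len, 1: lambda S: len(S & {"a", "b"}), 2: lambda S: len(S & {"a"})}
X = {0: {"b"}, 1: {"a"}, 2: set()}
F2 = {g for g in goods if g not in X[2] and vals[2](X[2] | {g}) - vals[2](X[2]) == 1}
P = shortest_path(build_exchange_graph(goods, X, vals), F2, X[0])
Y = augment(X, P, receiver=2)
print(P, Y)  # ['a', 'b'] {0: set(), 1: {'b'}, 2: {'a'}}
```

In the demo, augmenting along the path $(a,b)$ gives $a$ to agent 2 and compensates agent 1 with $b$ from the unallocated pile: agent 2 gains one unit of value while no other agent in $N$ loses any, exactly the behaviour Lemma 2.2 guarantees.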
For example, a popular fairness objective is the Nash welfare defined as $\prod_{i\in N}(v_{i}(X_{i}))$. In this case, the output of $\psi$ is a real valued number equal to $\prod_{i\in N}(v_{i}(X_{i}))$. The goal commonly associated with Nash welfare is to maximize it, i.e., to maximize $\psi$. For ease of readability, in this section we only define a few necessary terms from the fair division literature; specific fairness objectives are defined as and when required. Definition 2.4. An allocation $X$ is said to maximize utilitarian social welfare (denoted by MAX-USW) if it maximizes $\sum_{i\in N}v_{i}(X_{i})$. Definition 2.5. Let $\vec{x},\vec{y}\in\mathbb{R}^{c}$ be two vectors for some positive integer $c$. $\vec{x}$ is said to lexicographically dominate $\vec{y}$ if there exists a $k\in[c]$ such that for all $j\in[k-1]$, we have $\vec{x}_{j}=\vec{y}_{j}$ and we have $\vec{x}_{k}>\vec{y}_{k}$. A real valued vector $\vec{x}$ is lexicographically dominating with respect to a set of vectors $V$ if there exists no $\vec{y}\in V$ which lexicographically dominates $\vec{x}$. This definition can be extended to allocations as well. An allocation $X$ is said to lexicographically dominate an allocation $Y$ if the utility vector of $X$ lexicographically dominates the utility vector of the allocation $Y$. Similarly, an allocation $X$ is lexicographically dominating with respect to a set of allocations $\mathcal{V}$ if there exists no $Y\in\mathcal{V}$ which lexicographically dominates $X$. Definition 2.6. An allocation $X$ is said to Pareto dominate another allocation $Y$ if for all $h\in N$, we have $v_{h}(X_{h})\geq v_{h}(Y_{h})$ with the inequality being strict for at least one $h\in N$. 3 General Yankee Swap In this section, we present our framework to compute fair allocations. Our framework is a generalization of the Yankee Swap algorithm by Viswanathan and Zick (2022). In the Yankee Swap algorithm, all goods are initially unallocated.
At every round, the agent with the least utility picks a good they like from the unallocated pile or initiates a transfer path, where they steal a good they like from another agent, who then steals a good they like from another agent, and so on, until an agent finally takes a good they like from the unallocated pile of goods. If there is no such path to the unallocated pile of goods, the agent is removed from the game (denoted by their removal from the set $P$). We terminate once there are no agents playing. We now present General Yankee Swap (Algorithm 1). We make only one change in the implementation: instead of picking the least utility agent at every round, we pick an agent that maximizes a general gain function $\phi$. $\phi$ takes as input a non-redundant allocation (the partial allocation we have so far) and an index $i\in N$; its output is a $b$-dimensional vector. When $b>1$, $\phi(X,i)>\phi(X,j)$ if $\phi(X,i)$ lexicographically dominates $\phi(X,j)$. We break ties by choosing the agent with the least index. The gain function $\phi$ depends on the fairness objective we optimize. The original Yankee Swap is a specific case of the general Yankee Swap where $\phi(X,i)=\frac{1}{v_{i}(X_{i})}$. We make an additional change in notation to the Yankee Swap in Viswanathan and Zick (2022). We replace transfer path with the shortest path in the exchange graph. Viswanathan and Zick (2022) show that a transfer path is equivalent to the shortest path in the exchange graph, so this does not affect the final output of the algorithm. General Yankee Swap works when the fairness objective $\psi:\mathbb{R}^{n}\mapsto\mathcal{Y}$ has the following properties: (C1) – Pareto Dominance: $\psi(X)\leq\psi(Y)$ whenever $Y$ Pareto dominates $X$. (C2) – Gain Function: $\psi$ admits a gain function $\phi$ that maps each non-redundant allocation and agent $i\in N$ to a real-valued $b$-dimensional vector with the following properties: (G1) Let $X$ be a non-redundant allocation.
Let $Y_{1}$ be the non-redundant allocation resulting from giving the good $g$ to $i$ under $X$, such that $\Delta_{i}(X_{i},g)=1$. Similarly, let $Y_{2}$ be the non-redundant allocation resulting from giving the good $g$ to $j$ under $X$ such that $\Delta_{j}(X_{j},g)=1$. Then, if $\phi(X,i)\geq\phi(X,j)$, $\psi(Y_{1})\geq\psi(Y_{2})$. Equality holds if and only if $\phi(X,j)=\phi(X,i)$. (G2) For any non-redundant allocation $X$, $\phi(X,i)$ is a function of $|X_{i}|$ such that for any two non-redundant allocations $X$ and $Y$, if $|X_{i}|\leq|Y_{i}|$, then $\phi(X,i)\geq\phi(Y,i)$ with equality holding if $|X_{i}|=|Y_{i}|$. Intuitively, $\phi$ can be thought of as a function describing the marginal ‘gain’ of giving a good to agent $i$ given some allocation $X$. The higher the value of $\phi(X,i)$, the more valuable it is to add a good to the bundle $X_{i}$ for some allocation $X$. General Yankee Swap outputs a non-redundant $\psi$ maximizing allocation which is also welfare maximizing. Moreover, among all allocations that maximize $\psi$, the output of General Yankee Swap is lexicographically dominating. This is a stronger statement and is required to show strategyproofness — a technique used by Halpern et al. (2020) and Suksompong and Teh (2022). While we use the same idea and notation of Viswanathan and Zick (2022, Theorem 2) in our proof, our arguments are more general. Theorem 3.1. Let $\psi$ be a fairness objective that satisfies (C1) and (C2) with a gain function $\phi$. When agents have matroid rank valuations, General Yankee Swap with input $\phi$ maximizes $\psi$. Moreover, among all the allocations which maximize $\psi$, the allocation output by General Yankee Swap is lexicographically dominating. Proof. It is easy to show that Algorithm 1 always terminates: at every iteration, we either reduce the number of unallocated goods or remove some agent from $P$ while not changing the number of unallocated goods. Let $X$ be the output of Algorithm 1.
Note that $X$ is guaranteed to be non-redundant at every iteration. We start with the non-redundant empty allocation, and using Lemma 2.2, non-redundancy is maintained after every transfer. We next show that $X$ maximizes $\psi$ and among all the allocations which maximize $\psi$, $X$ is lexicographically dominating. Let $Y$ be a non-redundant allocation that maximizes $\psi$ — such an allocation is guaranteed to exist since $\mathcal{Y}$ induces a total order over allocations, and every allocation can be made non-redundant (Lemma 2.1). If there are multiple such $Y$, pick one that lexicographically dominates all others. If for all $h\in N$, $v_{h}(X_{h})\geq v_{h}(Y_{h})$, then $X$ maximizes $\psi$ (using (C1)) and is lexicographically dominating — we are done. Assume for contradiction that this does not hold. Let $i\in N$ be the agent with highest $\phi(X,i)$ in $X$ such that $v_{i}(X_{i})<v_{i}(Y_{i})$; if there are multiple, break ties in favor of the lowest index agent. Let $W$ be the non-redundant allocation at the start of the iteration where $i$ was removed from $P$. Note that in this iteration, $i$ was the agent with highest $\phi(W,i)$ among all the agents in $P$. We use $t$ to denote this iteration. We have the following lemma. Lemma 3.2. For all $h\in N$, $|Y_{h}|\geq|W_{h}|$ Proof. Assume for contradiction that this is not true. Let $j\in N$ be the agent with highest $\phi(Y,j)$ such that $|Y_{j}|<|W_{j}|$; if there are multiple, break ties in favor of the one with the least index. Consider the bundle $W_{j}$. Let $W^{\prime}$ be the allocation when $j$ moved from a bundle of size $|W_{j}|-1$ to $|W_{j}|$. When $j$ moved from a bundle of size $|W_{j}|-1$ to $|W_{j}|$, it was the agent with maximum $\phi(W^{\prime},j)$ within $P$. Combining this with the fact that bundle sizes for each agent monotonically increase and that agents never return to $P$ once removed, we have $\phi(Y,j)\geq\phi(W^{\prime},j)\geq\phi(W^{\prime},i)\geq\phi(W,i)$. 
The first and third inequalities hold due to (G2). If $\phi(Y,j)=\phi(W^{\prime},j)=\phi(W^{\prime},i)=\phi(W,i)$, then $j<i$ since the General Yankee Swap breaks final ties using the index of the agent and $j$ was chosen at $W^{\prime}$ instead of $i$. Therefore, we have Observation 3.3. $\phi(Y,j)\geq\phi(W,i)$. If equality holds, then $j<i$. Invoking Lemma 2.3 with allocations $Y$ and $W$ and agent $j$, there must be a path from $F_{j}(Y)$ to $Y_{k}$ for some $k\in N+0$ where $|Y_{k}|>|W_{k}|$ in the exchange graph of $Y$. Invoking Lemma 2.2, transferring goods along the shortest path from $F_{j}(Y)$ to $Y_{k}$ results in a non-redundant allocation $Z$ where $|Z_{j}|=|Y_{j}|+1$, $|Z_{k}|=|Y_{k}|-1$ and for all other agents, the bundle size remains the same. If $k=0$, we are done since $Z$ Pareto dominates $Y$. Therefore, using (C1), $\psi(Y)\leq\psi(Z)$. Combining this with the fact that Pareto dominance implies lexicographic dominance, we contradict our assumption on $Y$. We therefore assume $k\neq 0$. Consider $\phi(W,k)$. If $\phi(W,k)>\phi(W,i)$, since $i$ was chosen as the agent with highest $\phi(W,i)$ among the agents in $P$ at iteration $t$, we must have that $k\notin P$ at iteration $t$. This gives us $v_{k}(X_{k})=v_{k}(W_{k})<v_{k}(Y_{k})$. Combining this with our initial assumption that $\phi(W,k)>\phi(W,i)$, we get $\phi(X,k)=\phi(W,k)>\phi(W,i)\geq\phi(X,i)$ using (G2). This contradicts our choice of $i$: since $\phi(X,k)>\phi(X,i)$ and $k$ also satisfies $|X_{k}|<|Y_{k}|$, $i$ is not the agent with the highest $\phi(X,i)$ such that $|X_{i}|<|Y_{i}|$. This gives us the following observation (combined with Observation 3.3). Observation 3.4. $\phi(W,k)\leq\phi(W,i)\leq\phi(Y,j)$. Let $Y^{\prime}$ be the non-redundant allocation obtained by starting at $Y$ and removing any good from $Y_{k}$. Note that, to show $\psi(Z)\geq\psi(Y)$, it suffices to show that $\phi(Y^{\prime},j)\geq\phi(Y^{\prime},k)$ (using (G1)).
From Observation 3.4 and (G2), we have $\phi(Y^{\prime},k)\leq\phi(W,k)\leq\phi(W,i)\leq\phi(Y,j)=\phi(Y^{\prime},j)$. If any of these inequalities are strict, we have $\phi(Y^{\prime},j)>\phi(Y^{\prime},k)$ which implies $\psi(Z)>\psi(Y)$ — a contradiction. This gives us the following observation. Observation 3.5. $\phi(Y^{\prime},k)=\phi(W,k)=\phi(W,i)=\phi(Y,j)=\phi(Y^{\prime},j)$ Since $\phi(W,k)=\phi(W,i)$ and the algorithm picked $i$ at iteration $t$, we must have $i<k$. Assume for contradiction that this is not true. If $k\in P$ at iteration $t$, the algorithm would have picked $k$ instead of $i$, a contradiction. If $k\notin P$, then $\phi(X,k)=\phi(W,k)=\phi(W,i)\geq\phi(X,i)$ using (G2). Combining this with $i>k$ contradicts our choice of $i$. Therefore we have, $i<k$. Combining this with Observation 3.3, we have: Observation 3.6. $j<i<k$ Combining Observations 3.5 and 3.6, we get that $\phi(Y^{\prime},k)=\phi(Y,j)=\phi(Y^{\prime},j)$ using (G2). Therefore $Z$ maximizes $\psi$ using (G1). However, since $j<k$, $Z$ lexicographically dominates $Y$. This is because, all agents $h<j$ receive the same value in both allocations and $v_{j}(Z_{j})>v_{j}(Y_{j})$. This contradicts our assumption on $Y$ and completes the proof. ∎ Using Lemma 3.2, we construct a path from $i$ to $0$ in $W$. We create a new non-redundant allocation $Z$ where each $Z_{h}$ is an arbitrary $|W_{h}|$ sized subset of $Y_{h}$ for all $h\in N-i$. This can be done thanks to Lemma 3.2. $Z_{i}=Y_{i}$ and $Z_{0}$ is defined as all the goods not in any $Z_{h}$ for $h\in N$. Note that $|W_{i}|<|Z_{i}|$, $|W_{0}|>|Z_{0}|$ and $|W_{h}|=|Z_{h}|$ for all $h\in N-i$. Using Lemma 2.3 with the allocations $W$ and $Z$ and the agent $i$, we get that there is a path from $i$ to $0$ in $W$ — $0$ is the only agent in $S^{+}$. This is a contradiction since we chose the start of the iteration where $i$ was removed from $P$, indicating that there was no path from $i$ to $0$ in $W$. 
∎ Due to the similarity of our algorithm with that of Viswanathan and Zick (2022), their time complexity result applies to our algorithm as well. Theorem 3.7. The General Yankee Swap algorithm runs in $O([m^{2}(n+\tau)+n(b+T_{\phi}(n,m))](m+n))$ time, where $\tau$ upper bounds the complexity of computing the value of any bundle of goods for any agent and $T_{\phi}(n,m)$ is the complexity of computing $\phi$. Proof. Our proof uses the arguments of Viswanathan and Zick (2022) who show that the Yankee Swap algorithm runs in $O(m^{2}(n+\tau)(m+n))$ time. We make three observations. First, the algorithm runs for at most $(m+n)$ iterations. At each round either $|X_{0}|$ reduces by $1$ or an agent is removed from $P$. $X_{0}$ monotonically decreases and agents do not return to $P$; therefore we can only have at most $m+n$ iterations. Second, constructing the exchange graph, checking for a path and transferring goods along the path can be done in $O(m^{2}(n+\tau))$ time. This is shown by Viswanathan and Zick (2022, Section 3.4). Finally, finding $i$ involves computing $\phi$ for each agent and then comparing the results. Since the output of $\phi$ has $b$ components, each comparison takes at most $O(b)$ time, so finding $i$ takes $O(n(b+T_{\phi}(n,m)))$ time. Combining these three observations we get the required time complexity. ∎ Remark 3.8. As we shall see in the coming sections, $b+T_{\phi}(n,m)$ is usually $O(1)$. We also show that, for any $\phi$, the output of the General Yankee Swap is always MAX-USW. Proposition 3.9. For any $\phi$, the output of the General Yankee Swap is MAX-USW. Proof. Let $X$ be the non-redundant allocation output by General Yankee Swap. Assume for contradiction that $X$ is not MAX-USW. Let $Y$ be a non-redundant MAX-USW allocation which minimizes $\sum_{h\in N}|v_{h}(X_{h})-v_{h}(Y_{h})|$. If $|X_{h}|\leq|Y_{h}|$ for all $h\in N$, there must be at least one agent $i$ such that $|X_{i}|<|Y_{i}|$.
Consider the allocation $W$ at the start of the iteration where $i$ was removed from $P$. Create a new allocation $Z$ where each $Z_{h}$ is an arbitrary $|W_{h}|$ sized subset of $Y_{h}$ for all $h\in N-i$. This can be done since $|W_{h}|\leq|X_{h}|\leq|Y_{h}|$ by assumption. $Z_{i}=Y_{i}$ and $Z_{0}$ is defined as all the goods not in any $Z_{h}$ for $h\in N$. Note that $|W_{i}|<|Z_{i}|$, $|W_{0}|>|Z_{0}|$ and $|W_{h}|=|Z_{h}|$ for all $h\in N-i$. Using Lemma 2.3 with the allocations $W$ and $Z$ and the agent $i$, we get that there is a path from $i$ to $0$ in $W$ — $0$ is the only agent in $S^{+}$. This is a contradiction since we chose the start of the iteration where $i$ was removed from $P$ indicating that there was no path from $i$ to $0$ in $W$. Therefore, we must have at least one agent $j$ such that $|Y_{j}|<|X_{j}|$. Applying Lemma 2.3 with allocations $X$, $Y$ and the agent $j$, we get that there is a path from $j$ to some agent $k\in N+0$ in the exchange graph of $Y$ such that $|X_{k}|<|Y_{k}|$. Transferring goods along the shortest path from $j$ to $k$, using Lemma 2.2, leads to a non-redundant allocation $Z$ where $|Z_{j}|=|Y_{j}|+1$ and $|Z_{k}|=|Y_{k}|-1$. If $k=0$, $Z$ has a higher USW than $Y$ contradicting our assumption on $Y$. If $k\neq 0$, then $\sum_{h\in N}|v_{h}(X_{h})-v_{h}(Y_{h})|>\sum_{h\in N}|v_{h}(X_{h})-v_{h}(Z_{h})|$ and $Z$ is MAX-USW; again contradicting our assumption on $Y$. Therefore, $X$ is MAX-USW. ∎ 4 The Yankee Swap Mechanism In this section, we show that if preferences are elicited before running the General Yankee Swap, being truthful is the dominant strategy. A mechanism is said to be strategyproof if no agent can get a better outcome by reporting a false valuation function. We define the steps of the Yankee Swap Mechanism as follows: 1. Elicit the valuation function $v_{i}$ of each agent $i\in N$. If an agent’s valuation function is not an MRF, set the agent’s valuation of every bundle to $0$. 2.
Use General Yankee Swap to compute a non-redundant allocation that maximizes $\psi$ for the valuation profile $\{v_{i}\}_{i\in N}$. Before we show the final result, we prove some useful lemmas. Our proof uses the same ideas as the strategyproofness result in Babaioff et al. (2021a, Theorem 5). Given a set $T\subseteq G$, we define the function $f_{T}:2^{G}\rightarrow\mathbb{R}^{+}$ as $f_{T}(S)=|S\cap T|$. Note that for any $T$, $f_{T}$ is an MRF. Lemma 4.1. Let $X$ be the output allocation of the Yankee Swap mechanism with valid input gain function $\phi$ and valuation profile $\{v_{i}\}_{i\in N}$. For some agent $i\in N$, replace $v_{i}$ with some $f_{T}$ such that $T\subseteq X_{i}$ and run the mechanism again to get an allocation $Y$. We must have $Y_{i}=T$. Proof. Since the allocation $Y$ is non-redundant, we have that $Y_{i}\subseteq T$. Assume for contradiction that $Y_{i}\neq T$. Define an allocation $Z$ as $Z_{h}=X_{h}$ for all $h\in N-i$ and $Z_{i}=T$; allocate the remaining goods in $Z$ to $Z_{0}$. Note that both $Y$ and $Z$ are non-redundant under both valuation profiles (with the old $v_{i}$ and the new valuation function $f_{T}$). Let us compare $Y$ and $Z$. Let $p\in N$ be the agent with highest $\phi(Y,p)$ such that $|Y_{p}|<|Z_{p}|$; choose the agent with the least $p$ if there are ties. Such an element is guaranteed to exist since $|Y_{i}|<|Z_{i}|$. If there exists no $q\in N$ such that $|Y_{q}|>|Z_{q}|$, we must have $|Y_{0}|>|Z_{0}|$ (since $|Y_{i}|<|Z_{i}|$). Using Lemma 2.3 with agent $p$ and the allocations $Y$ and $Z$, we get that there is a path from $p$ to $0$ in the exchange graph of $Y$. Transferring goods along the shortest path results in an allocation with a higher USW than $Y$ under the new valuation profile contradicting the fact that $Y$ is MAX-USW. Let $q\in N$ be the agent with highest $\phi(Z,q)$ such that $|Y_{q}|>|Z_{q}|$; break ties by choosing the least $q$. 
Further, note that since $Z$ and $X$ only differ in $i$’s bundle and $|Y_{i}|<|Z_{i}|$, we must have $|X_{q}|=|Z_{q}|$. Consider two cases: (i) $\phi(X,q)>\phi(Y,p)$; (ii) $\phi(X,q)=\phi(Y,p)$ and $q<p$. If either of these two cases occurs, then invoking Lemma 2.3 with allocations $X$, $Y$ and the agent $q$, there exists a transfer path from $q$ to some agent $k$ in the exchange graph of $X$ (w.r.t. the old valuations) where $|Y_{k}|<|X_{k}|$. Transferring along the shortest such path gives us a non-redundant allocation $X^{\prime}$ where $|X^{\prime}_{q}|=|X_{q}|+1$ and $|X^{\prime}_{k}|=|X_{k}|-1$ (Lemma 2.2). Let $X^{\prime\prime}$ be the allocation obtained by starting at $X^{\prime}$ and removing one good from $X^{\prime}_{q}$. If $\phi(X^{\prime\prime},q)>\phi(X^{\prime\prime},k)$, then $\psi(X^{\prime})>\psi(X)$ (using (G1)), contradicting our assumption on $X$. If $k=0$, we improve USW contradicting the fact that $X$ is MAX-USW with respect to the original valuations $\{v\}_{h\in N}$. For case (i): If $k\neq 0$, we have $\phi(X^{\prime\prime},q)=\phi(X,q)>\phi(Y,p)\geq\phi(Y,k)\geq\phi(X^{\prime\prime},k)$ (using (G1) and (G2)). Therefore $\phi(X^{\prime\prime},q)>\phi(X^{\prime\prime},k)$ and $X$ does not maximize $\psi$ — a contradiction. For case (ii): If $k\neq 0$, we have $\phi(X^{\prime\prime},q)=\phi(X,q)=\phi(Y,p)\geq\phi(Y,k)\geq\phi(X^{\prime\prime},k)$. If any of these weak inequalities are strict, we can use analysis similar to that of case (i) to show that $X$ does not maximize $\psi$. Therefore, all the weak inequalities must be equalities and we must have $\phi(X^{\prime\prime},q)=\phi(X,q)=\phi(Y,p)=\phi(Y,k)=\phi(X^{\prime\prime},k)$. This implies that $\psi(X)=\psi(X^{\prime})$ using (G1). Moreover, by our choice of $p$ we have $p<k$ and by assumption, we have $q<p$. Combining the two, this gives us $q<k$. Therefore, $X^{\prime}$ lexicographically dominates $X$ — a contradiction to Theorem 3.1.
Let us now move on to the remaining two possible cases: (iii) $\phi(Z,q)<\phi(Y,p)$; (iv) $\phi(Z,q)=\phi(Y,p)$ and $q>p$. Recall that both $Y$ and $Z$ are non-redundant with respect to the new valuation profile (with $f_{T}$). If any of the above two conditions occurs, then invoking Lemma 2.3 with allocations $Y$, $Z$ and the agent $p$, there exists a transfer path from $p$ to some agent $l$ in the exchange graph of $Y$ where $|Y_{l}|>|Z_{l}|$. Transferring along the shortest such path gives us a non-redundant allocation $Y^{\prime}$ where $|Y^{\prime}_{p}|=|Y_{p}|+1$ and $|Y^{\prime}_{l}|=|Y_{l}|-1$ (Lemma 2.2). Let $Y^{\prime\prime}$ be the allocation obtained by starting at $Y^{\prime}$ and removing one good from $Y^{\prime}_{p}$. If $\phi(Y^{\prime\prime},p)>\phi(Y^{\prime\prime},l)$, then $\psi(Y^{\prime})>\psi(Y)$, contradicting our assumption on $Y$. If $l=0$, we improve USW, contradicting the fact that $Y$ is MAX-USW with respect to the new valuation $f_{T}$. For case (iii): If $l\neq 0$, we have $\phi(Y^{\prime\prime},p)=\phi(Y,p)>\phi(Z,q)\geq\phi(Z,l)\geq\phi(Y^{\prime\prime},l)$ (using (G1) and (G2)). Therefore $\phi(Y^{\prime\prime},p)>\phi(Y^{\prime\prime},l)$ and $Y$ does not maximize $\psi$ — a contradiction. For case (iv): If $l\neq 0$, we have $\phi(Y^{\prime\prime},p)=\phi(Y,p)=\phi(Z,q)\geq\phi(Z,l)\geq\phi(Y^{\prime\prime},l)$. If any of the weak inequalities are strict, we can use analysis similar to that of case (iii) to show that $Y$ does not maximize $\psi$. Therefore, all the weak inequalities must be equalities and we must have $\phi(Y^{\prime\prime},p)=\phi(Y,p)=\phi(Z,q)=\phi(Z,l)=\phi(Y^{\prime\prime},l)$. This implies that $\psi(Y)=\psi(Y^{\prime})$ using (G1). Moreover, by our choice of $q$ we have $q<l$ and by assumption, we have $p<q$. Combining the two, this gives us $p<l$. Therefore, $Y^{\prime}$ lexicographically dominates $Y$ — a contradiction to Theorem 3.1. Since $|Z_{q}|=|X_{q}|$, $\phi(Z,q)=\phi(X,q)$. Therefore, cases (i)–(iv) cover all possible cases.
Each of the cases leads to a contradiction. Therefore, our proof is complete and $Y_{i}=T$. ∎ Lemma 4.2. Let $X$ be the output allocation of the Yankee Swap mechanism with valid input gain function $\phi$ and valuation profile $\{v_{i}\}_{i\in N}$. For some agent $i\in N$, replace $v_{i}$ with some $v^{\prime}_{i}$ such that $v^{\prime}_{i}(S)\geq v_{i}(S)$ for all $S\subseteq G$ and run the mechanism again to get an allocation $Y$. We must have $|Y_{i}|\geq|X_{i}|$. Proof. This proof is very similar to that of Lemma 4.1. Define the new valuation functions by $\{v^{\prime}_{j}\}_{j\in N}$ where $v^{\prime}_{j}=v_{j}$ for all $j\in N-i$. To prevent any ambiguity, whenever we discuss the $\psi$ value of $X$, we implicitly discuss it with respect to the valuations $v$. Similarly, whenever we discuss the $\psi$ value of $Y$ we implicitly discuss it with respect to the valuations $v^{\prime}$. Assume for contradiction that $|Y_{i}|<|X_{i}|$. Let $T$ be a subset of $Y_{i}$ such that $|T|=v_{i}(T)=v_{i}(Y_{i})$ (such a set exists using an argument similar to Lemma 2.1). Define an allocation $Z$ as $Z_{h}=Y_{h}$ for all $h\in N-i$ and $Z_{i}=T$; allocate the remaining goods in $Z$ to $Z_{0}$. Note that both $X$ and $Z$ are non-redundant under both valuation profiles. This implies for all $j\in N$, $v^{\prime}_{j}(X_{j})=v_{j}(X_{j})=|X_{j}|$ and $v^{\prime}_{j}(Z_{j})=v_{j}(Z_{j})=|Z_{j}|$. Let us compare $X$ and $Y$. Let $p\in N$ be the agent with highest $\phi(Y,p)$ such that $|Y_{p}|<|X_{p}|$; break ties by choosing the least $p$. Such an agent is guaranteed to exist since $|Y_{i}|<|X_{i}|$. If there exists no $q\in N$ such that $|Y_{q}|>|X_{q}|$, we must have $|Y_{0}|>|X_{0}|$ (since $|X_{i}|>|Y_{i}|$). Using Lemma 2.3 with agent $p$ and the allocations $Y$ and $X$, we get that there is a path from $p$ to $0$ in the exchange graph of $Y$. Transferring goods along the shortest such path results in an allocation with a higher USW than $Y$, contradicting the fact that $Y$ is MAX-USW.
Let $q\in N$ be the agent with highest $\phi(X,q)$ such that $|Y_{q}|>|X_{q}|$. Break ties by choosing the least $q$. Note that since $Z$ and $Y$ only differ in $i$’s bundle and $|Y_{i}|<|X_{i}|$, we must have $|Y_{q}|=|Z_{q}|$. Therefore, $|Z_{q}|>|X_{q}|$ as well. Consider two cases: (i) $\phi(X,q)>\phi(Y,p)$; (ii) $\phi(X,q)=\phi(Y,p)$ and $q<p$. If any of the above two conditions occurs, then invoking Lemma 2.3 with allocations $X$, $Z$ and the agent $q$, there exists a transfer path from $q$ to some agent $k$ where $|Z_{k}|<|X_{k}|$. Transferring along the shortest such path gives us a non-redundant (w.r.t. the valuation profile $v$) allocation $X^{\prime}$ where $|X^{\prime}_{q}|=|X_{q}|+1$ and $|X^{\prime}_{k}|=|X_{k}|-1$ (Lemma 2.2). Since $v^{\prime}_{j}(X^{\prime}_{j})\geq v_{j}(X^{\prime}_{j})$ for all $j\in N$, we must have that $X^{\prime}$ is non-redundant with respect to both valuation profiles $v$ and $v^{\prime}$. By our definition of $Z$, if $k\in N-i$, we have $|X_{k}|>|Z_{k}|=|Y_{k}|$ and if $k=i$, we have $|X_{i}|>|Y_{i}|$. Therefore, we have $|X_{k}|>|Y_{k}|$. Let $X^{\prime\prime}$ be the allocation obtained by starting at $X$ and removing a good from $k$. From (G1), we have that if $\phi(X^{\prime\prime},k)\leq\phi(X^{\prime\prime},q)$, we have $\psi(X)\leq\psi(X^{\prime})$ (with respect to the valuations $v$) with equality holding if and only if $\phi(X^{\prime\prime},k)=\phi(X^{\prime\prime},q)$. $X,X^{\prime},X^{\prime\prime}$ and $Y$ are non-redundant with respect to the new valuation function profile $v^{\prime}$. We can therefore compare their $\phi$ values directly. If $k=0$, we improve USW (w.r.t. the valuation profile $v$), contradicting the fact that $X$ is MAX-USW. For case (i): If $k\neq 0$, we have $\phi(X^{\prime\prime},k)=\phi(X^{\prime},k)\leq\phi(Y,k)\leq\phi(Y,p)<\phi(X,q)=\phi(X^{\prime\prime},q)$. This gives us $\phi(X^{\prime\prime},k)<\phi(X^{\prime\prime},q)$: a contradiction.
For case (ii): If $k\neq 0$, we have $\phi(X^{\prime\prime},k)\leq\phi(Y,k)\leq\phi(Y,p)=\phi(X,q)=\phi(X^{\prime\prime},q)$. If any of these weak inequalities are strict, we can use analysis similar to that of case (i) to show that $X$ does not maximize $\psi$. Therefore, all the weak inequalities must be equalities and we must have $\phi(X^{\prime\prime},k)=\phi(Y,k)=\phi(Y,p)=\phi(X,q)=\phi(X^{\prime\prime},q)$. Since $\phi(X^{\prime\prime},k)=\phi(X^{\prime\prime},q)$, we have $\psi(X^{\prime})=\psi(X)$. Moreover, by our choice of $p$ we have $p<k$ and by assumption, we have $q<p$. Combining the two, this gives us $q<k$. Therefore, $X^{\prime}$ lexicographically dominates $X$ — a contradiction to Theorem 3.1. Let us now move on to the remaining two possible cases: (iii) $\phi(X,q)<\phi(Y,p)$; (iv) $\phi(X,q)=\phi(Y,p)$ and $q>p$. Recall that $X$ and $Y$ are non-redundant with respect to the new valuation functions $v^{\prime}$. If any of the above two conditions occurs, then invoking Lemma 2.3 with allocations $Y$, $X$ and the agent $p$, there exists a transfer path from $p$ to some agent $l$ in the exchange graph of $Y$ where $|Y_{l}|>|X_{l}|$. Transferring along the shortest such path gives us a non-redundant allocation $Y^{\prime}$ (according to the valuation $v^{\prime}$) where $|Y^{\prime}_{p}|=|Y_{p}|+1$ and $|Y^{\prime}_{l}|=|Y_{l}|-1$ (Lemma 2.2). Let $Y^{\prime\prime}$ be the allocation obtained by starting at $Y$ and removing a good from $l$. From (G1), we have that if $\phi(Y^{\prime\prime},l)\leq\phi(Y^{\prime\prime},p)$, we have $\psi(Y)\leq\psi(Y^{\prime})$ (with respect to the valuations $v^{\prime}$) with equality holding if and only if $\phi(Y^{\prime\prime},l)=\phi(Y^{\prime\prime},p)$. If $l=0$, we improve USW (according to the valuations $v^{\prime}$), contradicting the fact that $Y$ is MAX-USW. For case (iii): If $l\neq 0$, we have $\phi(Y^{\prime\prime},l)\leq\phi(X,l)\leq\phi(X,q)<\phi(Y,p)=\phi(Y^{\prime\prime},p)$.
Chaining these inequalities, we get that $\phi(Y^{\prime\prime},l)<\phi(Y^{\prime\prime},p)$: a contradiction. For case (iv): If $l\neq 0$, we have $\phi(Y^{\prime\prime},l)\leq\phi(X,l)\leq\phi(X,q)=\phi(Y,p)=\phi(Y^{\prime\prime},p)$. If any of these weak inequalities are strict, we can use analysis similar to that of case (iii) to show that $Y$ does not maximize $\psi$. Therefore, all the weak inequalities must be equalities and we must have $\phi(Y^{\prime\prime},l)=\phi(X,l)=\phi(X,q)=\phi(Y,p)=\phi(Y^{\prime\prime},p)$. This implies that $\psi(Y^{\prime})=\psi(Y)$. Moreover, by our choice of $q$ we have $q<l$ and by assumption, we have $p<q$. Combining the two, this gives us $p<l$. Therefore, $Y^{\prime}$ lexicographically dominates $Y$ — a contradiction to Theorem 3.1. Since cases (i)–(iv) cover all possible cases, our proof is complete. ∎ We are now ready to show strategyproofness. Theorem 4.3. When agents have matroid rank valuations, the Yankee Swap mechanism is strategyproof. Proof. Assume an agent $i$ reports $v^{\prime}_{i}$ instead of their true valuation $v_{i}$ to generate the allocation $X^{\prime}$ via the Yankee Swap mechanism. Let $X$ be the allocation generated by the mechanism had they reported their true valuation $v_{i}$. We need to show that $v_{i}(X_{i})\geq v_{i}(X^{\prime}_{i})$. We can assume w.l.o.g. that $v^{\prime}_{i}$ is an MRF; otherwise, $i$ gets nothing and $v_{i}(X^{\prime}_{i})=0$. Let $B$ be a subset of $X^{\prime}_{i}$ such that $|B|=v_{i}(B)=v_{i}(X^{\prime}_{i})$ — such a set can be constructed using an argument similar to Lemma 2.1. Using Lemma 4.1, we get that replacing $v_{i}$ with $f_{B}$ will result in an allocation $Y$ where $Y_{i}=B$. Using Lemma 4.2, we get that replacing $f_{B}$ with $v_{i}$ gives us the allocation $X$ and the guarantee $v_{i}(X_{i})=|X_{i}|\geq|B|=v_{i}(B)$.
Note that we can apply Lemma 4.2 since by construction we have $v_{i}(S)\geq v_{i}(S\cap B)=|S\cap B|=f_{B}(S)$ for all $S$ such that $S\cap B\neq\emptyset$ and $v_{i}(S)\geq 0=f_{B}(S)$ otherwise. Since $v_{i}(B)=v_{i}(X^{\prime}_{i})$, the proof is complete. ∎ 5 Applying General Yankee Swap In this section, we show how Yankee Swap can be applied to optimize commonly used fairness metrics. The key challenge here is to encode fairness properties using a valid fairness metric $\psi$ such that a straightforward gain function $\phi$ exists. This section also showcases how simple the problem of optimizing fairness objectives becomes when using Yankee Swap. While we do not prove this explicitly, note that in each case $b+T_{\phi}(n,m)=O(1)$. 5.1 Prioritized Lorenz Dominating Allocations As a sanity check, we first show how General Yankee Swap computes prioritized Lorenz dominating allocations. An allocation $X$ is Lorenz dominating if for all allocations $Y$ and for any $k\in[n]$, the sum of the utilities of the $k$ agents with least utility in $X$ is at least the sum of the utilities of the $k$ agents with least utility in $Y$. An allocation $X$ is leximin if it maximizes the lowest utility and, subject to that, maximizes the second lowest utility, and so on. Both these metrics can be formalized using the sorted utility vector. The sorted utility vector of an allocation $X$ (denoted by $\vec{s}^{X}$) is defined as the utility vector $\vec{u}^{X}$ sorted in ascending order. An allocation $X$ is Lorenz dominating if for all allocations $Y$ and all $k\in[n]$, we have $\sum_{j\in[k]}s_{j}^{X}\geq\sum_{j\in[k]}s_{j}^{Y}$. An allocation $X$ is leximin if the sorted utility vector of $X$ lexicographically dominates all other sorted utility vectors. A Lorenz dominating allocation is not guaranteed to exist, but when it does, it is equivalent to a leximin allocation (which is guaranteed to exist). This result holds for arbitrary valuation functions. Lemma 5.1.
When a Lorenz dominating allocation exists, an allocation is leximin if and only if it is Lorenz dominating. Proof. Let $Y$ be any Lorenz dominating allocation and let $X$ be any leximin allocation. Assume for contradiction that they do not have the same sorted utility vector. Let $k$ be the lowest index such that $s^{X}_{k}\neq s^{Y}_{k}$. If $s^{X}_{k}<s^{Y}_{k}$, then $Y$ lexicographically dominates $X$, contradicting the fact that $X$ is leximin. If $s^{X}_{k}>s^{Y}_{k}$, then $Y$ is not Lorenz dominating. Both branches lead to a contradiction, so no such $k$ exists: $s_{k}^{X}=s_{k}^{Y}$ for all $k$ and both allocations have the same sorted utility vector. This implies that $X$ is Lorenz dominating and $Y$ is leximin. ∎ Babaioff et al. (2021a) introduce and study the concept of prioritized Lorenz dominating allocations. Each agent is given a priority which is represented using a permutation $\pi:[n]\mapsto[n]$. When agents have MRF valuations $\{v_{i}\}_{i\in N}$, prioritized Lorenz dominating allocations are defined as Lorenz dominating allocations for the fair allocation instance where valuations are defined as $v^{\prime}_{i}(S)=v_{i}(S)+\frac{\pi(i)}{n^{2}}$. Babaioff et al. (2021a) show that when agents have MRF valuations, a prioritized Lorenz dominating allocation is guaranteed to exist and satisfies several desirable fairness properties such as being leximin, envy-free up to any good (EFX), and Nash welfare maximizing. Using Lemma 5.1, these prioritized Lorenz dominating allocations are equivalent to leximin allocations with respect to the valuations $v^{\prime}$. Given this, we have an obvious choice of fairness objective $\psi$. We define $\psi(X)$ as the sorted utility vector of $X$ with respect to the valuations $v^{\prime}$, with $\psi(X)>\psi(Y)$ if $\psi(X)$ lexicographically dominates $\psi(Y)$. Any leximin allocation with respect to the valuations $v^{\prime}$ maximizes $\psi(X)$. As Benabbou et al.
(2021) show, leximin allocations are also Pareto optimal, which implies that $\psi$ satisfies (C1) (Pareto Dominance). For (C2), we define the gain function $\phi(X,i)=(-|X_{i}|,-\pi(i))$ for any non-redundant allocation $X$ and agent $i$. Lemma 5.2. The function $\phi(X,i)=(-|X_{i}|,-\pi(i))$ is a valid gain function for the fairness objective $\psi(X)=\vec{s}^{X}$ with respect to the valuations $v^{\prime}$. Proof. This function trivially satisfies (G2), as the term $-|X_{i}|$ is a decreasing function of $|X_{i}|$. Let us next show that $\phi$ satisfies (G1). Let $X$ be a non-redundant allocation. Let $Y$ be the non-redundant allocation resulting from giving a good $g$ to $i$ under $X$, such that $\Delta(X,i)=1$. Let $Z$ be the non-redundant allocation resulting from giving a good $g$ to $j$ under $X$ such that $\Delta(X,j)=1$. We assume that $\phi(X,i)>\phi(X,j)$, and show that $\psi(Y)\geq\psi(Z)$. Note that since $\pi(i)\neq\pi(j)$, $\phi(X,i)$ can never equal $\phi(X,j)$. To show that $\phi$ satisfies (G1), we need the following lemma. Lemma 5.3. Let $r$ be any positive real-valued number. Let $Y$ and $Z$ be two non-redundant allocations such that at most two agents ($j,k\in N$) receive different utilities in the two allocations. If $\min\{v_{j}(Z_{j}),v_{k}(Z_{k})\}>r$ and $\min\{v_{j}(Y_{j}),v_{k}(Y_{k})\}\leq r$, then $\vec{s}^{Z}$ lexicographically dominates $\vec{s}^{Y}$. Proof. Since all the other elements of the sorted utility vector have the same value, it suffices to only compare the sorted utility vectors restricted to the agents $j$ and $k$. In other words, to decide lexicographic dominance, we need to only compare the sorted version of $(v_{j}(Z_{j}),v_{k}(Z_{k}))$ and $(v_{j}(Y_{j}),v_{k}(Y_{k}))$. By assumption, we have $\min\{v_{j}(Z_{j}),v_{k}(Z_{k})\}>\min\{v_{j}(Y_{j}),v_{k}(Y_{k})\}$. Therefore $\vec{s}^{Z}$ lexicographically dominates $\vec{s}^{Y}$. ∎ If $\phi(X,i)>\phi(X,j)$, one of the following two cases must be true.
Case 1: $|X_{i}|<|X_{j}|$. If this is true, we have $v^{\prime}_{j}(Y_{j})=|X_{j}|+\frac{\pi(j)}{n^{2}}>|X_{i}|+\frac{\pi(i)}{n^{2}}=v^{\prime}_{i}(X_{i})=v^{\prime}_{i}(Z_{i})$ since $\frac{\pi(i)}{n^{2}}-\frac{\pi(j)}{n^{2}}<1\leq|X_{j}|-|X_{i}|$. Therefore, invoking Lemma 5.3 with allocations $Y$ and $Z$ and $r=v^{\prime}_{i}(X_{i})$, we get that $\vec{s}^{Y}$ lexicographically dominates $\vec{s}^{Z}$, i.e., $\psi(Y)>\psi(Z)$. Case 2: $|X_{i}|=|X_{j}|$ and $\pi(i)<\pi(j)$. If this is true, we have $v^{\prime}_{j}(Y_{j})=|X_{j}|+\frac{\pi(j)}{n^{2}}>|X_{i}|+\frac{\pi(i)}{n^{2}}=v^{\prime}_{i}(X_{i})=v^{\prime}_{i}(Z_{i})$ by assumption. Therefore, invoking Lemma 5.3 with allocations $Y$ and $Z$ and $r=v^{\prime}_{i}(X_{i})$, we get that $\vec{s}^{Y}$ lexicographically dominates $\vec{s}^{Z}$, i.e., $\psi(Y)>\psi(Z)$. ∎ Invoking Lemma 5.2, we immediately obtain the following claim. Theorem 5.4. When agents have MRF valuations, General Yankee Swap with $\phi(X,i)=(-|X_{i}|,-\pi(i))$ computes prioritized Lorenz dominating allocations with respect to priority $\pi$. As a sanity check, this is the exact tie-breaking scheme used by Viswanathan and Zick (2022) in the original Yankee Swap to compute prioritized Lorenz dominating allocations. 5.2 Weighted Leximin Allocations Let us next consider the case where agents have entitlements. When each agent $i$ has a positive weight $w_{i}$, the weighted utility of an agent $i$ is defined as $\frac{v_{i}(X_{i})}{w_{i}}$. A weighted leximin allocation maximizes the least weighted utility and, subject to that, maximizes the second least weighted utility and so on. More formally, we define the weighted sorted utility vector of an allocation $X$ (denoted by $\vec{e}^{X}$) as $\left(\frac{v_{1}(X_{1})}{w_{1}},\frac{v_{2}(X_{2})}{w_{2}},\dots,\frac{v_{n}(X_{n})}{w_{n}}\right)$ sorted in ascending order. An allocation $X$ is weighted leximin if for no other allocation $Y$, $\vec{e}^{Y}$ lexicographically dominates $\vec{e}^{X}$.
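Every gain function in this section is a tuple compared lexicographically, so the agent-selection step of General Yankee Swap is easy to sketch in code. The following is an illustrative sketch (not the paper's implementation) using the prioritized-Lorenz gain function $\phi(X,i)=(-|X_{i}|,-\pi(i))$ from Section 5.1; the bundle sizes and priorities are made-up example data. Python tuples compare lexicographically, so `max` over the gain tuples directly picks the agent the framework would serve next.

```python
# Sketch of the agent-selection step of General Yankee Swap with the
# prioritized-Lorenz gain function phi(X, i) = (-|X_i|, -pi(i)).
# `bundle_sizes` and `priority` are invented example data.

def lorenz_gain(bundle_size: int, prio: int) -> tuple:
    """Gain tuple: prefer the agent with the smallest bundle,
    breaking ties in favour of higher priority (smaller pi(i))."""
    return (-bundle_size, -prio)

def next_agent(bundle_sizes: dict, priority: dict) -> int:
    """Return the agent with the lexicographically largest gain tuple."""
    return max(bundle_sizes,
               key=lambda i: lorenz_gain(bundle_sizes[i], priority[i]))

# Agent 2 has the smallest bundle, so it is served next.
bundle_sizes = {1: 3, 2: 1, 3: 2}
priority = {1: 1, 2: 2, 3: 3}
picked = next_agent(bundle_sizes, priority)
```

The same pattern works for the weighted leximin gain tuple $(-\frac{|X_{i}|}{w_{i}},-w_{i})$ defined below: only the tuple-building function changes.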
For the weighted leximin objective, we use $\psi(X)=\vec{e}^{X}$, with $\psi(X)>\psi(Y)$ if $\vec{e}^{X}$ lexicographically dominates $\vec{e}^{Y}$. It is easy to see that the allocation that maximizes $\psi$ is weighted leximin. It is also easy to see that weighted leximin trivially satisfies Pareto dominance (C1): any Pareto improvement increases the utility of one agent, while not decreasing the utility of any other agent, resulting in a strictly dominating sorted weighted utility vector. For (C2), we define the gain function as $\phi(X,i)=(-\frac{|X_{i}|}{w_{i}},-w_{i})$ for any non-redundant allocation $X$. Lemma 5.5. The function $\phi(X,i)=(-\frac{|X_{i}|}{w_{i}},-w_{i})$ is a valid gain function for the weighted leximin fairness objective. Proof. $\phi(X,i)$ clearly satisfies (G2). We use a proof technique similar to Lemma 5.2 to show that $\phi$ is a valid gain function. Lemma 5.6. Let $r$ be any positive real-valued number. Let $Y$ and $Z$ be two non-redundant allocations such that at most two agents ($j,k\in N$) receive different utilities in the two allocations. If $\min\{\frac{v_{j}(Z_{j})}{w_{j}},\frac{v_{k}(Z_{k})}{w_{k}}\}>r$ and $\min\{\frac{v_{j}(Y_{j})}{w_{j}},\frac{v_{k}(Y_{k})}{w_{k}}\}\leq r$, then $\vec{e}^{Z}$ lexicographically dominates $\vec{e}^{Y}$. Proof. Since all the other elements of the weighted sorted utility vector have the same value, it suffices to only compare the weighted sorted utility vectors restricted to the agents $j$ and $k$. In other words, to decide lexicographic dominance, we need to only compare the sorted version of $(\frac{v_{j}(Z_{j})}{w_{j}},\frac{v_{k}(Z_{k})}{w_{k}})$ and $(\frac{v_{j}(Y_{j})}{w_{j}},\frac{v_{k}(Y_{k})}{w_{k}})$. By assumption, we have $\min\{\frac{v_{j}(Z_{j})}{w_{j}},\frac{v_{k}(Z_{k})}{w_{k}}\}>\min\{\frac{v_{j}(Y_{j})}{w_{j}},\frac{v_{k}(Y_{k})}{w_{k}}\}$. Therefore $\vec{e}^{Z}$ lexicographically dominates $\vec{e}^{Y}$.
∎ Assume $\phi(X,i)>\phi(X,j)$ for some agents $i,j\in N$ and non-redundant allocation $X$. Let $Y$ be the allocation that results from adding one unit of utility to $i$ in $X$ and let $Z$ be the allocation that results from adding one unit of utility to $j$ in $X$. If $\phi(X,i)>\phi(X,j)$, then one of the following two cases must be true. Case 1: $\frac{|X_{i}|}{w_{i}}<\frac{|X_{j}|}{w_{j}}$. Invoking Lemma 5.6 with allocations $Z$ and $Y$ and $r=\frac{|X_{i}|}{w_{i}}$, we get that $\psi(Y)>\psi(Z)$. Case 2: $\frac{|X_{i}|}{w_{i}}=\frac{|X_{j}|}{w_{j}}$ and $w_{i}<w_{j}$. If this is true, we have $\frac{|Y_{j}|}{w_{j}}=\frac{|Z_{i}|}{w_{i}}$ by assumption. However, $\frac{|Y_{i}|}{w_{i}}=\frac{|X_{i}|+1}{w_{i}}>\frac{|X_{j}|+1}{w_{j}}=\frac{|Z_{j}|}{w_{j}}$. Since the two allocations differ only in the utilities allocated to $j$ and $i$, this implies that $\psi(Y)>\psi(Z)$. This implies that when $\phi(X,i)>\phi(X,j)$, we have $\psi(Y)>\psi(Z)$ as required. When $\phi(X,i)=\phi(X,j)$, we must have $\frac{|X_{i}|}{w_{i}}=\frac{|X_{j}|}{w_{j}}$ and $w_{i}=w_{j}$. This gives us $\frac{|Y_{j}|}{w_{j}}=\frac{|Z_{i}|}{w_{i}}$ and $\frac{|Y_{i}|}{w_{i}}=\frac{|X_{i}|+1}{w_{i}}=\frac{|X_{j}|+1}{w_{j}}=\frac{|Z_{j}|}{w_{j}}$ which implies that both $Y$ and $Z$ have the same weighted sorted utility vector. This implies that $\psi(Y)=\psi(Z)$. We conclude that $\phi$ satisfies (G1) as well. ∎ Since the weighted leximin justice criterion admits a valid gain function, we obtain the following theorem. Theorem 5.7. When agents have MRF valuations and each agent $i$ has a weight $w_{i}$, General Yankee Swap with $\phi(X,i)=(-\frac{|X_{i}|}{w_{i}},-w_{i})$ computes a weighted leximin allocation. 5.3 Individual Fair Share Allocations We now turn to justice criteria that guarantee each agent a minimum fair share amount. One such popular notion is the maxmin share.
An agent’s maxmin share (Budish, 2011) is defined as the utility an agent would receive if they divided the set of goods into $n$ bundles themselves and picked the worst bundle. More formally, the maxmin share of an agent $i$ (denoted by $\texttt{MMS}_{i}$) is defined as $\texttt{MMS}_{i}=\max_{X=(X_{1},X_{2},\dots,X_{n})}\min_{j\in[n]}v_{i}(X_{j})$. There are several other generalizations of the maxmin share popular in the literature (Farhadi et al., 2019; Babaioff et al., 2021b). All of these metrics have the same objective — each agent $i$ has an instance-dependent fair share $c_{i}\geq 0$ and the goal is to compute allocations that guarantee each agent a high fraction of their share (Procaccia and Wang, 2014; Ghodsi et al., 2018). We define the fair share fraction of an agent $i$ in an allocation $X$ as $\frac{v_{i}(X_{i})}{c_{i}}$ when $c_{i}>0$ and $0$ when $c_{i}=0$. When $c_{i}=0$, any bundle of goods (even the empty bundle) will provide $i$ its fair share; therefore, agents $i$ with $c_{i}=0$ can be ignored when allocating bundles. When agents have matroid rank valuations, Yankee Swap can be used to maximize the lowest fair share fraction received by an agent and, subject to that, maximize the second lowest fair share fraction and so on. Using a proof very similar to Theorem 5.7, we can show that the appropriate $\phi(X,i)$ to achieve such a fairness objective is defined as follows: $$\displaystyle\phi(X,i)=\begin{cases}(-\frac{|X_{i}|}{c_{i}},-c_{i})&c_{i}>0\\ (-M,-c_{i})&c_{i}=0\end{cases}$$ (1) where $M$ is a large positive number greater than any possible $\frac{|X_{i}|}{c_{i}}$. This can be seen as setting the weight of each agent $i$ to their share $c_{i}$ and computing a weighted leximin allocation. The only minor change we make is to take into account cases where $c_{i}$ could potentially be $0$: in such a case we give $\phi(X,i)$ the lowest possible value it can take, so we do not allocate any goods to these agents.
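A minimal sketch of the gain function in Eq. (1) (illustrative, not the paper's code): floating-point infinity stands in for the large constant $M$, so an agent with $c_{i}=0$ is never preferred over an agent with a positive share. The shares and bundle sizes below are invented example data.

```python
import math

def fair_share_gain(bundle_size: int, c: float) -> tuple:
    """Gain tuple from Eq. (1): agents with a positive share are ranked
    by (lowest fair share fraction first, then largest share); agents
    with c = 0 always rank last."""
    if c > 0:
        return (-bundle_size / c, -c)
    return (-math.inf, -c)  # stands in for (-M, -c) with M very large

def next_agent(bundle_sizes: dict, shares: dict) -> int:
    return max(bundle_sizes,
               key=lambda i: fair_share_gain(bundle_sizes[i], shares[i]))

# Agent 2 has the lowest fair share fraction (1/4 vs 2/3), so it goes
# next; agent 3 (share 0) is never chosen ahead of the others.
picked = next_agent({1: 2, 2: 1, 3: 0}, {1: 3.0, 2: 4.0, 3: 0.0})
```

In the full mechanism an agent with $c_{i}=0$ only receives goods once no agent with a positive share can gain from the remaining unallocated goods, matching the behaviour described above.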
The only time Yankee Swap will allocate goods to these agents is when all the other agents with positive shares do not derive a positive marginal gain from any of the remaining unallocated goods. More formally, we have the following theorem, whose proof is omitted due to its similarity to that of Theorem 5.7. Theorem 5.8. When agents have MRF valuations and every agent $i$ has a fair share $c_{i}$, then General Yankee Swap run with $\phi(X,i)$ given by (1) computes an allocation which maximizes the lowest fair share fraction received by an agent and, subject to that, the second lowest fair share fraction and so on. With Theorem 5.8, it is no longer necessary to find a fair share fraction that can be guaranteed to all agents and then design an algorithm which allocates each agent at least this fraction of their fair share. General Yankee Swap will automatically compute an allocation which maximizes the lowest fair share fraction received by any agent. A straightforward corollary is that, when there exists an allocation that guarantees each agent their fair share, Yankee Swap outputs one such allocation. Barman and Verma (2021) show that when agents have MRF valuations, an allocation which guarantees each agent their maxmin share always exists. Using their result with Theorem 5.8, we have the following corollary. Corollary 5.9. When agents have MRF valuations and every agent has $c_{i}=\texttt{MMS}_{i}$, General Yankee Swap run with $\phi(X,i)$ given by (1) computes a MAX-USW allocation which gives each agent their maxmin share. Barman and Verma (2021) also present a polynomial time algorithm to compute the maxmin share of each agent. Since the procedure to compute maxmin shares is not necessarily strategyproof, the overall procedure of computing a maxmin share allocation using Yankee Swap may not be strategyproof either. 5.4 Max Weighted Nash Welfare Allocations We continue to assume that each agent $i$ has a weight $w_{i}>0$.
For any allocation $X$, let $P_{X}$ be the set of agents who receive a positive utility under $X$. An allocation $X$ is said to be max weighted Nash welfare (denoted by MWNW) if it first minimizes the number of agents who receive a utility of zero; subject to this, $X$ maximizes $\prod_{i\in P_{X}}v_{i}(X_{i})^{w_{i}}$ (Chakraborty et al., 2021a; Suksompong and Teh, 2022). Note that the exponent is necessary since simply using the product of the weighted utilities of each agent is equivalent to the Nash welfare with equal entitlements when assuming that all agents receive a positive utility. This is because any allocation that maximizes $\prod_{i\in N}v_{i}(X_{i})$ also maximizes $\prod_{i\in N}\frac{v_{i}(X_{i})}{w_{i}}$. The definition of $\psi$ for weighted Nash welfare is straightforward: $\psi(X)=(|P_{X}|,\prod_{i\in P_{X}}v_{i}(X_{i})^{w_{i}})$. It is easy to see that any allocation that maximizes $\psi(X)$ is MWNW. Trivially, $\psi$ satisfies (C1). We define the gain function $\phi(X,i)$ as follows: $$\displaystyle\phi(X,i)=\begin{cases}(1+\frac{1}{|X_{i}|})^{w_{i}}&(|X_{i}|>0)\\ M&(|X_{i}|=0)\end{cases}$$ (2) where $M$ is a large number greater than any possible $(1+\frac{1}{|X_{i}|})^{w_{i}}$. Lemma 5.10. $\phi(X,i)$ given by (2) is a valid gain function when $\psi(X)=(|P_{X}|,\prod_{i\in P_{X}}v_{i}(X_{i})^{w_{i}})$. Proof. This function trivially satisfies (G2). To show that $\phi$ satisfies (G1), some minor case work is required. Let $X$ be a non-redundant allocation and $i,j\in N$ be two agents. Let $Y$ be the allocation resulting from adding one unit of utility to $i$ in $X$ and similarly, let $Z$ be the allocation resulting from adding one unit of utility to $j$ in $X$. We need to show that if $\phi(X,i)<\phi(X,j)$ then $\psi(Y)<\psi(Z)$ and if $\phi(X,i)=\phi(X,j)$, then $\psi(Y)=\psi(Z)$. Case 1: $|X_{i}|=|X_{j}|=0$. In this case, it is easy to see that both $\phi(X,i)=\phi(X,j)$ and $\psi(Y)=\psi(Z)$. Case 2: $|X_{i}|>|X_{j}|=0$.
Then, by construction $\phi(X,j)>\phi(X,i)$. We also have $\psi(Z)>\psi(Y)$ since $|P_{Z}|>|P_{Y}|$. Note that this argument also covers the case where $|X_{j}|>|X_{i}|=0$. Case 3: $|X_{i}|>0$ and $|X_{j}|>0$. In this case, note that $$\displaystyle\phi(X,i)=\frac{(|X_{i}|+1)^{w_{i}}}{|X_{i}|^{w_{i}}}=\frac{|Y_{i}|^{w_{i}}}{|X_{i}|^{w_{i}}}=\frac{\prod_{h\in P_{Y}}|Y_{h}|^{w_{h}}}{\prod_{h\in P_{X}}|X_{h}|^{w_{h}}}$$ since $|P_{Y}|=|P_{X}|>0$. Similarly, we have $\phi(X,j)=\frac{\prod_{h\in P_{Z}}|Z_{h}|^{w_{h}}}{\prod_{h\in P_{X}}|X_{h}|^{w_{h}}}$. Therefore, since $|P_{Y}|=|P_{Z}|>0$, we have $\phi(X,i)>\phi(X,j)$ if and only if $\psi(Y)>\psi(Z)$. Similarly, we have $\phi(X,i)=\phi(X,j)$ if and only if $\psi(Y)=\psi(Z)$. The above analysis shows that $\phi$ satisfies (G1) as well. ∎ Invoking Lemma 5.10, we have the following theorem. Theorem 5.11. When agents have MRF valuations and each agent $i$ has a weight $w_{i}$, General Yankee Swap with $\phi$ given by (2) computes a max weighted Nash welfare allocation. When all weights are uniform, leximin and max Nash welfare allocations have the same sorted utility vector (Babaioff et al., 2021a). This implies that they are equivalent notions of fairness. However, when agents have different weights, weighted leximin and weighted Nash welfare are not equivalent and can have different sorted utility vectors. Consider the following example: Example 5.12. Consider an example with two agents $N=\{1,2\}$ and six goods; $w_{1}=2$ and $w_{2}=8$. Both agents have additive valuations, and value all items at $1$. Any weighted leximin allocation allocates two goods to agent $1$ and four goods to agent $2$, with a weighted sorted utility vector of $(\frac{1}{2},1)$. However, any max weighted Nash welfare allocation allocates one good to agent $1$ and five goods to agent $2$. This allocation has a worse weighted sorted utility vector $(\frac{1}{2},\frac{5}{8})$, but a higher weighted Nash welfare.
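The numbers in Example 5.12 can be checked directly. The sketch below (data taken from the example; the helper names are ours) computes the weighted Nash welfare $\prod_{i}v_{i}(X_{i})^{w_{i}}$ and the weighted sorted utility vector for the two candidate splits of the six goods.

```python
# Example 5.12: two agents with weights (2, 8); both value every one of
# the six goods at 1, so a bundle's utility equals its size.

w = (2, 8)

def weighted_nash_welfare(utils):
    """prod_i v_i(X_i)^{w_i} (all agents here have positive utility)."""
    prod = 1
    for u, wi in zip(utils, w):
        prod *= u ** wi
    return prod

def weighted_sorted_vector(utils):
    """Weighted utilities v_i(X_i)/w_i, sorted ascending."""
    return tuple(sorted(u / wi for u, wi in zip(utils, w)))

leximin = (2, 4)  # weighted leximin split of the six goods
mwnw = (1, 5)     # max weighted Nash welfare split

nw_leximin = weighted_nash_welfare(leximin)  # 2^2 * 4^8 = 262144
nw_mwnw = weighted_nash_welfare(mwnw)        # 1^2 * 5^8 = 390625
```

The weighted Nash welfare of the $1/5$ split ($5^{8}=390625$) exceeds that of the $2/4$ split ($2^{2}\cdot 4^{8}=262144$), while its weighted sorted utility vector $(0.5,0.625)$ is lexicographically dominated by $(0.5,1)$, exactly as the example claims.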
6 Limitations The previous section describes several fairness objectives for which Yankee Swap works. This raises the natural question: is there any reasonable fairness notion where Yankee Swap does not work? The major limitation of Yankee Swap is that it cannot be used to achieve envy-based fairness properties. An allocation $X$ is said to be envy free if $v_{i}(X_{i})\geq v_{i}(X_{j})$ for all $i,j\in N$. Indeed, this is not always possible to achieve. This impossibility has resulted in several relaxations like envy free up to one good (EF1) (Lipton et al., 2004; Budish, 2011) and envy free up to any good (EFX) (Caragiannis et al., 2016; Plaut and Roughgarden, 2017). However, at their core, these envy relaxations are still fairness objectives that violate the Pareto dominance property (C1): by increasing the utility of an agent currently being envied by other agents, we decrease the fairness of the allocation while Pareto dominating the allocation. One workaround for this is using Yankee Swap to compute leximin allocations and hoping that leximin allocations have good envy guarantees. This works when all the agents have equal weights — prioritized Lorenz dominating allocations are guaranteed to be EFX. However, when agents have different weights, Yankee Swap fails to compute weighted envy free up to one good (WEF1) allocations (Chakraborty et al., 2021a). For clarity, an allocation $X$ is WEF1 when for all $i,j\in N$, $\frac{v_{i}(X_{i})}{w_{i}}\geq\frac{v_{i}(X_{j}\setminus\{g\})}{w_{j}}$ for some $g\in X_{j}$. Yankee Swap fails mainly due to the fact that when agents have MRF valuations, it may be the case that no MAX-USW allocation is WEF1. Therefore, irrespective of the choice of $\phi$, Yankee Swap cannot always compute a WEF1 allocation (Proposition 3.9). This is illustrated in the following example. Example 6.1. Consider an example with two agents $\{1,2\}$ and four goods $\{g_{1},g_{2},g_{3},g_{4}\}$. We have $w_{1}=10$ and $w_{2}=1$.
The valuation function of every agent $i$ is the MRF $v_{i}(S)=\min\{|S|,2\}$. Any MAX-USW allocation will assign two goods to both agents. However, no such allocation is WEF1: agent $1$ (weighted) envies agent $2$. This is because agent $1$ receives a weighted utility of $\frac{2}{10}$ but agent $2$ receives a weighted utility of $2$; even after dropping any good, agent $2$’s weighted utility (as seen by both agents) will be $1$. There does exist a WEF1 allocation in this example but achieving WEF1 comes at an unreasonable cost of welfare. To achieve WEF1, we must allocate $3$ goods to agent $1$. However, the third good offers no value to agent $1$. In effect, we are obligated to “burn” an item in order to satisfy WEF1. This example seems to suggest that WEF1 is not a suitable envy relaxation for MRF valuations. We leave the question of creating suitable envy relaxations to future work. 7 Conclusion and Future Work In this work, we study the fair division of goods when agents have MRF valuations. Our main contribution is a flexible framework that can be used to optimize several fairness objectives. The General Yankee Swap framework is fast, strategyproof and always maximizes utilitarian social welfare. The General Yankee Swap framework is a strong theoretical tool. We believe it has several applications outside the ones described in Section 5. This is a very promising direction for future work. One specific example is the computation of weighted maxmin share allocations. Theorem 5.8 shows that to compute an allocation which gives each agent the maximum fraction of their fair share possible, it suffices to simply input these fair shares into Yankee Swap. Therefore, the problem of computing fair share allocations has effectively been reduced to the problem of computing fair shares when agents have MRF valuations. It would be very interesting to see the different kinds of weighted maxmin shares that can be computed and used by Yankee Swap.
Another interesting direction is to show the tightness of this framework. It is unclear whether the conditions (C1) and (C2) are absolutely necessary for Yankee Swap to work. If they are not, it would be interesting to see how they can be relaxed to allow for the optimization of a larger class of fairness properties. References Aziz et al. [2019] Haris Aziz, Hau Chan, and Bo Li. Weighted maxmin fair share allocation of indivisible chores. In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI), pages 46–52, 2019. Aziz et al. [2020] Haris Aziz, Hervé Moulin, and Fedor Sandomirskiy. A polynomial-time algorithm for computing a pareto optimal and almost proportional allocation. Operations Research Letters, 48(5):573–578, 2020. Babaioff et al. [2021a] Moshe Babaioff, Tomer Ezra, and Uriel Feige. Fair and truthful mechanisms for dichotomous valuations. In Proceedings of the 35th AAAI Conference on Artificial Intelligence (AAAI), pages 5119–5126, 2021a. Babaioff et al. [2021b] Moshe Babaioff, Tomer Ezra, and Uriel Feige. Fair-share allocations for agents with arbitrary entitlements. In Proceedings of the 22nd ACM Conference on Economics and Computation (EC), page 127, 2021b. Barman and Verma [2021] Siddharth Barman and Paritosh Verma. Existence and computation of maximin fair allocations under matroid-rank valuations. In Proceedings of the 20th International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), pages 169–177, 2021. Benabbou et al. [2021] Nawal Benabbou, Mithun Chakraborty, Ayumi Igarashi, and Yair Zick. Finding fair and efficient allocations for matroid rank valuations. ACM Transactions on Economics and Computation, 9(4), 2021. Bouveret et al. [2016] Sylvain Bouveret, Yann Chevaleyre, and Nicolas Maudet. Fair allocation of indivisible goods. In Felix Brandt, Vincent Conitzer, Ulle Endriss, Jérôme Lang, and Ariel D. Procaccia, editors, Handbook of Computational Social Choice, chapter 12. 
Cambridge University Press, 2016. Budish [2011] Eric Budish. The combinatorial assignment problem: Approximate competitive equilibrium from equal incomes. Journal of Political Economy, 119(6):1061–1103, 2011. Caragiannis et al. [2016] Ioannis Caragiannis, David Kurokawa, Hervé Moulin, Ariel D. Procaccia, Nisarg Shah, and Junxing Wang. The unreasonable fairness of maximum nash welfare. In Proceedings of the 17th ACM Conference on Economics and Computation (EC), pages 305–322, 2016. Chakraborty et al. [2021a] Mithun Chakraborty, Ayumi Igarashi, Warut Suksompong, and Yair Zick. Weighted envy-freeness in indivisible item allocation. ACM Transactions on Economics and Computation, 9, 2021a. ISSN 2167-8375. Chakraborty et al. [2021b] Mithun Chakraborty, Ulrike Schmidt-Kraepelin, and Warut Suksompong. Picking sequences and monotonicity in weighted fair division. Artificial Intelligence, 301, 2021b. Chakraborty et al. [2022] Mithun Chakraborty, Erel Segal-Halevi, and Warut Suksompong. Weighted fairness notions for indivisible items revisited. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), pages 4949–4956, 2022. Farhadi et al. [2019] Alireza Farhadi, Mohammad Ghodsi, MohammadTaghi Hajiaghayi, Sébastien Lahaie, David Pennock, Masoud Seddighin, Saeed Seddighin, and Hadi Yami. Fair allocation of indivisible goods to asymmetric agents. Journal of Artificial Intelligence Research, 64(1):1–20, 2019. Garg et al. [2021] Jugal Garg, Edin Husić, Aniket Murhekar, and László Végh. Tractable fragments of the maximum nash welfare problem, 2021. URL https://arxiv.org/abs/2112.10199. Ghodsi et al. [2018] Mohammad Ghodsi, MohammadTaghi HajiAghayi, Masoud Seddighin, Saeed Seddighin, and Hadi Yami. Fair allocation of indivisible goods: Improvements and generalizations. In Proceedings of the 19th ACM Conference on Economics and Computation (EC), pages 539–556, 2018. Halpern et al. [2020] Daniel Halpern, Ariel D. Procaccia, Alexandros Psomas, and Nisarg Shah. 
Fair division with binary valuations: One rule to rule them all. In Proceedings of the 16th Conference on Web and Internet Economics (WINE), pages 370–383, 2020. Li et al. [2022] Bo Li, Yingkai Li, and Xiaowei Wu. Almost (weighted) proportional allocations for indivisible chores. In Proceedings of the ACM Web Conference 2022, pages 122–131, 2022. Lipton et al. [2004] R. J. Lipton, E. Markakis, E. Mossel, and A. Saberi. On approximately fair allocations of indivisible goods. In Proceedings of the 5th ACM Conference on Economics and Computation (EC), pages 125–131, 2004. Plaut and Roughgarden [2017] Benjamin Plaut and Tim Roughgarden. Almost envy-freeness with general valuations. ArXiv, abs/1707.04769, 2017. Procaccia and Wang [2014] Ariel D. Procaccia and Junxing Wang. Fair enough: Guaranteeing approximate maximin shares. In Proceedings of the 15th ACM Conference on Economics and Computation (EC), pages 675–692, 2014. Schrijver [2003] A. Schrijver. Combinatorial Optimization - Polyhedra and Efficiency. Springer, 2003. Suksompong and Teh [2022] Warut Suksompong and Nicholas Teh. On maximum weighted nash welfare for binary valuations. Mathematical Social Sciences, 117:101–108, 2022. ISSN 0165-4896. Viswanathan and Zick [2022] Vignesh Viswanathan and Yair Zick. Yankee swap: a fast and simple fair allocation mechanism for matroid rank valuations, 2022. URL https://arxiv.org/abs/2206.08495.
How Parallel Circuit Execution Can Be Useful for NISQ Computing? Siyuan Niu LIRMM, University of Montpellier 34090, Montpellier, France [email protected]    Aida Todri-Sanial LIRMM, University of Montpellier, CNRS 34090, Montpellier, France [email protected] Abstract In the short term, quantum computing is performed on Noisy Intermediate-Scale Quantum (NISQ) hardware. Only small circuits can be executed reliably on a quantum machine due to the unavoidable noisy quantum operations on NISQ devices, leading to the under-utilization of hardware resources. With the growing demand to access quantum hardware, how to utilize it more efficiently while maintaining output fidelity is becoming a timely issue. A parallel circuit execution technique has been proposed to address this problem by executing multiple programs on the hardware simultaneously. It can improve the hardware throughput and reduce the overall runtime. However, accumulated noise such as crosstalk can decrease the output fidelity in parallel workload execution. In this paper, we first give an in-depth overview of state-of-the-art parallel circuit execution methods. Second, we propose a Quantum Crosstalk-aware Parallel workload execution method (QuCP) without the overhead of crosstalk characterization. Third, we investigate the trade-off between hardware throughput and fidelity loss to explore the hardware limitation with parallel circuit execution. Finally, we apply parallel circuit execution to VQE and the zero-noise extrapolation error mitigation method to showcase its various applications in advancing NISQ computing. I Introduction Today's quantum computers are qualified as Noisy Intermediate-Scale Quantum (NISQ) devices with 50 to hundreds of qubits [21]. They are limited by several physical constraints and noisy quantum operations, and there are not enough qubits to realize the quantum error correction codes (QECC) [3] for a universal fault-tolerant quantum computer. 
Current quantum chips give reliable results only when executing a small circuit with shallow depth, causing a waste of hardware resources. Moreover, there is a growing demand to access quantum devices via the cloud, which leads to a large number of jobs in the queue and long waiting times for users. For example, it can take several days to get the result if we submit a circuit to IBM public quantum chips. Therefore, how to efficiently make use of quantum hardware to reduce the total runtime of circuits is becoming a timely problem. The parallel circuit execution technique was first proposed in [4] to target this problem. It allows a user to execute several quantum programs on a quantum chip simultaneously, or multiple users to share one quantum device at the same time. It improves the quantum hardware throughput and reduces the users' waiting time. However, the results show that the output fidelities of these circuits are decreased. Other approaches [14, 19, 20] have been proposed to enhance this technique, introducing different circuit partition methods and mapping algorithms, and taking crosstalk into account. Their results demonstrate that parallel circuit execution can be of particular interest for quantum applications requiring simultaneous sub-problem executions. In this paper, we focus on investigating how parallel circuit execution can be useful for NISQ computing. Our major contributions are as follows: • We provide an in-depth overview of parallel workload execution methods and outline their advantages and limitations. • We propose a Quantum Crosstalk-aware Parallel workload execution method (QuCP) which considers crosstalk error while eliminating the significant overhead of crosstalk characterization methods. • We perform parallel circuit execution on IBM quantum devices and analyze the hardware limitation of executing multiple circuits simultaneously. 
• We apply parallel circuit execution to VQE and zero-noise extrapolation (ZNE) to demonstrate its applications to NISQ algorithms and error mitigation techniques. II Background and State of the Art II-A Introduction of Parallel Circuit Execution As the size of a quantum chip increases, there is a need to execute multiple shallow-depth circuits in parallel. This not only improves hardware throughput (the number of used qubits divided by the total number of qubits) but also reduces the overall runtime (waiting time + execution time). Fig. 1(a) shows an example of executing one 4-qubit quantum circuit on IBM Q 16 Melbourne. The circuit is mapped to a reliable region with dense connectivity. In this case, the hardware throughput is only 26.7% and most of the qubits are unused. It is possible to find another reliable region to run two 4-qubit circuits in parallel, as shown in Fig. 1(b). The hardware throughput is increased to 53.3%, and the total runtime is reduced by half. However, as hardware throughput increases, output fidelity is reduced because: (1) Qubits with high fidelities are sparsely distributed, and it is difficult to execute all the quantum circuits on reliable regions. (2) Running multiple circuits in parallel introduces a higher chance of crosstalk error [22]. How to trade off output fidelity against hardware throughput to benchmark the IBM quantum hardware limitation is a focus of our work and is discussed in Section IV-B. II-B Review and Comparison with State of the Art There are mainly two steps to realize a parallel circuit execution method: (1) Allocate partitions to multiple circuits and make sure they do not interact with each other during execution. (2) Make all the circuits executable on hardware using a parallel qubit mapping approach. Here, we compare the state-of-the-art methods: MultiQC [4], QuCloud [14], QuMC [19], and CNA [20], and discuss the key features of designing a parallel workload execution algorithm. Crosstalk. 
It is one of the major noise sources in NISQ devices and can reduce the output fidelity significantly [17]. When multiple quantum operations are executed simultaneously, the state of one qubit might be corrupted by the operations on the other qubits if crosstalk exists. In parallel circuit execution, as several circuits are executed simultaneously, the probability of crosstalk is increased. QuMC considers crosstalk at the partition level, avoiding it between partitions, whereas CNA considers it at the gate level during the qubit mapping process. Characterization of crosstalk. In order to account for crosstalk in parallel circuit execution, we must first characterize it. Both QuMC and CNA use Simultaneous Randomized Benchmarking (SRB) [7] to characterize crosstalk on the target quantum device because of its ability to quantify the crosstalk impact between simultaneous CNOT operations. However, this approach becomes expensive as the size of a quantum chip increases. A better crosstalk characterization or mitigation method is needed. Qubit partitioning. This process aims to allocate reliable partitions to each program. Except for CNA, all the previous works propose their own qubit partition algorithms, taking hardware topology and calibration data into account. In addition, QuMC considers crosstalk during qubit partitioning. Qubit mapping. The objective is to make quantum circuits executable on quantum hardware with respect to the hardware topology. It includes two parts: initial mapping and routing. MultiQC and QuMC use a noise-aware mapping approach [18], whereas CNA chooses another noise-adaptive method [16] while considering crosstalk. QuCloud considers both inter- and intra-program SWAPs to reduce the SWAP count but introduces potential crosstalk error. Task scheduling. All these methods use the As Late As Possible (ALAP) approach for task scheduling, allowing qubits to remain in the ground state for as long as possible. 
It avoids the extra decoherence error caused by the parallel execution of circuits with different depths and is the default scheduling method in the Qiskit compiler [6]. Independent vs Correlated. One important question in parallel circuit execution is to determine the number of circuits to execute simultaneously. QuCloud and QuMC propose different metrics to estimate the fidelity of an allocated partition. QuMC further introduces a fidelity threshold to select the optimal number of simultaneous circuits. In conclusion, QuMC covers all the important factors in designing a parallel circuit execution method and reports the best results compared with MultiQC and QuCloud. However, it still has the drawback of a large overhead when performing SRB for crosstalk characterization, which limits its performance when applied to large-scale quantum devices. It is essential to address this problem because the parallel circuit execution technique is especially suited to running small benchmarks on large machines. III Quantum Crosstalk-aware Parallel Execution To address the drawbacks of the previous works, we propose a Quantum Crosstalk-aware Parallel workload execution method (QuCP) which emulates the crosstalk impact without the overhead of characterizing it. Simultaneous Randomized Benchmarking is the most popular approach to quantify the crosstalk properties of a quantum device. For example, suppose we want to characterize the crosstalk effect between a pair of CNOTs $g_{i}$ and $g_{j}$. In that case, we need to first perform Randomized Benchmarking (RB) on the two CNOTs individually and then run simultaneous RB sequences on the pair. This process introduces a significant overhead if applied to large devices. Crosstalk is shown to be significant between neighboring CNOT pairs, and [17] proposed several optimization methods to lower the SRB overhead by grouping CNOT pairs separated by more than one-hop distance and performing SRB on them simultaneously. 
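The grouping idea from [17] can be sketched as follows: pack CNOT pairs into groups so that any two pairs in the same group are more than one hop apart in the coupling graph, allowing SRB to run on a whole group at once. The line topology, the `srb_groups` helper, and the pair list are illustrative assumptions, not the paper's code:

```python
from collections import deque

def shortest_dist(u, v, adj):
    # BFS distance between qubits u and v in the coupling graph.
    if u == v:
        return 0
    seen, queue = {u}, deque([(u, 0)])
    while queue:
        node, d = queue.popleft()
        for nxt in adj[node]:
            if nxt == v:
                return d + 1
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return float("inf")

def pair_dist(p, q, adj):
    # Distance between two CNOT pairs = minimum qubit-to-qubit distance.
    return min(shortest_dist(a, b, adj) for a in p for b in q)

def srb_groups(pairs, adj):
    # Greedily pack pairs separated by more than one hop into one group.
    groups = []
    for p in pairs:
        for g in groups:
            if all(pair_dist(p, q, adj) > 1 for q in g):
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

# Toy 6-qubit line 0-1-2-3-4-5 (an assumed topology, not a real device).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
adj = {q: set() for q in range(6)}
for a, b in edges:
    adj[a].add(b); adj[b].add(a)

pairs = [(0, 1), (2, 3), (4, 5)]
print(srb_groups(pairs, adj))   # [[(0, 1), (4, 5)], [(2, 3)]]
```

With three CNOT pairs on the line, (0,1) and (4,5) are far enough apart to share one SRB run, so two rounds suffice instead of three.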
However, SRB is still expensive even with these optimization methods. The overhead of performing SRB on two quantum chips, IBM Q 27 Toronto and IBM Q 65 Manhattan, is shown in Table I. The one-hop pairs (neighboring CNOT pairs) are allocated to a minimum number of groups. We choose 5 seeds to ensure a precise SRB result, and the number of jobs needed to perform SRB is 135 and 165, respectively, which takes a significant amount of time. The cost becomes even worse as the size of the quantum chip increases. Beyond its cost, SRB also requires users to master the technique itself, which is not trivial. Inspired by QuMC, which mitigates crosstalk error at the partition level, we introduce a crosstalk parameter $\sigma$ to represent the crosstalk impact on CNOT pairs without the need to learn and perform SRB. Given a list of circuits to execute simultaneously, we first use the heuristic qubit partitioning method from QuMC to allocate the partition for the first circuit and add these qubits to a list of allocated qubits $q_{allocate}$. For the rest of the circuits, each time we construct the possible partition candidates, we check whether any pairs inside the partition candidate are at one-hop distance from pairs inside $q_{allocate}$ according to the hardware topology. If so, we collect a list of potential crosstalk pairs $q_{crosstalk}$. To select the best partition, we calculate the Estimated Fidelity Score ($EFS$) of all the partition candidates, shown in (1). $$EFS=Avg_{2q(cross)}\times\#2q+Avg_{1q}\times\#1q+\sum_{Q_{i}\in P}R_{Q_{i}}$$ (1) $Avg_{2q(cross)}$ is the average 2-qubit (CNOT) error inside the partition $P$. Note that if $q_{crosstalk}$ is not empty, we multiply the CNOT errors of the pairs inside $q_{crosstalk}$ by the crosstalk parameter $\sigma$ to account for the crosstalk effect. 
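A minimal sketch of computing (1) follows, with hypothetical calibration data; as written, the score aggregates error rates, so a lower value indicates a better partition candidate:

```python
# Sketch of the Estimated Fidelity Score (EFS) in Eq. (1). The crosstalk
# parameter sigma inflates the CNOT error of pairs flagged as one hop
# away from already-allocated qubits (q_crosstalk), so SRB is not needed.

def efs(cnot_errors, crosstalk_pairs, n_2q, one_q_errors, n_1q,
        readout_errors, sigma=4.0):
    """cnot_errors: dict pair -> CNOT error rate inside the partition;
    crosstalk_pairs: subset of pairs flagged as potential crosstalk;
    n_2q / n_1q: 2- and 1-qubit gate counts of the circuit;
    readout_errors: per-qubit readout errors of the partition."""
    adjusted = [sigma * err if pair in crosstalk_pairs else err
                for pair, err in cnot_errors.items()]
    avg_2q = sum(adjusted) / len(adjusted)          # Avg_2q(cross)
    avg_1q = sum(one_q_errors) / len(one_q_errors)  # Avg_1q
    return avg_2q * n_2q + avg_1q * n_1q + sum(readout_errors)

# Hypothetical calibration data for a 3-qubit partition candidate.
score = efs(
    cnot_errors={(0, 1): 0.01, (1, 2): 0.02},
    crosstalk_pairs={(1, 2)},   # (1, 2) is one hop from q_allocate
    n_2q=10, one_q_errors=[0.001, 0.001, 0.002], n_1q=20,
    readout_errors=[0.03, 0.02, 0.04], sigma=4.0)
print(round(score, 4))          # 0.5667
```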
Similarly, $Avg_{1q}$ is the average 1-qubit error rate, and $R_{Q_{i}}$ is the readout error of the qubit $Q_{i}$ belonging to partition $P$. We can use the $EFS$ metric to emulate the impact of crosstalk and avoid it at the partition level without performing SRB to characterize the crosstalk properties of a quantum device. IV Experimental results In this section, we first evaluate the performance of our QuCP method by comparing it with state-of-the-art crosstalk-aware parallel circuit execution algorithms. Second, we explore the hardware limitation when performing parallel circuit execution. Finally, we demonstrate the benefit of applying parallel circuit execution to the VQE and ZNE algorithms. IV-A Crosstalk-aware Parallel Circuit Execution We compare our QuCP with two crosstalk-aware parallel workload execution approaches, QuMC and CNA, which differ in terms of crosstalk-mitigation method, qubit partitioning, and qubit mapping process. Both methods need to perform SRB for crosstalk characterization. We use SRB to characterize the crosstalk properties of IBM Q 27 Toronto (Fig. 2). Table II shows the benchmarks that we use to compare these algorithms. They are collected from [12, 24], including circuits for logical operations, error correction, quantum simulation, etc. We calculate the output fidelity of the simultaneous circuits to evaluate the performance of these algorithms. Some of the benchmarks have a single correct output, and for them we use the Probability of a Successful Trial (PST) metric defined in (2). The results of the other benchmarks are supposed to be distributions. For these, we choose the Jensen-Shannon divergence (JSD) to measure the distance between two probability distributions, shown in (3), where $P$ and $Q$ are the two distributions to compare and $M=\frac{1}{2}(P+Q)$. It is based on the Kullback-Leibler divergence, shown in (4), with the benefit of always having a finite value and being symmetric. 
$$PST=\frac{\text{Number of successful trials}}{\text{Total number of trials}}$$ (2) $$JSD(P||Q)=\frac{1}{2}D(P||M)+\frac{1}{2}D(Q||M)$$ (3) $$D_{KL}(P||Q)=\sum_{x\in\mathcal{X}}P(x)\log(\frac{P(x)}{Q(x)})$$ (4) We execute three benchmarks on IBM Q 27 Toronto in parallel. The optimization_level in the Qiskit compiler is set to 3, which is the highest level of circuit optimization. First, we tune the crosstalk parameter $\sigma$ used in QuCP to verify its ability to mitigate crosstalk at the partition level without SRB by comparing its partitioning results with QuMC. When $\sigma\geq 4$, QuCP provides the same results as QuMC. This number is reasonable: since we calculate the average CNOT error rate inside the partition (see (1)), the averaging dilutes the impact of crosstalk on a single CNOT pair. Based on this experiment, we set $\sigma$ to 4 and compare QuCP with CNA to show the influence of crosstalk mitigation at the partition level versus the gate level for parallel circuit execution. The results in terms of JSD and PST are shown in Fig. 3. Note that a lower JSD or a higher PST is desirable. The benchmarks include identical and mixed combinations. Comparing QuCP with CNA, the fidelity characterized by JSD and PST is improved by 10.5% and 89.9%, respectively. The fidelity improvement comes from the different partitioning and mapping methods, which are two other important factors to consider for parallel circuit execution. QuCP has better results and achieves crosstalk mitigation with low overhead. IV-B Trade-off Between Hardware Throughput and Output Fidelity Enabling parallel circuit execution can improve the hardware throughput significantly. However, it reduces the circuit output fidelity at the same time. It is important to trade off between them and explore the hardware limitation of performing parallel circuit execution. We use our QuCP method to benchmark the hardware limitation. 
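Metrics (2)-(4) can be implemented directly; the example distributions below are illustrative, not measured data:

```python
import math

def kl(p, q):
    # D_KL(P || Q), Eq. (4); bins with p(x) = 0 contribute 0 by convention.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    # JSD(P || Q) = 0.5 D(P || M) + 0.5 D(Q || M), M = (P + Q)/2, Eq. (3).
    # M is never zero where P or Q is nonzero, so the value is always finite.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def pst(successes, trials):
    # Probability of a Successful Trial, Eq. (2).
    return successes / trials

ideal = [0.5, 0.5, 0.0, 0.0]        # e.g. an ideal Bell-state distribution
noisy = [0.45, 0.45, 0.05, 0.05]    # a hypothetical noisy measurement
print(jsd(ideal, noisy))            # small positive value, symmetric in P, Q
print(jsd(ideal, ideal))            # 0.0 for identical distributions
```

Unlike the raw KL divergence, JSD stays finite even when one distribution assigns zero probability to an outcome the other does not, which is why it suits noisy hardware counts.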
We first estimate the output fidelity difference between independent and parallel circuit executions based on $EFS$ (see (1)), then introduce a fidelity threshold to determine how many circuits can be executed simultaneously. Experiments are performed on IBM Q 65 Manhattan, which is IBM's largest quantum chip. We choose two circuits from Table II: 4mod5-v1_22 and alu-v0_27. We vary the value of the fidelity threshold to execute the same circuit an increasing number of times in parallel, and the results are shown in Fig. 4. When the fidelity threshold is zero, which indicates no fidelity difference between independent and simultaneous circuit execution, only one circuit is executed at a time. A larger threshold enables more circuits to be executed simultaneously. The number of parallel circuit executions varies from one to six, corresponding to hardware throughput from 7.7% to 46.2% and a total runtime reduction of up to six times. There is a significant fidelity loss when hardware throughput is over 38%, which indicates the hardware limit when performing parallel executions for circuits of a size similar to the benchmarks. IV-C Parallel Circuit Execution and VQE Algorithm The Variational Quantum Eigensolver (VQE) [10] is one of the most promising algorithms to achieve quantum advantage in the NISQ era and has recently been widely used in quantum chemistry. As a hybrid classical-quantum method with shallow circuits, it can be used to prepare approximations to the ground state energy of a Hamiltonian. However, it needs to split the computation into $O(N^{4})$ sub-problems, introducing a large overhead of measurement circuits [9]. Parallel circuit execution has been used in [5] to execute distinct molecular geometries at the same time to increase hardware throughput during the VQE routine. Our focus, in contrast, is to investigate parallel circuit execution on an independent VQE problem to reduce its measurement overhead. 
A Hamiltonian can be expressed as a sum of tensor products of Pauli operators. For naive measurement, we need a quantum circuit for each Pauli term and compute its expectation value to obtain the state energy. This overhead can be reduced by performing simultaneous measurements, grouping commuting Pauli terms to measure them at the same time [15, 9]. Here, we apply parallel circuit execution to VQE to estimate the ground state of the $H_{2}$ molecule at equilibrium bond length (0.735 angstroms) in the singlet state and with no charge, which can further reduce the measurement overhead. We first map the fermionic operators of the Hamiltonian to qubit operators using parity mapping [1], and we obtain a two-qubit Hamiltonian composed of 5 Pauli terms $\{II,IZ,ZI,ZZ,XX\}$. Naive measurement would require one circuit for each Pauli term to calculate the expectation value of the ansatz. These Pauli terms can be partitioned into two commuting groups using simultaneous measurement: $\{II,IZ,ZI,ZZ\}$ and $\{XX\}$. Note that the grouping result is not unique, but two groups are needed. We construct a heuristic ansatz state [10] composed of two repetitions. Each repetition layer has a set of $R_{y}R_{z}$ gates on each qubit, and each qubit is entangled with the others. Overall, we have 12 parameters for single-qubit rotations and two CNOTs for entanglers. For the sake of simplicity, we set the same value for all these parameters each time and regard them as a single parameter. We choose 8, 10, and 12 parameter values, which correspond to 16, 20, and 24 measurement circuits using the Pauli operator grouping (labeled as PG) simultaneous measurement method. We apply our QuCP method to PG to execute these circuits simultaneously and compare the independent process (PG) with the parallel process (QuCP + PG). Fig. 5 shows the results; we use the result calculated by the simulator as the baseline. 
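The two commuting groups above can be reproduced with a qubit-wise commutation check; this greedy grouping is a sketch, not the grouping code of [15, 9]:

```python
def qwc(a, b):
    # Two Pauli strings commute qubit-wise if, on every qubit, the
    # operators are equal or at least one of them is the identity.
    return all(x == y or x == "I" or y == "I" for x, y in zip(a, b))

def group_paulis(terms):
    # Greedily place each term into the first group it commutes with.
    groups = []
    for t in terms:
        for g in groups:
            if all(qwc(t, other) for other in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

# The 5 Pauli terms of the two-qubit H2 Hamiltonian from the text.
print(group_paulis(["II", "IZ", "ZI", "ZZ", "XX"]))
# [['II', 'IZ', 'ZI', 'ZZ'], ['XX']]
```

Each group then needs only one measurement circuit (with a basis rotation for the $XX$ group), so two circuits per parameter value suffice instead of five.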
We pick the minimum value from the results as the ground state energy estimate to calculate the error rate with respect to the baseline ($\Delta{E}$_base). Moreover, we use the calculation of Scipy's eigensolver as the theoretical result and check the error rate when comparing the obtained result with it ($\Delta{E}$_theory). The error rates and hardware throughput of these three experiments are shown in Table III. The hardware throughput can be up to 73.8% with an error rate of less than 10%. Such high hardware throughput is possible because the ansatz circuit is small and shallow. IV-D Parallel Circuit Execution and Error Mitigation As quantum error correction (QEC) requires a huge overhead of qubits to implement, an alternative scheme named quantum error mitigation (QEM) was proposed for error suppression on NISQ devices. There are many different error-mitigation techniques, including zero-noise extrapolation [13], dynamical decoupling [23], measurement error mitigation [2], etc. Among them, the zero-noise extrapolation (ZNE) method is the simplest yet powerful technique, based on error extrapolation. ZNE was introduced in [13]. The basic idea is to first execute the circuit at different noise levels and then extrapolate an estimated error-free value. It is implemented in two steps: (1) noise scaling and (2) extrapolation. A digital ZNE approach was proposed in [8] to scale noise by increasing the number of gates or the circuit depth. A list of folded circuits with different circuit depths is generated, and we calculate their expectation values. This method only requires the programmer's gate-level access to the processor. There are several methods for error extrapolation, such as polynomial, linear, and Richardson extrapolation. However, the ZNE approach introduces the overhead of executing one circuit multiple times with various depths to extrapolate the noise-free expectation value. 
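The extrapolation step can be sketched with a least-squares fit evaluated at zero noise; the scale factors and expectation values below are synthetic placeholders, not measured data:

```python
import numpy as np

def zne_linear(scale_factors, expectations):
    # Linear ZNE: fit a line to (scale, expectation) samples and
    # evaluate it at scale 0, the estimated zero-noise value.
    slope, intercept = np.polyfit(scale_factors, expectations, deg=1)
    return float(intercept)

scales = [1.0, 1.5, 2.0, 2.5]     # gate-folding scale factors
# Synthetic expectation values decaying linearly with the noise scale.
values = [0.80, 0.70, 0.60, 0.50]
print(zne_linear(scales, values))  # ~1.0, the zero-noise estimate
```

Polynomial or Richardson extrapolation follows the same pattern with a higher-degree fit; as noted below, the estimate is sensitive to noise in the sampled expectation values, so the choice of fit matters.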
Here, we demonstrate how to reduce this overhead by applying parallel circuit execution to the digital ZNE approach. In our experiment, we first use the fold_gates_at_random method from the Mitiq package [11]. It selects gates randomly and folds them to increase the circuit depth, representing different noise levels. A list of folded circuits is generated based on the scale factors. Then, we execute these circuits simultaneously on IBM Q 65 Manhattan using the QuCP approach and obtain the expectation values corresponding to the different noise levels. Finally, we apply various extrapolation methods integrated in Mitiq, including LinearFactory, PolyFactory, and RichardsonFactory, to calculate the estimated error-free result. One limitation of ZNE is that the extrapolation methods are sensitive to noise, so the extrapolated values depend strongly on the selected extrapolation method. Therefore, we only show the best estimated result among these methods, i.e., the result that is closest to the ideal result calculated by the simulator. We generate four folded circuits with scale factors from 1 to 2.5 with step 0.5. Three processes are included for comparison: (1) Execute the independent circuit on the best partition selected by the QuCP method without the ZNE method (labeled as Baseline). (2) Execute the folded circuits simultaneously using QuCP to perform the ZNE method (labeled as QuCP+ZNE). (3) Execute the folded circuits independently to perform the ZNE method (labeled as ZNE). The experimental results are shown in Fig. 6. The absolute error is the difference between the ideal expectation value calculated by the simulator and the obtained expectation value. According to the results, the baseline always has the largest error rate due to the lack of a mitigation technique. In most cases, ZNE gives the lowest error rate but requires multiple circuit executions. 
With the parallel circuit execution technique (QuCP+ZNE), the error rate can be decreased significantly compared to the baseline with the same number of circuit executions. Also, the hardware throughput is improved and the overall runtime is reduced by three times. On average, the error rate is reduced by 2x, and in the best case (benchmark alu-v0_27), the error rate is reduced by 11x. Even though the ZNE method was designed to scale the noise levels on the same occupied qubits, the errors can still be mitigated significantly by enlarging the circuit depth on different qubit partitions. This reveals underlying similarities between the errors on different qubits, which would be interesting to explore in future work. V Conclusion As the size of quantum chips grows and the demand for their accessibility increases, how to efficiently use the hardware resources is becoming a concern. The parallel circuit execution mechanism has been introduced to improve hardware throughput and reduce the total task runtime by enabling multiple circuits to be executed simultaneously. In this article, we explore the parallel circuit execution technique on NISQ hardware. We first compare the state-of-the-art methods and discuss their shortcomings and the impact of different factors. Second, we propose a crosstalk-aware parallel workload execution method without the overhead of crosstalk characterization. We also evaluate the NISQ hardware limitation of performing parallel circuit execution. The experiments investigating parallel circuit execution on VQE and the ZNE error mitigation method demonstrate how it can be useful for NISQ computing. It is a key enabler for quantum algorithms requiring parallel sub-problem executions, especially in the NISQ era. References [1] Sergey Bravyi, Jay M Gambetta, Antonio Mezzacapo, and Kristan Temme. Tapering off qubits to simulate fermionic hamiltonians. arXiv preprint arXiv:1701.08213, 2017. 
[2] Sergey Bravyi, Sarah Sheldon, Abhinav Kandala, David C Mckay, and Jay M Gambetta. Mitigating measurement errors in multiqubit experiments. Physical Review A, 103(4):042605, 2021. [3] A Robert Calderbank, Eric M Rains, PM Shor, and Neil JA Sloane. Quantum error correction via codes over GF(4). IEEE Transactions on Information Theory, 44(4):1369–1387, 1998. [4] Poulami Das, Swamit S Tannu, Prashant J Nair, and Moinuddin Qureshi. A case for multi-programming quantum computers. In Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture, pages 291–303, 2019. [5] Andrew Eddins, Mario Motta, Tanvi P Gujarati, Sergey Bravyi, Antonio Mezzacapo, Charles Hadfield, and Sarah Sheldon. Doubling the size of quantum simulators by entanglement forging. arXiv preprint arXiv:2104.10220, 2021. [6] Abraham Asfaw et al. Learn quantum computation using qiskit, 2020. [7] Jay M Gambetta, Antonio D Córcoles, Seth T Merkel, Blake R Johnson, John A Smolin, Jerry M Chow, Colm A Ryan, Chad Rigetti, S Poletto, Thomas A Ohki, et al. Characterization of addressability by simultaneous randomized benchmarking. Physical review letters, 109(24):240504, 2012. [8] Tudor Giurgica-Tiron, Yousef Hindy, Ryan LaRose, Andrea Mari, and William J Zeng. Digital zero noise extrapolation for quantum error mitigation. In 2020 IEEE International Conference on Quantum Computing and Engineering (QCE), pages 306–316. IEEE, 2020. [9] Pranav Gokhale, Olivia Angiuli, Yongshan Ding, Kaiwen Gui, Teague Tomesh, Martin Suchara, Margaret Martonosi, and Frederic T Chong. Optimization of simultaneous measurement for variational quantum eigensolver applications. In 2020 IEEE International Conference on Quantum Computing and Engineering (QCE), pages 379–390. IEEE, 2020. [10] Abhinav Kandala, Antonio Mezzacapo, Kristan Temme, Maika Takita, Markus Brink, Jerry M Chow, and Jay M Gambetta. Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets. 
Nature, 549(7671):242–246, 2017. [11] Ryan LaRose, Andrea Mari, Peter J. Karalekas, Nathan Shammah, and William J. Zeng. Mitiq: A software package for error mitigation on noisy quantum computers, 2020. [12] Ang Li and Sriram Krishnamoorthy. Qasmbench: A low-level qasm benchmark suite for nisq evaluation and simulation. arXiv preprint arXiv:2005.13018, 2020. [13] Ying Li and Simon C Benjamin. Efficient variational quantum simulator incorporating active error minimization. Physical Review X, 7(2):021050, 2017. [14] Lei Liu and Xinglei Dou. Qucloud: A new qubit mapping mechanism for multi-programming quantum computing in cloud environment. [15] Jarrod R McClean, Jonathan Romero, Ryan Babbush, and Alán Aspuru-Guzik. The theory of variational hybrid quantum-classical algorithms. New Journal of Physics, 18(2):023023, 2016. [16] Prakash Murali, Jonathan M Baker, Ali Javadi-Abhari, Frederic T Chong, and Margaret Martonosi. Noise-adaptive compiler mappings for noisy intermediate-scale quantum computers. In Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 1015–1029, 2019. [17] Prakash Murali, David C McKay, Margaret Martonosi, and Ali Javadi-Abhari. Software mitigation of crosstalk on noisy intermediate-scale quantum computers. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, pages 1001–1016, 2020. [18] Siyuan Niu, Adrien Suau, Gabriel Staffelbach, and Aida Todri-Sanial. A hardware-aware heuristic for the qubit mapping problem in the nisq era. IEEE Transactions on Quantum Engineering, 1:1–14, 2020. [19] Siyuan Niu and Aida Todri-Sanial. Enabling multi-programming mechanism for quantum computing in the nisq era. arXiv preprint arXiv:2102.05321, 2021. [20] Yasuhiro Ohkura. Crosstalk-aware nisq multi-programming. [21] John Preskill. Quantum computing in the NISQ era and beyond. Quantum, 2:79, 2018. 
[22] Sarah Sheldon, Easwar Magesan, Jerry M Chow, and Jay M Gambetta. Procedure for systematically tuning up cross-talk in the cross-resonance gate. Physical Review A, 93(6):060302, 2016. [23] Alexandre M Souza, Gonzalo A Álvarez, and Dieter Suter. Robust dynamical decoupling. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 370(1976):4748–4769, 2012. [24] Robert Wille, Daniel Große, Lisa Teuber, Gerhard W Dueck, and Rolf Drechsler. Revlib: An online resource for reversible functions and reversible circuits. In 38th International Symposium on Multiple Valued Logic (ismvl 2008), pages 220–225. IEEE, 2008.
An M-estimator of spatial tail dependence John Einmahl Department of Econometrics & OR and CentER, Tilburg University, P.O. Box 90153, 5000 LE Tilburg, The Netherlands. E-mail: [email protected] Anna Kiriliouk Institut de Statistique, Biostatistique et Sciences Actuarielles, Université catholique de Louvain, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium. E-mail: [email protected], [email protected] Andrea Krajina Institute of Mathematical Stochastics, University of Göttingen, Göttingen, Germany. E-mail:[email protected] Johan Segers Institut de Statistique, Biostatistique et Sciences Actuarielles, Université catholique de Louvain, Voie du Roman Pays 20, B-1348 Louvain-la-Neuve, Belgium. E-mail: [email protected], [email protected] Abstract Tail dependence models for distributions attracted to a max-stable law are fitted using observations above a high threshold. To cope with spatial, high-dimensional data, a rank-based M-estimator is proposed relying on bivariate margins only. A data-driven weight matrix is used to minimize the asymptotic variance. Empirical process arguments show that the estimator is consistent and asymptotically normal. Its finite-sample performance is assessed in simulation experiments involving popular max-stable processes perturbed with additive noise. An analysis of wind speed data from the Netherlands illustrates the method. Keywords: Brown–Resnick process; exceedances; multivariate extremes; ranks; spatial statistics; stable tail dependence function 1 Introduction Max-stable random processes have become the standard for modelling extremes of environmental quantities, such as wind speed, precipitation, or snow depth. In such a context, data are modelled as realizations of spatial processes, observed at a finite number of locations. The statistical problem then consists of modelling the joint tail of a multivariate distribution. 
This problem can be divided into two separate issues: modelling the marginal distributions and modelling the dependence structure. A popular practice is to transform the marginals to an appropriate form and to fit a max-stable model to componentwise monthly or annual maxima using composite likelihood methods. This is done either in a frequentist setting (Padoan et al., 2010; Davison et al., 2012) or in a Bayesian one (Reich and Shaby, 2012; Cooley et al., 2012). Alternatively, Yuen and Stoev (2013) propose an M-estimator based on finite-dimensional cumulative distribution functions. Popular parametric models for max-stable processes include the ones proposed by Smith (1990), Schlather (2002) and Kabluchko et al. (2009), going back to Brown and Resnick (1977). Recent review articles on spatial extremes include Cooley et al. (2012), Davison et al. (2012) and Ribatet (2013). The above approaches consider block maxima, whereas more information can be extracted from the data by using all data vectors of which at least one component is large. Although threshold-based methods are common in multivariate extreme value theory, in spatial extremes they are only starting to be developed. A first example is de Haan and Pereira (2006), where several one- and two-dimensional models for spatial extremes are proposed. Another parametric model for spatial tail dependence is introduced in Buishand et al. (2008). The parameter estimator is shown to be asymptotically normal and the method is applied to daily rainfall data. In Huser and Davison (2014), a pairwise censored likelihood is used to analyze space-time extremes. The method is applied to an extension of Schlather’s model. Another study of space-time extremes can be found in Davis et al. (2013), where asymptotic normality of the pairwise likelihood estimators of the parameters of a Brown–Resnick process is proven for a jointly increasing number of spatial locations and time points. 
In Jeon and Smith (2012), bivariate threshold exceedances are modelled using a composite likelihood procedure. Asymptotic normality of the estimator is obtained by assuming second-order regular variation for the distribution function that is in the max-domain of attraction of an extreme value distribution. A numerical study comparing two distinct approaches for composite likelihoods can be found in Bacro and Gaetan (2013). In Wadsworth and Tawn (2013), a censored Poisson process likelihood is considered in order to simplify the likelihood expressions for the Brown–Resnick process, and in Engelke et al. (2014) the distribution of extremal increments of processes that are in the max-domain of attraction of the Brown–Resnick process is investigated. Finally, in Bienvenüe and Robert (2014), a censored likelihood procedure is used to fit high-dimensional extreme value models for which the tail dependence function has a particular representation. The above methods all require estimation of the tails of the marginal distributions. This is not necessarily an easy task if the number of variables is large. Moreover, they are likelihood-based and therefore cannot be used to fit, e.g., spectrally discrete max-stable models (Wang and Stoev, 2011). The aim of this paper is to propose a new method for fitting multivariate tail dependence models to high-dimensional data arising for instance in spatial statistics. No likelihoods come into play, as our approach relies on the stable tail dependence function, which is related to the upper tail of the underlying cumulative distribution function. The method is threshold-based in the sense that a data point is considered to be extreme if the rank of at least one of its components is sufficiently high. The only assumption is that, upon standardization of the margins, the underlying distribution is attracted to a parametrically specified multivariate extreme value distribution, see (2.2) below.
By reducing the data to their ranks, the tails of the univariate marginal distributions need not be estimated. Indeed, the marginal distributions are not even required to be attracted to an extreme value distribution. Another advantage of the rank-based approach is that the estimator is invariant under monotone transformations of the margins, notably Box–Cox-type transformations. Our starting point is Einmahl et al. (2012), where an M-estimator for a parametrically modelled tail dependence function in dimension $d$ is derived. However, that method crucially relies on $d$-dimensional integration, which becomes intractable in high dimensions. This is why we consider tail dependence functions of pairs of variables only. Our estimator is constructed as the minimizer of the distance between a vector of integrals of parametric pairwise tail dependence functions and the vector of their empirical counterparts. The asymptotic variance of the estimator can be minimized by replacing the Euclidean distance by a quadratic form based on a weight matrix estimated from the data. In the simulation studies we will compute estimates in dimensions up to 100. We show that our estimator is consistent under minimal assumptions and asymptotically normal under an additional condition controlling the growth of the threshold. In our analysis, we take into account the variability stemming from the rank transformation, the randomness of the threshold, the random weight matrix and, in particular, the fact that the max-stable model is only an approximation in the tail. A final point worth noting is the generality of our methodology. Where many studies focus on a concrete parametric (tail) model, ours is generic and makes only weak assumptions. Also, the field of application of extreme value analysis in high dimensions is not restricted to environmental studies: see for example Dematteo et al.
(2013), where a spectral clustering approach is introduced and applied to gas pressure data in the shipping industry. The paper is organized as follows. Section 2 presents the necessary background on multivariate extreme value theory and extremes of stochastic processes. Section 3 contains the definition of the pairwise M-estimator and the main theoretical results on consistency and asymptotic normality, as well as the practical aspects of the choice of the weight matrix. In Section 4 the tail dependence functions of the anisotropic Brown–Resnick process and the Smith model are presented, as well as several simulation studies: two for a large number of locations, illustrating the computational feasibility of the estimator in high dimensions, and one for a smaller number of locations, presenting the benefits of the weight matrix. In addition, we compare the performance of our estimator to the one proposed in Engelke et al. (2014). Finally, we present an application to wind speed data from the Netherlands. Proofs and computations are deferred to the appendices. 2 Background 2.1 Multivariate extreme value theory Let $\bm{X}_{i}=(X_{i1},\ldots,X_{id})$, $i\in\{1,\ldots,n\}$, be independent random vectors in $\mathbb{R}^{d}$ with common continuous distribution function $F$ and marginal distribution functions $F_{1},\ldots,F_{d}$. Write $M_{nj}=\max_{i=1,\ldots,n}X_{ij}$ for $j=1,\ldots,d$. 
We say that $F$ is in the max-domain of attraction of an extreme-value distribution $G$ if there exist sequences $a_{nj}>0$ and $b_{nj}\in\mathbb{R}$ for $j=1,\ldots,d$ such that $$\lim_{n\rightarrow\infty}\mathbb{P}\bigg[\frac{M_{n1}-b_{n1}}{a_{n1}}\leq x_{1},\ldots,\frac{M_{nd}-b_{nd}}{a_{nd}}\leq x_{d}\bigg]=G(\bm{x}),\qquad\bm{x}\in\mathbb{R}^{d}.$$ (2.1) The margins, $G_{1},\ldots,G_{d}$, of $G$ are univariate extreme value distributions and the function $G$ is determined by $$G(\bm{x})=\exp\{-\ell(-\log G_{1}(x_{1}),\ldots,-\log G_{d}(x_{d}))\},$$ where $\ell:[0,\infty)^{d}\rightarrow[0,\infty)$ is called the stable tail dependence function. The distribution function of $(1/\{1-F_{j}(X_{1j})\})_{j=1,\ldots,d}$ is in the max-domain of attraction of the extreme-value distribution $G_{0}(\bm{z})=\exp\{-\ell(1/z_{1},\ldots,1/z_{d})\}$, $\bm{z}\in(0,\infty)^{d}$, and we can retrieve the function $\ell$ via $$\ell(\bm{x})=\lim_{t\downarrow 0}t^{-1}\,\mathbb{P}[1-F_{1}(X_{11})\leq tx_{1}\text{ or }\ldots\text{ or }1-F_{d}(X_{1d})\leq tx_{d}],\qquad\bm{x}\in[0,\infty)^{d}.$$ (2.2) Note that $G_{0}$ has unit Fréchet margins, $G_{0,j}(z_{j})=\exp(-1/z_{j})$ for $z_{j}>0$ and $j=1,\ldots,d$. If $F$ is already an extreme-value distribution, then it is attracted by itself. Otherwise, relation (2.1) is equivalent to relation (2.2) and convergence of the $d$ marginal distributions in (2.1). From now on we will only assume the weaker relation (2.2) instead of (2.1), making no assumptions on the marginal distributions $F_{1},\ldots,F_{d}$ except for continuity. The function $\ell$ is convex, homogeneous of order one and satisfies $\ell(0,\ldots,0,x_{j},0,\ldots,0)=x_{j}$ for $j=1,\ldots,d$. We assume that $\ell$ belongs to some parametric family $\{\ell(\cdot\,;\theta):\theta\in\Theta\}$, with $\Theta\subset\mathbb{R}^{p}$. There are numerous such parametric models and new families of models continue to be invented.
We will see some examples of parametric stable tail dependence functions in Section 4. For more examples and background on multivariate extreme value theory, see Coles (2001), Beirlant et al. (2004), or de Haan and Ferreira (2006). 2.2 Extremes of stochastic processes Max-stable processes arise in the study of component-wise maxima of random processes rather than of random vectors. Let $\mathcal{S}$ be a compact subset of $\mathbb{R}^{2}$ and let $\mathbb{C}(\mathcal{S})$ denote the space of continuous, real-valued functions on $\mathcal{S}$, equipped with the supremum norm $\|f\|_{\infty}=\sup_{\bm{s}\in\mathcal{S}}|f(\bm{s})|$ for $f\in\mathbb{C}(\mathcal{S})$. The restriction to $\mathbb{R}^{2}$ is for convenience only. In the applications to spatial data that we have in mind, $\mathcal{S}$ will represent the region of interest. Consider independent copies $\{X_{i}(\bm{s})\}_{\bm{s}\in\mathcal{S}}$ for $i\in\{1,\ldots,n\}$ of a process $\{X(\bm{s})\}_{\bm{s}\in\mathcal{S}}$ in $\mathbb{C}(\mathcal{S})$. Then $X$ is in the max-domain of attraction of the max-stable process $Z$ if there exist sequences of continuous functions $a_{n}(\bm{s})>0$ and $b_{n}(\bm{s})$ such that $$\bigg\{\frac{\max_{i=1,\ldots,n}X_{i}(\bm{s})-b_{n}(\bm{s})}{a_{n}(\bm{s})}\bigg\}_{\bm{s}\in\mathcal{S}}\overset{w}{\rightarrow}\{Z(\bm{s})\}_{\bm{s}\in\mathcal{S}},\qquad\text{ as }n\rightarrow\infty,$$ where $\overset{w}{\rightarrow}$ denotes weak convergence in $\mathbb{C}(\mathcal{S})$; see de Haan and Lin (2001) for a full characterization of max-domain of attraction conditions for the case $\mathcal{S}=[0,1]$. A max-stable process $Z$ is called simple if its marginal distribution functions are all unit Fréchet. Although our interest lies in the underlying stochastic processes $X_{i}$, data are always obtained on a finite subset of $\mathcal{S}$ only, i.e., at fixed locations $\bm{s}_{1},\ldots,\bm{s}_{d}$.
As a consequence, statistical inference is based on a sample of $d$-dimensional random vectors. The finite-dimensional distributions of $Z$ are multivariate extreme value distributions. This brings us back to the ordinary, multivariate setting. 3 M-estimator 3.1 Estimation As in Section 2.1, let $\bm{X}_{1},\ldots,\bm{X}_{n}$ be an independent random sample from a $d$-variate distribution $F$ with continuous margins and with stable tail dependence function $\ell$, see equation (2.2). Assuming that $\ell$ belongs to a parametric family $\{\ell(\cdot\,;\theta):\theta\in\Theta\}$ with $\Theta\subset\mathbb{R}^{p}$, the goal is to estimate the parameter vector $\theta$. To this end, we first define a nonparametric estimator of $\ell$. Let $R_{ij}^{n}$ denote the rank of $X_{ij}$ among $X_{1j},\ldots,X_{nj}$ for $j=1,\ldots,d$. Replacing $F$ and $F_{1},\ldots,F_{d}$ in (2.2) by their empirical counterparts and replacing $t$ by $k/n$ yields $$\widehat{\ell}_{n,k}(\bm{x})\coloneqq\frac{1}{k}\sum_{i=1}^{n}\mathbbm{1}\left\{R_{i1}^{n}>n+\frac{1}{2}-kx_{1}\text{ or }\ldots\text{ or }R_{id}^{n}>n+\frac{1}{2}-kx_{d}\right\}.$$ (3.1) For the estimator to be consistent, we need $k=k_{n}\in\{1,\ldots,n\}$ to depend on $n$ in such a way that $k\rightarrow\infty$ and $k/n\rightarrow 0$ as $n\rightarrow\infty$. The estimator was originally defined in the bivariate case in Huang (1992) and Drees and Huang (1998). Let $\theta_{0}$ denote the true parameter value, $\ell=\ell(\cdot\,;\theta_{0})$, and let $g=(g_{1},\ldots,g_{q})^{T}:[0,1]^{d}\rightarrow\mathbb{R}^{q}$ with $q\geq p$ denote a column vector of integrable functions. In Einmahl et al.
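As an aside for the reader wishing to experiment, the rank-based estimator in (3.1) is straightforward to implement. The sketch below (Python with NumPy; it is not part of the paper and the function name is ours) evaluates $\widehat{\ell}_{n,k}$ at a single point $\bm{x}$:

```python
import numpy as np

def empirical_stdf(X, k, x):
    """Rank-based estimator of the stable tail dependence function, eq. (3.1).

    X : (n, d) array of observations with continuous margins,
    k : intermediate sequence value (k -> infinity, k/n -> 0),
    x : (d,) evaluation point in [0, 1]^d.
    """
    n, d = X.shape
    # R[i, j] = rank of X[i, j] among X[:, j] (ranks start at 1)
    R = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    # indicator of the union event in (3.1), then average over the k scaling
    exceed = (R > n + 0.5 - k * np.asarray(x)).any(axis=1)
    return exceed.sum() / k
```

For comonotone data the estimate is close to $\max_j x_j$, and for independent components it is close to $x_1+\cdots+x_d$, in line with the extreme cases of a stable tail dependence function.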
(2012), an M-estimator of $\theta_{0}$ is defined by $$\widehat{\theta}^{\prime}_{n}\coloneqq\operatorname*{arg\,min}_{\theta\in\Theta}\sum_{m=1}^{q}\left(\int_{[0,1]^{d}}g_{m}(\bm{x})\left\{\widehat{\ell}_{n,k}(\bm{x})-\ell(\bm{x};\theta)\right\}\,\mathrm{d}\bm{x}\right)^{2}.$$ (3.2) Under suitable conditions, the estimator $\widehat{\theta}^{\prime}_{n}$ is consistent and asymptotically normal. The use of ranks via the nonparametric estimator in (3.1) makes it possible to avoid fitting a model to the (tails of the) marginal distributions. In fact, the only assumption on $F$, the existence of the stable tail dependence function $\ell$ in (2.2), is even weaker than the assumption that $F$ belongs to the max-domain of attraction of a max-stable distribution. However, the approach is ill-adapted to the spatial setting, where data are gathered from dozens of locations. In high dimensions, the computation of $\widehat{\theta}^{\prime}_{n}$ becomes infeasible due to the presence of $d$-dimensional integrals in the objective function in (3.2). Akin to composite likelihood methods, we opt for a pairwise approach, minimizing over quadratic forms of vectors of two-dimensional integrals. Let $q$ represent the number of pairs of locations that we wish to take into account, so that $p\leq q\leq d(d-1)/2$. Let $\pi$ be the function from $\{1,\dots,q\}$ to $\{1,\dots,d\}^{2}$ that describes these pairs, that is, for $m\in\{1,\ldots,q\}$, we have $\pi(m)=(\pi_{1}(m),\pi_{2}(m))=(u,v)$ with $1\leq u<v\leq d$. In the spatial setting (cf. Section 2.2), the indices $u$ and $v$ correspond to locations $\bm{s}_{u}$ and $\bm{s}_{v}$ respectively.
The bivariate margins of the stable tail dependence function $\ell(\cdot\,;\theta)$ and the nonparametric estimator in (3.1) are given by $$\ell_{\pi(m)}(x_{\pi_{1}(m)},x_{\pi_{2}(m)};\theta)=\ell_{uv}(x_{u},x_{v};\theta)\coloneqq\ell(0,\ldots,0,x_{u},0,\ldots,0,x_{v},0,\ldots,0;\theta),$$ $$\widehat{\ell}_{n,k,\pi(m)}(x_{\pi_{1}(m)},x_{\pi_{2}(m)})=\widehat{\ell}_{n,k,uv}(x_{u},x_{v})\coloneqq\widehat{\ell}_{n,k}(0,\ldots,0,x_{u},0,\ldots,0,x_{v},0,\ldots,0),$$ respectively. Consider the random $q\times 1$ column vector $$L_{n,k}(\theta)\coloneqq\left(\int_{[0,1]^{2}}\left\{\widehat{\ell}_{n,k,\pi(m)}(x_{\pi_{1}(m)},x_{\pi_{2}(m)})-\ell_{\pi(m)}(x_{\pi_{1}(m)},x_{\pi_{2}(m)};\theta)\right\}\,\mathrm{d}x_{\pi_{1}(m)}\,\mathrm{d}x_{\pi_{2}(m)}\right)_{m=1}^{q}.$$ Let $\widehat{\Omega}_{n}\in\mathbb{R}^{q\times q}$ be a symmetric, positive definite, possibly random matrix. Define $$f_{n,k,\widehat{\Omega}_{n}}(\theta)\coloneqq L_{n,k}(\theta)^{T}\,\widehat{\Omega}_{n}\,L_{n,k}(\theta),\qquad\theta\in\Theta.$$ The pairwise M-estimator of $\theta$ is defined as $$\widehat{\theta}_{n}\coloneqq\operatorname*{arg\,min}_{\theta\in\Theta}f_{n,k,\widehat{\Omega}_{n}}(\theta)=\operatorname*{arg\,min}_{\theta\in\Theta}\left\{L_{n,k}(\theta)^{T}\,\widehat{\Omega}_{n}\,L_{n,k}(\theta)\right\}.$$ (3.3) The simplest choice for $\widehat{\Omega}_{n}$ is just the $q\times q$ identity matrix $I_{q}$, yielding $$f_{n,k,I_{q}}(\theta)=\sum_{(u,v)}\left(\int_{[0,1]^{2}}\left\{\widehat{\ell}_{n,k,uv}(x_{u},x_{v})-\ell_{uv}(x_{u},x_{v};\theta)\right\}\,\mathrm{d}x_{u}\,\mathrm{d}x_{v}\right)^{2}.$$ (3.4) Note the similarity of this objective function with the one for the original M-estimator in equation (3.2).
The role of the matrix $\widehat{\Omega}_{n}$ is to allow data-driven weights to be assigned when quantifying the size of the vector of discrepancies $L_{n,k}(\theta)$ via a generalized Euclidean norm. As we will see in Section 3.2, a judicious choice of this matrix makes it possible to minimize the asymptotic variance. Comparison with Einmahl et al. (2012). A natural question that arises is whether the quality of estimation decreases when making the step from the $d$-dimensional estimator $\widehat{\theta}^{\prime}_{n}$ in (3.2) to the pairwise estimator $\widehat{\theta}_{n}$ in (3.3). We will demonstrate for the multivariate logistic model that this is not the case; necessarily, the comparison is made in a dimension where $\widehat{\theta}^{\prime}_{n}$ can still be computed. The $d$-dimensional logistic model has stable tail dependence function $$\ell(\bm{x};\theta)=\left(x_{1}^{1/\theta}+\ldots+x_{d}^{1/\theta}\right)^{\theta},$$ with $\theta\in[0,1]$. We simulate 200 samples of size $n=1500$ from the logistic model in dimension $d=5$ with parameter value $\theta=0.5$ and we assess the quality of our estimates via the (empirical) bias and root mean squared error (RMSE) for $k\in\{40,80,\ldots,320\}$. The dashed lines in Figure 3 show the bias and RMSE for the M-estimator of Einmahl et al. (2012) with weight function $g\equiv 1$. The results are the same as in Einmahl et al. (2012, Figure 1). The solid lines show the bias and RMSE for the pairwise M-estimator with $q=9$ and $\widehat{\Omega}_{n}$ as in Section 3.2. We see that the pairwise estimator performs somewhat better in terms of bias and also has the lower minimal RMSE, attained at $k=160$. 3.2 Asymptotic results and choice of the weight matrix We show consistency and asymptotic normality of the rank-based pairwise M-estimator. Moreover, we provide a data-driven choice for $\widehat{\Omega}_{n}$ which minimizes the asymptotic covariance matrix of the limiting normal distribution.
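To make the pairwise objective concrete, the following is a minimal numerical sketch (ours, not the authors' implementation) of $f_{n,k,I_q}$ in (3.4) for the bivariate margins of the logistic model, $\ell_{uv}(x_u,x_v;\theta)=(x_u^{1/\theta}+x_v^{1/\theta})^\theta$, with all pairs included and the double integrals approximated by a midpoint rule rather than computed exactly:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def pairwise_objective(X, k, theta, grid=20):
    """f_{n,k,I_q}(theta) of eq. (3.4) for the logistic model, all pairs,
    integrals over [0,1]^2 approximated on a grid x grid midpoint mesh."""
    n, d = X.shape
    R = np.argsort(np.argsort(X, axis=0), axis=0) + 1   # column-wise ranks
    g = (np.arange(grid) + 0.5) / grid                  # midpoints of [0, 1]
    xu, xv = np.meshgrid(g, g, indexing="ij")
    total = 0.0
    for u in range(d - 1):
        for v in range(u + 1, d):
            # empirical pairwise stdf on the grid, cf. eq. (3.1)
            exc = ((R[:, u, None, None] > n + 0.5 - k * xu)
                   | (R[:, v, None, None] > n + 0.5 - k * xv))
            lhat = exc.sum(axis=0) / k
            lpar = (xu ** (1 / theta) + xv ** (1 / theta)) ** theta
            total += np.mean(lhat - lpar) ** 2          # midpoint-rule integral, squared
    return total
```

Minimizing this function over $\theta$, for instance with `scipy.optimize.minimize_scalar` on a bounded interval, gives the pairwise estimator with identity weights, i.e. the first step of the two-step procedure described in Section 3.2.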
Results for the construction of confidence regions and hypothesis tests are presented as well. A quantity related to the stable tail dependence function $\ell$ is the exponent measure $\Lambda$, which is a measure on $[0,\infty]^{d}\setminus\{(\infty,\ldots,\infty)\}$ determined by $$\Lambda(\{\bm{w}\in[0,\infty]^{d}:w_{1}\leq x_{1}\text{ or }\ldots\text{ or }w_{d}\leq x_{d}\})=\ell(\bm{x}),\qquad\bm{x}\in[0,\infty)^{d}.$$ Let $W_{\Lambda}$ be a mean-zero Gaussian process, indexed by the Borel sets of $[0,\infty]^{d}\setminus\{(\infty,\ldots,\infty)\}$ and with covariance function $$\mathbb{E}[W_{\Lambda}(A_{1})\,W_{\Lambda}(A_{2})]=\Lambda(A_{1}\cap A_{2}),$$ where $A_{1}$, $A_{2}$ are Borel sets in $[0,\infty]^{d}\setminus\{(\infty,\ldots,\infty)\}$. For $\bm{x}\in[0,\infty)^{d}$, define $$W_{\ell}(\bm{x})=W_{\Lambda}(\{\bm{w}\in[0,\infty]^{d}\setminus\{(\infty,\ldots,\infty)\}:w_{1}\leq x_{1}\text{ or }\ldots\text{ or }w_{d}\leq x_{d}\}),$$ $$W_{\ell,j}(x_{j})=W_{\ell}(0,\ldots,0,x_{j},0,\ldots,0),\qquad j=1,\ldots,d.$$ Let $\dot{\ell}_{j}$ be the partial derivative of $\ell$ with respect to $x_{j}$, and define $$B(\bm{x})\coloneqq W_{\ell}(\bm{x})-\sum_{j=1}^{d}\dot{\ell}_{j}(\bm{x})\,W_{\ell,j}(x_{j}),\qquad\bm{x}\in[0,\infty)^{d}.$$ For $m\in\{1,\ldots,q\}$ with $\pi(m)=(\pi_{1}(m),\pi_{2}(m))=(u,v)$, put $$B_{\pi(m)}(x_{\pi_{1}(m)},x_{\pi_{2}(m)})=B_{uv}(x_{u},x_{v})\coloneqq B(0,\ldots,0,x_{u},0,\ldots,0,x_{v},0,\ldots,0).$$ Also define the mean-zero random column vector $$\widetilde{B}\coloneqq\left(\int_{[0,1]^{2}}B_{\pi(m)}(x_{\pi_{1}(m)},x_{\pi_{2}(m)})\,\mathrm{d}x_{\pi_{1}(m)}\,\mathrm{d}x_{\pi_{2}(m)}\right)_{m=1}^{q}.$$ The law of $\widetilde{B}$ is zero-mean Gaussian and its covariance matrix $\Gamma(\theta)\in\mathbb{R}^{q\times q}$ depends on $\theta$ via the model assumption $\ell=\ell(\cdot\,;\theta)$.
For pairs $\pi(m)=(u,v)$ and $\pi(m^{\prime})=(u^{\prime},v^{\prime})$, we can obtain the $(m,m^{\prime})$-th entry of $\Gamma(\theta)$ by $$\Gamma_{(m,m^{\prime})}(\theta)=\mathbb{E}[\widetilde{B}_{m}\widetilde{B}_{m^{\prime}}]=\int_{[0,1]^{4}}\mathbb{E}\left[B_{uv}(x_{u},x_{v})\,B_{u^{\prime}v^{\prime}}(x_{u^{\prime}},x_{v^{\prime}})\right]\,\mathrm{d}x_{u}\,\mathrm{d}x_{v}\,\mathrm{d}x_{u^{\prime}}\,\mathrm{d}x_{v^{\prime}}.$$ (3.5) Define $\psi:\Theta\rightarrow\mathbb{R}^{q}$ by $$\psi(\theta)\coloneqq\left(\int_{[0,1]^{2}}\ell_{\pi(m)}(x_{\pi_{1}(m)},x_{\pi_{2}(m)};\theta)\,\mathrm{d}x_{\pi_{1}(m)}\,\mathrm{d}x_{\pi_{2}(m)}\right)_{m=1}^{q}.$$ (3.6) Assuming $\theta$ is an interior point of $\Theta$ and $\psi$ is differentiable in $\theta$, let $\dot{\psi}(\theta)\in\mathbb{R}^{q\times p}$ denote the total derivative of $\psi$ at $\theta$. Theorem 3.1 (Existence, uniqueness and consistency). Let $\{\ell(\cdot\,;\theta):\theta\in\Theta\}$, $\Theta\subset\mathbb{R}^{p}$, be a parametric family of $d$-variate stable tail dependence functions and let $(\pi(m))_{m=1}^{q}$, with $p\leq q\leq d(d-1)/2$, be $q$ distinct pairs in $\{1,\ldots,d\}^{2}$ such that the map $\psi$ in (3.6) is a homeomorphism from $\Theta$ to $\psi(\Theta)$. Let the $d$-variate distribution function $F$ have continuous margins and stable tail dependence function $\ell(\,\cdot\,;\theta_{0})$ for some interior point $\theta_{0}\in\Theta$. Let $\bm{X}_{1},\ldots,\bm{X}_{n}$ be an iid sample from $F$. Assume also that (C1) $\psi$ is twice continuously differentiable on a neighbourhood of $\theta_{0}$ and $\dot{\psi}(\theta_{0})$ is of full rank; (C2) there exists a symmetric, positive definite matrix $\Omega$ such that $\widehat{\Omega}_{n}\overset{\mathbb{P}}{\rightarrow}\Omega$. Then with probability tending to one, the minimizer $\widehat{\theta}_{n}$ of $f_{n,k,\widehat{\Omega}_{n}}$ exists and is unique.
Moreover, $$\widehat{\theta}_{n}\overset{\mathbb{P}}{\rightarrow}\theta_{0},\qquad\text{ as }n\rightarrow\infty.$$ Let $\Delta_{d-1}=\{\bm{w}\in[0,1]^{d}:w_{1}+\cdots+w_{d}=1\}$ denote the unit simplex in $\mathbb{R}^{d}$. Theorem 3.2 (Asymptotic normality). If in addition to the assumptions of Theorem 3.1 (C3) $t^{-1}\mathbb{P}[1-F_{1}(X_{11})\leq tx_{1}\textnormal{ or }\ldots\textnormal{ or }1-F_{d}(X_{1d})\leq tx_{d}]-\ell(\bm{x};\theta_{0})=O(t^{\alpha})$ uniformly in $\bm{x}\in\Delta_{d-1}$ as $t\downarrow 0$ for some $\alpha>0$; (C4) $k=o(n^{2\alpha/(1+2\alpha)})$ and $k\rightarrow\infty$ as $n\rightarrow\infty$, then $$\sqrt{k}\,(\widehat{\theta}_{n}-\theta_{0})\xrightarrow{d}\mathcal{N}_{p}(0,M(\theta_{0}))$$ where, for $\theta\in\Theta$ such that $\dot{\psi}(\theta)$ is of full rank, $$M(\theta)\coloneqq\bigl(\dot{\psi}(\theta)^{T}\,\Omega\,\dot{\psi}(\theta)\bigr)^{-1}\,\dot{\psi}(\theta)^{T}\,\Omega\,\Gamma(\theta)\,\Omega\,\dot{\psi}(\theta)\,\bigl(\dot{\psi}(\theta)^{T}\,\Omega\,\dot{\psi}(\theta)\bigr)^{-1}.$$ (3.7) The proofs of Theorems 3.1 and 3.2 are deferred to Appendix A. An asymptotically optimal choice for the random weight matrix $\widehat{\Omega}_{n}$ would be one for which the limit $\Omega$ minimizes the asymptotic covariance matrix in (3.7) with respect to the positive semi-definite partial ordering on the set of symmetric matrices. This minimization problem shows up in other contexts as well, and its solution is well known: provided $\Gamma(\theta)$ is invertible, the minimum is attained at $\Omega=\Gamma(\theta)^{-1}$, the matrix $M(\theta)$ simplifying to $$M_{\text{opt}}(\theta)=\bigl(\dot{\psi}(\theta)^{T}\,\Gamma(\theta)^{-1}\,\dot{\psi}(\theta)\bigr)^{-1},$$ (3.8) see for instance Abadir and Magnus (2005, page 339). However, this choice of the weight matrix requires the knowledge of $\theta$, which is unknown.
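The optimality claim can be checked numerically in a few lines. The snippet below (illustrative only, with arbitrary placeholder matrices standing in for $\dot{\psi}(\theta)$ and $\Gamma(\theta)$) evaluates the sandwich formula (3.7) and verifies both that $\Omega=\Gamma(\theta)^{-1}$ attains (3.8) and that $M(\theta)-M_{\text{opt}}(\theta)$ is positive semi-definite for the identity weight matrix:

```python
import numpy as np

def sandwich(psi_dot, Omega, Gamma):
    """Asymptotic covariance M(theta) of eq. (3.7) for a given weight matrix."""
    A = psi_dot.T @ Omega @ psi_dot
    Ainv = np.linalg.inv(A)
    return Ainv @ psi_dot.T @ Omega @ Gamma @ Omega @ psi_dot @ Ainv

rng = np.random.default_rng(0)
q, p = 6, 2
psi_dot = rng.normal(size=(q, p))                 # placeholder, full rank a.s.
S = rng.normal(size=(q, q))
Gamma = S @ S.T + q * np.eye(q)                   # placeholder SPD covariance
M_opt = np.linalg.inv(psi_dot.T @ np.linalg.inv(Gamma) @ psi_dot)   # eq. (3.8)
M_id = sandwich(psi_dot, np.eye(q), Gamma)        # identity weights
# Omega = Gamma^{-1} attains the optimum; M_id - M_opt is positive semi-definite
assert np.allclose(sandwich(psi_dot, np.linalg.inv(Gamma), Gamma), M_opt)
assert np.linalg.eigvalsh(M_id - M_opt).min() > -1e-9
```

The simplification is also easy to see algebraically: with $\Omega=\Gamma^{-1}$, the middle factor of (3.7) collapses to $\dot{\psi}^{T}\Gamma^{-1}\dot{\psi}$, so the sandwich reduces to its inverse.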
The solution consists of computing the optimal weight matrix evaluated at a preliminary estimator of $\theta$. For $\theta\in\Theta$, let $H_{\theta}$ be the spectral measure related to $\ell(\cdot\,;\theta)$ (de Haan and Resnick, 1977; Resnick, 1987): it is a finite measure defined on the unit simplex $\Delta_{d-1}$ and it satisfies $$\ell(\bm{x};\theta)=\int_{\Delta_{d-1}}\max_{j=1,\ldots,d}\left\{w_{j}x_{j}\right\}H_{\theta}(\mathrm{d}\bm{w}),\qquad\bm{x}\in[0,\infty)^{d}.$$ Corollary 3.3 (Optimal weight matrix). In addition to the assumptions of Theorem 3.2, assume the following: (C5) for all $\theta$ in the interior of $\Theta$, the matrix $\Gamma(\theta)$ in (3.5) has full rank; (C6) the mapping $\theta\mapsto H_{\theta}$ is weakly continuous at $\theta_{0}$. Assume $\widehat{\theta}_{n}^{(0)}$ converges in probability to $\theta_{0}$ and let $\widehat{\theta}_{n}$ be the pairwise M-estimator with weight matrix $\widehat{\Omega}_{n}=\Gamma(\widehat{\theta}_{n}^{(0)})^{-1}$. Then, with $M_{\textnormal{opt}}$ as in (3.8), we have $$\sqrt{k}(\widehat{\theta}_{n}-\theta_{0})\xrightarrow{d}\mathcal{N}_{p}(0,M_{\textnormal{opt}}(\theta_{0})),\qquad n\to\infty.$$ For any choice of the positive definite matrix $\Omega$ in (3.7), the difference $M(\theta)-M_{\textnormal{opt}}(\theta)$ is positive semi-definite. In view of Corollary 3.3, we propose the following two-step procedure: 1. Compute the pairwise M-estimator $\widehat{\theta}_{n}^{(0)}$ with the weight matrix equal to the identity matrix, i.e., by minimizing $f_{n,k,I_{q}}$ in (3.4). 2. Calculate the pairwise M-estimator $\widehat{\theta}_{n}$ by minimizing $f_{n,k,\widehat{\Omega}_{n}}$ with $\widehat{\Omega}_{n}=\Gamma^{-1}(\widehat{\theta}^{(0)}_{n})$. We will see in Section 4.2 that this choice of $\widehat{\Omega}_{n}$ indeed reduces the estimation error. Calculating $M(\theta)$ can be a challenging task.
The matrix $\Gamma(\theta)$ can become quite large, since for a $d$-dimensional model the maximal number of pairs is $d(d-1)/2$. In practice we will choose a smaller number of pairs: we will see in Section 4.2 that this may even have a positive influence on the quality of our estimator. The entries of $\Gamma(\theta)$ are four-dimensional integrals of $\mathbb{E}[B_{uv}(x_{u},x_{v})B_{u^{\prime}v^{\prime}}(x_{u^{\prime}},x_{v^{\prime}})]$ for $\pi(m)=(u,v)$ and $\pi(m^{\prime})=(u^{\prime},v^{\prime})$, $m=1,\ldots,q$. Appendix B shows how to express the entries of $\Gamma(\theta)$ in terms of the function $\ell$ and gives details on the implementation. Finally, we present results that can be used for the construction of confidence regions and hypothesis tests. Corollary 3.4. If the assumptions from Corollary 3.3 are satisfied, then $$k(\widehat{\theta}_{n}-\theta_{0})^{T}M(\widehat{\theta}_{n})^{-1}(\widehat{\theta}_{n}-\theta_{0})\overset{d}{\rightarrow}\chi_{p}^{2},\qquad\text{ as }n\rightarrow\infty.$$ Let $r<p$ and $\theta=(\theta_{1},\theta_{2})\in\Theta$ with $\theta_{1}\in\mathbb{R}^{p-r}$ and $\theta_{2}\in\mathbb{R}^{r}$. Suppose we want to test $\theta_{2}=\theta^{*}_{2}$ against $\theta_{2}\neq\theta^{*}_{2}$. Write $\widehat{\theta}_{n}=(\widehat{\theta}_{1n},\widehat{\theta}_{2n})$ and let $M_{2}(\theta)$ be the $r\times r$ matrix corresponding to the lower right corner of $M(\theta)$. Corollary 3.5. If the assumptions from Corollary 3.3 are satisfied and if $\theta_{0}=(\theta_{1},\theta^{*}_{2})\in\Theta$ for some $\theta_{1}$, then $$k(\widehat{\theta}_{2n}-\theta^{*}_{2})^{T}M_{2}(\widehat{\theta}_{1n},\theta^{*}_{2})^{-1}(\widehat{\theta}_{2n}-\theta^{*}_{2})\overset{d}{\rightarrow}\chi_{r}^{2}.$$ We will not prove these corollaries here, since their proofs are straightforward extensions of those in Einmahl et al. (2012, Corollary 4.3; Corollary 4.4).
4 Spatial models 4.1 Theory and definitions The isotropic Brown–Resnick process on $\mathcal{S}\subset\mathbb{R}^{2}$ is given by $$Z(\bm{s})=\max_{i\in\mathbb{N}}\xi_{i}\exp{\left\{\epsilon_{i}(\bm{s})-\gamma(% \bm{s})\right\}},\qquad\bm{s}\in\mathcal{S},$$ where $\{\xi_{i}\}_{i\geq 1}$ is a Poisson process on $(0,\infty]$ with intensity measure $\xi^{-2}\,\mathrm{d}\xi$ and $\{\epsilon_{i}(\cdot)\}_{i\geq 1}$ are independent copies of a Gaussian process with stationary increments, $\epsilon(0)=0$, variance $2\gamma(\cdot)$, and semi-variogram $\gamma(\cdot)$. The process with $\gamma(\bm{s})=(||\bm{s}||/\rho)^{\alpha}$ appears as the only limit of (rescaled) maxima of stationary and isotropic Gaussian random fields (Kabluchko et al., 2009); here $\rho>0$ and $0<\alpha\leq 2$. Since isotropy is not a reasonable assumption for most spatial applications, we follow Blanchet and Davison (2011) and Engelke et al. (2014) and introduce a transformation matrix $V$ defined by $$V\coloneqq V(\beta,c)\coloneqq\begin{bmatrix}\cos{\beta}&-\sin{\beta}\\ c\sin{\beta}&c\cos{\beta}\end{bmatrix},\qquad\beta\in[0,\pi/2),\,c>0,$$ and a transformed space $\mathcal{S}^{\prime}=\{V^{-1}\bm{s}:\bm{s}\in\mathcal{S}\}$, so that an isotropic process on $\mathcal{S}$ is transformed to an anisotropic process on $\mathcal{S}^{\prime}$. 
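The transformation is straightforward to implement. A small sketch (assuming `numpy`; the function names are ours), with $\gamma(\bm{s})=(\lVert\bm{s}\rVert/\rho)^{\alpha}$ as above:

```python
import numpy as np

def V(beta, c):
    """Anisotropy transformation matrix V(beta, c)."""
    return np.array([[np.cos(beta), -np.sin(beta)],
                     [c * np.sin(beta), c * np.cos(beta)]])

def gamma_V(s_prime, beta, c, rho, alpha):
    """Semi-variogram of the transformed process: gamma(V s')
    with the isotropic gamma(s) = (||s|| / rho)^alpha."""
    h = V(beta, c) @ np.asarray(s_prime, dtype=float)
    return (np.linalg.norm(h) / rho) ** alpha
```

For $\beta=0$ and $c=1$, $V$ is the identity matrix and the isotropic model is recovered; for $\alpha=2$, `gamma_V` reduces to the quadratic form ${\bm{s}'}^{T}(V^{T}V/\rho^{2})\bm{s}'$.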
For $\bm{s}^{\prime}\in\mathcal{S}^{\prime}$ we focus on the anisotropic Brown–Resnick process $$Z_{V}(\bm{s}^{\prime})\coloneqq Z(V\bm{s}^{\prime})=\max_{i\in\mathbb{N}}\xi_{% i}\exp{\left\{\epsilon_{i}(V\bm{s}^{\prime})-\gamma(V\bm{s}^{\prime})\right\}},$$ (4.1) whose semi-variogram is defined by $$\gamma_{V}(\bm{s}^{\prime})\coloneqq\gamma(V\bm{s}^{\prime})=\left[{\bm{s}^{% \prime}}^{T}\frac{V^{T}V}{\rho^{2}}\bm{s}^{\prime}\right]^{\alpha/2}.$$ The pairwise stable tail dependence function for a pair $(u,v)$, corresponding to locations $(\bm{s}^{\prime}_{u},\bm{s}^{\prime}_{v})$, is given by $$\ell_{uv}(x_{u},x_{v})=x_{u}\Phi\bigg{(}\frac{a_{uv}}{2}+\frac{1}{a_{uv}}\log{% \frac{x_{u}}{x_{v}}}\bigg{)}+x_{v}\Phi\bigg{(}\frac{a_{uv}}{2}+\frac{1}{a_{uv}% }\log{\frac{x_{v}}{x_{u}}}\bigg{)},$$ where $a_{uv}\coloneqq\sqrt{2\gamma_{V}(\bm{s}^{\prime}_{u}-\bm{s}^{\prime}_{v})}$ and $\Phi$ is the standard normal distribution function. Observe that the choice $\alpha=2$ leads to $$a^{2}_{uv}=2\gamma(V(\bm{s}^{\prime}_{u}-\bm{s}^{\prime}_{v}))=(\bm{s}^{\prime% }_{u}-\bm{s}^{\prime}_{v})^{T}\Sigma^{-1}(\bm{s}^{\prime}_{u}-\bm{s}^{\prime}_% {v}),\quad\text{ for some }\Sigma=\begin{bmatrix}\sigma_{11}&\sigma_{12}\\ \sigma_{12}&\sigma_{22}\end{bmatrix},$$ where $\Sigma$ represents any valid $2\times 2$ covariance matrix. This submodel is known as the Gaussian extreme value process or simply the Smith model (Smith, 1990). We will present simulation studies for processes in the domain of attraction, in the sense of (2.2), of both the Smith model and the anisotropic Brown–Resnick process. To calculate the weight matrix $\Gamma(\theta)^{-1}$, we will need to compute integrals over the four-dimensional margins of the stable tail dependence function, see (3.5) and Appendix B. In Huser and Davison (2013) the following representation is given for $\ell(x_{1},\ldots,x_{d};\theta)$ for general $d$. 
If $Z_{V}$ is defined as in (4.1) then for $\bm{s}_{u_{1}},\ldots,\bm{s}_{u_{d}}\in\mathbb{R}^{2}$ $$\ell_{u_{1},\ldots,u_{d}}(x_{1},\ldots,x_{d})=\sum_{i=1}^{d}x_{i}\Phi_{d-1}(% \eta^{(i)}(1/x);R^{(i)}),$$ where $$\displaystyle\eta^{(i)}(x)$$ $$\displaystyle=(\eta_{1}^{(i)}(x_{1},x_{i}),\ldots,\eta_{i-1}^{(i)}(x_{i-1},x_{% i}),\eta_{i+1}^{(i)}(x_{i+1},x_{i}),\ldots,\eta_{d}^{(i)}(x_{d},x_{i}))$$ $$\displaystyle\in\mathbb{R}^{d-1},$$ $$\displaystyle\eta_{j}^{(i)}(x_{j},x_{i})$$ $$\displaystyle=\sqrt{\frac{\gamma_{V}(s_{u_{i}}-s_{u_{j}})}{2}}+\frac{\log{(x_{% j}/x_{i})}}{\sqrt{2\gamma_{V}(s_{u_{i}}-s_{u_{j}})}}$$ $$\displaystyle\in\mathbb{R},$$ and $R^{(i)}\in\mathbb{R}^{(d-1)\times(d-1)}$ is the correlation matrix with entries $$R^{(i)}_{jk}=\frac{\gamma_{V}(s_{u_{i}}-s_{u_{j}})+\gamma_{V}(s_{u_{i}}-s_{u_{% k}})-\gamma_{V}(s_{u_{j}}-s_{u_{k}})}{2\sqrt{\gamma_{V}(s_{u_{i}}-s_{u_{j}})% \gamma_{V}(s_{u_{i}}-s_{u_{k}})}},\qquad j,k=1,\ldots,d;\,j,k\neq i.$$ 4.2 Simulation studies In order to study the performance of the pairwise M-estimator when the underlying distribution function $F$ satisfies (2.2) for a function $\ell$ corresponding to the max-stable models described before, we generate random samples from Brown–Resnick processes and Smith models perturbed with additive noise. If $\bm{Z}=(Z_{1},\ldots,Z_{d})$ is a max-stable process observed at $d$ locations, then we consider $$X_{j}=Z_{j}+\epsilon_{j},\qquad j=1,\ldots,d,$$ where $\epsilon_{j}$ are independent half normally distributed random variables, corresponding to the absolute value of a normally distributed random variable with standard deviation $1/2$. All simulations are done in R (R Core Team, 2013). Realizations of $Z$ are simulated using the SpatialExtremes package (Ribatet et al., 2013). Perturbed max-stable processes on a large grid. Assume that we have $d=100$ locations on a $10\times 10$ unit distance grid. 
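The representation above lends itself to direct implementation. A sketch (assuming `scipy`; the function names and the `gamma` matrix of semi-variogram values $\gamma_{V}(\bm{s}_{u_{i}}-\bm{s}_{u_{j}})$ are ours), with the pairwise formula from Section 4.1 serving as a cross-check for $d=2$:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def ell_pair(xu, xv, a):
    """Pairwise stable tail dependence function with a = sqrt(2 gamma_V)."""
    return (xu * norm.cdf(a / 2 + np.log(xu / xv) / a)
            + xv * norm.cdf(a / 2 + np.log(xv / xu) / a))

def ell_multi(x, gamma):
    """d-variate ell via the Huser-Davison representation.
    x: positive coordinates; gamma[i, j] = gamma_V(s_i - s_j)."""
    d = len(x)
    total = 0.0
    for i in range(d):
        idx = [j for j in range(d) if j != i]
        # eta^{(i)} is evaluated at 1/x, which flips the log ratio
        eta = np.array([np.sqrt(gamma[i, j] / 2)
                        + np.log(x[i] / x[j]) / np.sqrt(2 * gamma[i, j])
                        for j in idx])
        # correlation matrix R^{(i)} over the indices j, k != i
        R = np.array([[(gamma[i, j] + gamma[i, k] - gamma[j, k])
                       / (2 * np.sqrt(gamma[i, j] * gamma[i, k]))
                       for k in idx] for j in idx])
        total += x[i] * multivariate_normal.cdf(
            eta, mean=np.zeros(d - 1), cov=R)
    return total
```

Any implementation should satisfy the generic bounds $\max_{j}x_{j}\leq\ell(\bm{x})\leq x_{1}+\cdots+x_{d}$, which gives a cheap sanity check for $d>2$.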
We simulate 500 samples of size $n=500$ from the perturbed Smith model with parameters $$\Sigma=\begin{bmatrix}1.0&0.5\\ 0.5&1.5\end{bmatrix},$$ and from a perturbed anisotropic Brown–Resnick process with parameters $\alpha=1$, $\rho=3$, $\beta=0.5$ and $c=0.5$. Instead of estimating $\rho$, $\beta$, and $c$ directly, we estimate the three parameters of the matrix $$\mathcal{T}=\begin{bmatrix}\tau_{11}&\tau_{12}\\ \tau_{12}&\tau_{22}\end{bmatrix}=\rho^{-2}\,V(\beta,c)^{T}V(\beta,c).$$ In practice, this parametrization, which is in line with that of the Smith model, often yields better results. We study the bias and root mean squared error (RMSE) for $k\in\{25,50,75,100\}$. We compare the estimators for two sets of pairs: one containing all pairs ($q=4950$) and one containing only pairs of neighbouring locations ($q=342$). Although the first option may sound time-consuming, estimation of the parameters for one sample takes about 20 seconds for the Smith model and less than two minutes for the anisotropic Brown–Resnick process. We let the weight matrix $\Omega$ be the $q\times q$ identity matrix, since for so many pairs a data-driven computation of the optimal weight matrix is too time-consuming. Figure 10 shows the bias and RMSE of $(\sigma_{11},\sigma_{22},\sigma_{12})$ for the Smith model. We see that great improvements are achieved by using only pairs of neighbouring locations, and that the resulting estimator performs well. Using all pairs causes the parameters to have a large positive bias, which translates into a high RMSE. In general, distant pairs often exhibit less dependence and hence carry less information about $\ell$ and its parameters. Observe that small values of $k$ are preferable, i.e., $k=25$ or $k=50$. Figure 19 shows the bias and RMSE of the pairwise M-estimators of $(\alpha,\rho,\beta,c)$ for the anisotropic Brown–Resnick process. We see again that using only pairs of neighbouring locations improves the quality of the estimation.
The corresponding estimators perform well for the estimation of $\alpha$, $\beta$, and $c$. The weaker performance when estimating $\rho$ seems to be inherent to the Brown–Resnick process and appears regardless of the estimation procedure: see for example Engelke et al. (2014) or Wadsworth and Tawn (2013), who both report a positive bias of $\rho$ for small sample sizes. Compared to those for the Smith model, the values of $k$ for which the estimation error is smallest are higher, i.e., $k=50$ or $k=75$. A perturbed Brown–Resnick process on a small grid with optimal weight matrix. We consider $d=12$ locations on an equally spaced unit distance $4\times 3$ grid. We simulate 500 samples of size $n=1000$ from an anisotropic Brown–Resnick process with parameters $\alpha=1.5$, $\rho=1$, $\beta=0.25$ and $c=1.5$. We study the bias, standard deviation, and RMSE for $k\in\{25,75,125\}$. Three estimation methods are compared: one using all pairs ($q=66$), one using only pairs of neighbouring locations ($q=29$), and one using optimal weight matrices chosen according to the two-step procedure described after Corollary 3.3, based on the $29$ pairs of neighbouring locations. In line with the theory, the weighted estimators have lower (or equal) standard deviation (and RMSE) than the unweighted estimators. The difference is clearest for $\alpha$ and $\rho$ at small $k$. Comparison with Engelke et al. (2014). To compare the pairwise M-estimator with the one from Engelke et al. (2014), we consider the setting used in the simulation study of that paper: we simulate 500 samples of size $8000$ of the univariate Brown–Resnick process on an equidistant grid on the interval $[0,3]$ with step size $0.1$. The parameters of the model are $(\alpha,\rho)=(1,1)$. We estimate the unknown parameters for $k=500$ and 140 pairs, so that the locations of the selected pairs are at most a distance $0.5$ apart.
We use the identity weight matrix, since in this particular setting the weight matrix is very large and, as far as we could tell from some preliminary experiments, it leads to only a small reduction in estimation error. Asymptotically, we see a reduction of the standard deviations of about $13\%$ for $\alpha$ and $3\%$ for $\rho$. In Figure 35 below, the results are presented in the form of boxplots, to facilitate comparison with Figure 2 in Engelke et al. (2014). Our procedure turns out to perform equally well for the estimation of $\alpha$ and only slightly worse when estimating $\rho$. It should be kept in mind that, whereas the method in Engelke et al. (2014) is tailor-made for the Brown–Resnick process, our method is designed to work for general parametric models. 4.3 Application: speeds of wind gusts Using extreme value theory to estimate the frequency and magnitude of extreme wind events, or to estimate return levels for (extremely) long return periods, is not a novelty in the fields of meteorology and climatology. Numerous research papers published in the last 20–25 years have applied methods from extreme value theory to these estimation problems; see, for example, Karpa and Naess (2013); Ceppi et al. (2008); Palutikof et al. (1999) and the references therein. However, until very recently, all statistical approaches were univariate. Present-day scientific and computational advances facilitate the use of high-dimensional and spatial models. In Engelke et al. (2014) and Oesting et al. (2013), for instance, Brown–Resnick processes are used to model wind speed data. We consider a data set from the Royal Netherlands Meteorological Institute (KNMI), consisting of the daily maximal speeds of wind gusts, measured in 0.1 m/s. The data are observed at 35 weather stations in the Netherlands over the period from January 1, 1990 to May 16, 2012. The data set is freely available from http://www.knmi.nl/climatology/daily_data/selection.cgi.
Due to the strong influence of the sea on the wind speeds in the coastal area, we only consider the inland stations, from which we removed three stations with more than 1000 missing observations. The 22 stations thus obtained and the remaining amount of missing data per station are shown in the left panel of Figure 38. We aggregate the daily maxima to three-day maxima in order to minimize temporal dependence, and we restrict our observation period to the summer season (June, July and August) to obtain approximately identically distributed data. Finally, we use the “complete deletion approach” for the remaining missing data and obtain a data set with $n=563$ observations. We consider the function $\ell$ corresponding to the Brown–Resnick process (see Section 4.1) and first test, based on the $q=29$ pairs of stations that are at most 50 kilometers apart, whether the isotropic process suffices for these data. In the reparametrization introduced in Section 4.2, the case $\tau_{11}=\tau_{22}$ and $\tau_{12}=0$ corresponds to isotropy. Using Corollary 3.5 and the test statistic $$k\left(\hat{\tau}_{11}-\hat{\tau}_{22},\hat{\tau}_{12}\right)M_{2}\left(\hat{\alpha},\hat{\tau}_{11}+\hat{\tau}_{22},0,0\right)^{-1}\left(\hat{\tau}_{11}-\hat{\tau}_{22},\hat{\tau}_{12}\right)^{T},$$ with $k=50$, we obtain a value of $0.227$, corresponding to a $p$-value of 0.89, so we cannot reject the null hypothesis. We proceed to estimate the parameters of the isotropic Brown–Resnick process, using the optimal weight matrix chosen according to the two-step procedure described after Corollary 3.3. The estimates, with standard errors in parentheses, are $\hat{\alpha}=0.408$ $(0.171)$ and $\hat{\rho}=0.634$ $(0.344)$ for $k=50$. We see that the Smith model would not fit these data well, since $\hat{\alpha}$ is much smaller than $2$.
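As a sanity check on the reported numbers: the statistic tests two restrictions ($\tau_{11}=\tau_{22}$ and $\tau_{12}=0$), so it is compared to a $\chi^{2}_{2}$ distribution, whose survival function is simply $e^{-x/2}$; the $p$-value then follows directly.

```python
import math

stat = 0.227                      # reported isotropy test statistic
p_value = math.exp(-stat / 2.0)   # chi-square(2) upper tail probability
# close to the p-value of 0.89 reported in the text
```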
To visually assess the goodness-of-fit, we plot, in the right panel of Figure 38, the 29 nonparametric estimates of the extremal coefficient function $\ell(1,1)$ and the values computed from the model (the curve) against the estimated distances $a_{uv}=\sqrt{2\gamma_{V}(\bm{s}_{u}-\bm{s}_{v})}$ for the 29 pairs of stations. It is more in line with our M-estimator, which uses integration over $[0,1]^{2}$, to focus on the center $(1/2,1/2)$ instead of the vertex $(1,1)$ of the unit square. Hence, we use the homogeneity of $\ell$ to replace $\ell(1,1)$ with $2\ell(1/2,1/2)$ and then estimate the latter with $2\hat{\ell}_{n,k}(1/2,1/2)$, see (3.1). The nonparametric estimates of $\ell(1,1)$ in the figure are obtained in this way. Observe that these estimates of $\ell(1,1)$ have a reasonably high variability, even when considering only a small interval for the distances $a_{uv}$. This explains why the curve cannot fit the points very well. Appendix A Proofs The notations are as in Section 3. Let $\widehat{\Theta}_{n}$ denote the (possibly empty) set of minimizers of the function $$f_{n,k,\widehat{\Omega}_{n}}(\theta)=L_{n,k}(\theta)^{T}\,\widehat{\Omega}_{n}% \,L_{n,k}(\theta)\eqqcolon\lVert L_{n,k}(\theta)\rVert^{2}_{\widehat{\Omega}_{% n}}.$$ Write $\delta_{0}$ for the Dirac measure concentrated at zero. Recall that to each $m\in\{1,\ldots,q\}$ there corresponds a pair of indices $\pi(m)=(u,v)$ with $1\leq u<v\leq d$. Let $\mu=(\mu_{1},\ldots,\mu_{q})^{T}$ denote a column vector of measures on $\mathbb{R}^{d}$ whose $m$-th element is defined as $$\mu_{m}(\mathrm{d}\bm{x})=\mu_{m}(\mathrm{d}x_{1}\times\ldots\times\mathrm{d}x% _{d})=\mu_{m1}(\mathrm{d}x_{1})\times\ldots\times\mu_{md}(\mathrm{d}x_{d})% \coloneqq\mathrm{d}x_{u}\,\mathrm{d}x_{v}\prod_{j\neq u,v}\delta_{0}(\mathrm{d% }x_{j}),$$ so that $\mu_{mj}$ is the Lebesgue measure if $j=u$ or $j=v$, and $\mu_{mj}$ is the Dirac measure at zero for $j\neq u,v$. 
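For reference, the rank-based nonparametric estimator $\widehat{\ell}_{n,k}$ used throughout can be sketched as follows. This is one standard variant; the paper's exact definition (3.1) is not reproduced in this excerpt, and variants differ in the rank centering. The name `ell_hat` is ours.

```python
import numpy as np
from scipy.stats import rankdata

def ell_hat(X, k, x):
    """Empirical stable tail dependence function at x (one common
    rank-based variant): the number of observations, divided by k,
    that exceed at least one marginal tail threshold."""
    n, d = X.shape
    ranks = np.apply_along_axis(rankdata, 0, X)   # ranks 1..n per margin
    exceed = ranks > n - k * np.asarray(x, dtype=float)
    return np.any(exceed, axis=1).sum() / k
```

For completely dependent margins $\widehat{\ell}_{n,k}(1,1)\approx 1$, while for margins that are independent in the tail it is close to $x_{1}+x_{2}$, mirroring the bounds $\max(x_{1},x_{2})\leq\ell(x_{1},x_{2})\leq x_{1}+x_{2}$.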
Using the measures $\mu_{m}$ allows us to write $$L_{n,k}(\theta)=\left(\int_{[0,1]^{d}}\left\{\widehat{\ell}_{n,k}(\bm{x})-\ell(\bm{x};\theta)\right\}\,\mu_{m}(\mathrm{d}\bm{x})\right)_{m=1}^{q}=\int\widehat{\ell}_{n,k}\,\mu-\psi(\theta).$$ Lemma A.1. If $0<\lambda_{n,1}\leq\ldots\leq\lambda_{n,q}$ and $0<\lambda_{1}\leq\ldots\leq\lambda_{q}$ denote the ordered eigenvalues of the symmetric matrices $\widehat{\Omega}_{n}$ and $\Omega\in\mathbb{R}^{q\times q}$, respectively, then, as $n\to\infty$, $$\widehat{\Omega}_{n}\overset{\mathbb{P}}{\rightarrow}\Omega\qquad\text{implies}\qquad(\lambda_{n,1},\ldots,\lambda_{n,q})\overset{\mathbb{P}}{\rightarrow}(\lambda_{1},\ldots,\lambda_{q}).$$ Proof of Lemma A.1. Elementwise convergence $\widehat{\Omega}_{n}\overset{\mathbb{P}}{\rightarrow}\Omega$ implies $\lVert\widehat{\Omega}_{n}-\Omega\rVert\overset{\mathbb{P}}{\rightarrow}0$ for any matrix norm $\lVert\,\cdot\,\rVert$ on $\mathbb{R}^{q\times q}$. If we take the spectral norm $\lVert\Omega\rVert$ (i.e., $\lVert\Omega\rVert^{2}$ is the largest eigenvalue of $\Omega^{T}\Omega$), then Weyl’s perturbation theorem (Jiang, 2010, page 145) states that $$\max_{i=1,\ldots,q}\left|\lambda_{n,i}-\lambda_{i}\right|\leq\lVert\widehat{\Omega}_{n}-\Omega\rVert,$$ so that the desired result follows immediately. ∎ By the diagonalization of $\widehat{\Omega}_{n}$ in terms of its eigenvectors and eigenvalues, the norm $\lVert\,\cdot\,\rVert_{\widehat{\Omega}_{n}}$ is equivalent to the Euclidean norm $\lVert\,\cdot\,\rVert$ in the sense that $$\lambda_{n,1}\lVert L_{n,k}(\theta)\rVert^{2}\leq\lVert L_{n,k}(\theta)\rVert^{2}_{\widehat{\Omega}_{n}}\leq\lambda_{n,q}\lVert L_{n,k}(\theta)\rVert^{2}.$$ Proof of Theorem 3.1.
Let $\varepsilon_{0}>0$ be such that the closed ball $B_{\varepsilon_{0}}(\theta_{0})=\{\theta:\left\|\theta-\theta_{0}\right\|\leq% \varepsilon_{0}\}$ is a subset of $\Theta$; such an $\varepsilon_{0}$ exists since $\theta_{0}$ is an interior point of $\Theta$. Fix $\varepsilon>0$ such that $0<\varepsilon\leq\varepsilon_{0}$. We show first that $$\mathbb{P}[\widehat{\Theta}_{n}\neq\varnothing\text{ and }\widehat{\Theta}_{n}% \subset B_{\varepsilon}(\theta_{0})]\rightarrow 1,\qquad n\to\infty.$$ (A.1) Because $\psi$ is a homeomorphism, there exists $\delta>0$ such that for $\theta\in\Theta$, $\left\|\psi(\theta)-\psi(\theta_{0})\right\|\leq\delta$ implies $\left\|\theta-\theta_{0}\right\|\leq\varepsilon$. Equivalently, for every $\theta\in\Theta$ such that $\left\|\theta-\theta_{0}\right\|>\varepsilon$ we have $\left\|\psi(\theta)-\psi(\theta_{0})\right\|>\delta$. Define the event $$A_{n}=\left\{\left\|\psi(\theta_{0})-\int\widehat{\ell}_{n,k}\,\mu\right\|\leq% \frac{\delta\sqrt{\lambda_{n,1}}}{2+\sqrt{\lambda_{n,q}}}\right\}.$$ If $\theta\in\Theta$ is such that $\left\|\theta-\theta_{0}\right\|>\varepsilon$, then on the event $A_{n}$, we have $$\displaystyle\left\|L_{n,k}(\theta)\right\|_{\widehat{\Omega}_{n}}$$ $$\displaystyle=\left\|\psi(\theta_{0})-\psi(\theta)-\left(\psi(\theta_{0})-\int% \widehat{\ell}_{n,k}\,\mu\right)\right\|_{\widehat{\Omega}_{n}}$$ $$\displaystyle\geq\left\|\psi(\theta_{0})-\psi(\theta)\right\|_{\widehat{\Omega% }_{n}}-\left\|\psi(\theta_{0})-\int\widehat{\ell}_{n,k}\,\mu\right\|_{\widehat% {\Omega}_{n}}$$ $$\displaystyle\geq\sqrt{\lambda_{n,1}}\left\|\psi(\theta_{0})-\psi(\theta)% \right\|-\sqrt{\lambda_{n,q}}\left\|\psi(\theta_{0})-\int\widehat{\ell}_{n,k}% \,\mu\right\|$$ $$\displaystyle>\delta\sqrt{\lambda_{n,1}}-\delta\frac{\sqrt{\lambda_{n,1}% \lambda_{n,q}}}{2+\sqrt{\lambda_{n,q}}}=\frac{2\delta\sqrt{\lambda_{n,1}}}{2+% \sqrt{\lambda_{n,q}}}.$$ It follows that on $A_{n}$, 
$$\inf_{\theta:\left\|\theta-\theta_{0}\right\|>\varepsilon}\left\|L_{n,k}(\theta)\right\|_{\widehat{\Omega}_{n}}\geq\frac{2\delta\sqrt{\lambda_{n,1}}}{2+\sqrt{\lambda_{n,q}}}>\left\|\psi(\theta_{0})-\int\widehat{\ell}_{n,k}\,\mu\right\|\geq\inf_{\theta:\left\|\theta-\theta_{0}\right\|\leq\varepsilon}\left\|\psi(\theta)-\int\widehat{\ell}_{n,k}\,\mu\right\|.$$ The infimum on the right-hand side is actually a minimum, since $\psi$ is continuous and $B_{\varepsilon}(\theta_{0})$ is compact. Hence on $A_{n}$ the set $\widehat{\Theta}_{n}$ is non-empty and $\widehat{\Theta}_{n}\subset B_{\varepsilon}(\theta_{0})$. To show (A.1), it remains to be shown that $\mathbb{P}[A_{n}]\to 1$ as $n\to\infty$. Uniform consistency of $\widehat{\ell}_{n,k}$ for $d=2$ was shown in Huang (1992); see also de Haan and Ferreira (2006, page 237). The proof for $d>2$ is a straightforward extension. By the continuous mapping theorem, it follows that $\int\widehat{\ell}_{n,k}\,\mu$ is consistent for $\int\ell\,\mu=\psi(\theta_{0})$. By Lemma A.1, $\lambda_{n,m}$ is consistent for $\lambda_{m}$ for all $m\in\{1,\ldots,q\}$. This yields $\mathbb{P}[A_{n}]\rightarrow 1$ and thus (A.1). Next we wish to prove that, with probability tending to one, $\widehat{\Theta}_{n}$ has exactly one element, i.e., that the function $f_{n,k,\widehat{\Omega}_{n}}$ has a unique minimizer. To do so, we will show that there exists $\varepsilon_{1}\in(0,\varepsilon_{0}]$ such that, with probability tending to one, the Hessian of $f_{n,k,\widehat{\Omega}_{n}}$ is positive definite on $B_{\varepsilon_{1}}(\theta_{0})$ and thus $f_{n,k,\widehat{\Omega}_{n}}$ is strictly convex on $B_{\varepsilon_{1}}(\theta_{0})$. In combination with (A.1) for $\varepsilon\in(0,\varepsilon_{1}]$, this will yield the desired conclusion.
For $\theta\in\Theta$, define the symmetric $p\times p$ matrix $\mathcal{H}(\theta;\theta_{0})$ by $$\bigl(\mathcal{H}(\theta;\theta_{0})\bigr)_{i,j}\coloneqq 2\left(\frac{\partial\psi(\theta)}{\partial\theta_{j}}\right)^{T}\Omega\left(\frac{\partial\psi(\theta)}{\partial\theta_{i}}\right)-2\left(\frac{\partial^{2}\psi(\theta)}{\partial\theta_{j}\partial\theta_{i}}\right)^{T}\Omega\,\bigl(\psi(\theta_{0})-\psi(\theta)\bigr)$$ for $i,j\in\{1,\ldots,p\}$. The map $\theta\mapsto\mathcal{H}(\theta;\theta_{0})$ is continuous and $$\mathcal{H}(\theta_{0})\coloneqq\mathcal{H}(\theta_{0};\theta_{0})=2\,\dot{\psi}(\theta_{0})^{T}\,\Omega\,\dot{\psi}(\theta_{0})$$ is a positive definite matrix. Let $\lVert\,\cdot\,\rVert$ denote a matrix norm. By an argument similar to that in the proof of Lemma A.1, there exists $\eta>0$ such that every symmetric matrix $A\in\mathbb{R}^{p\times p}$ with $\lVert{A-\mathcal{H}(\theta_{0})}\rVert\leq\eta$ has positive eigenvalues and is therefore positive definite. Let $\varepsilon_{1}\in(0,\varepsilon_{0}]$ be sufficiently small such that the second-order partial derivatives of $\psi$ are continuous on $B_{\varepsilon_{1}}(\theta_{0})$ and such that $\lVert{\mathcal{H}(\theta;\theta_{0})-\mathcal{H}(\theta_{0})}\rVert\leq\eta/2$ for all $\theta\in B_{\varepsilon_{1}}(\theta_{0})$. Let $\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\theta)\in\mathbb{R}^{p\times p}$ denote the Hessian matrix of $f_{n,k,\widehat{\Omega}_{n}}$.
Its $(i,j)$-th element is $$\bigl(\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\theta)\bigr)_{i,j}=\frac{\partial^{2}}{\partial\theta_{j}\partial\theta_{i}}\left[L_{n,k}(\theta)^{T}\,\widehat{\Omega}_{n}\,L_{n,k}(\theta)\right]=\frac{\partial}{\partial\theta_{j}}\left[-2L_{n,k}(\theta)^{T}\,\widehat{\Omega}_{n}\frac{\partial\psi(\theta)}{\partial\theta_{i}}\right]=2\left(\frac{\partial\psi(\theta)}{\partial\theta_{j}}\right)^{T}\widehat{\Omega}_{n}\left(\frac{\partial\psi(\theta)}{\partial\theta_{i}}\right)-2\left(\frac{\partial^{2}\psi(\theta)}{\partial\theta_{j}\partial\theta_{i}}\right)^{T}\widehat{\Omega}_{n}\,L_{n,k}(\theta).$$ Since $L_{n,k}(\theta)=\int\widehat{\ell}_{n,k}\,\mu-\psi(\theta)$ and since $\int\widehat{\ell}_{n,k}\,\mu$ converges in probability to $\psi(\theta_{0})$, we obtain $$\sup_{\theta\in B_{\varepsilon_{1}}(\theta_{0})}\lVert{\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\theta)-\mathcal{H}(\theta;\theta_{0})}\rVert\overset{\mathbb{P}}{\rightarrow}0,\qquad n\to\infty.$$ (A.2) By the triangle inequality, it follows that $$\mathbb{P}\biggl[\sup_{\theta\in B_{\varepsilon_{1}}(\theta_{0})}\lVert{\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\theta)-\mathcal{H}(\theta_{0})}\rVert\leq\eta\biggr]\to 1,\qquad n\to\infty.$$ In view of our choice of $\eta$, this implies that, with probability tending to one, $\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\theta)$ is positive definite for all $\theta\in B_{\varepsilon_{1}}(\theta_{0})$, as required. ∎ Proof of Theorem 3.2. First note that, as $n\rightarrow\infty$, $$\sqrt{k}\,L_{n,k}(\theta_{0})\xrightarrow{d}\widetilde{B},\qquad\text{ where }\widetilde{B}\sim\mathcal{N}_{q}(\bm{0},\Gamma(\theta_{0})).$$ This follows directly from Einmahl et al. (2012, Proposition 7.3) by replacing $g(\bm{x})\,\mathrm{d}\bm{x}$ with $\mu(\mathrm{d}\bm{x})$.
Also, from (C2) and Slutsky’s lemma, we have $$\sqrt{k}\,\nabla f_{n,k,\widehat{\Omega}_{n}}(\theta_{0})=-2\sqrt{k}\,L_{n,k}(\theta_{0})^{T}\,\widehat{\Omega}_{n}\,\dot{\psi}(\theta_{0})\xrightarrow{d}-2\,\widetilde{B}^{T}\,\Omega\,\dot{\psi}(\theta_{0})\sim\mathcal{N}_{p}\bigl(\bm{0},\;4\,\dot{\psi}(\theta_{0})^{T}\,\Omega\,\Gamma(\theta_{0})\,\Omega\,\dot{\psi}(\theta_{0})\bigr).$$ Since $\widehat{\theta}_{n}$ is a minimizer of $f_{n,k,\widehat{\Omega}_{n}}$, we have $\nabla f_{n,k,\widehat{\Omega}_{n}}(\widehat{\theta}_{n})=0$. Applying the mean value theorem to the function $t\mapsto\nabla f_{n,k,\widehat{\Omega}_{n}}(\theta_{0}+t(\widehat{\theta}_{n}-\theta_{0}))$ at $t=0$ and $t=1$ yields $$0=\nabla f_{n,k,\widehat{\Omega}_{n}}(\widehat{\theta}_{n})=\nabla f_{n,k,\widehat{\Omega}_{n}}(\theta_{0})+\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\widetilde{\theta}_{n})\,(\widehat{\theta}_{n}-\theta_{0}),$$ where $\widetilde{\theta}_{n}$ is a random vector on the segment connecting $\theta_{0}$ and $\widehat{\theta}_{n}$. As $\widehat{\theta}_{n}\overset{\mathbb{P}}{\rightarrow}\theta_{0}$, we have $\widetilde{\theta}_{n}\overset{\mathbb{P}}{\rightarrow}\theta_{0}$ too. By (A.2) and continuity of $\theta\mapsto\mathcal{H}(\theta;\theta_{0})$, it then follows that $\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\widetilde{\theta}_{n})\overset{\mathbb{P}}{\rightarrow}\mathcal{H}(\theta_{0})$. Putting these facts together, we conclude that $$\sqrt{k}(\widehat{\theta}_{n}-\theta_{0})=-\bigl(\mathcal{H}_{n,k,\widehat{\Omega}_{n}}(\widetilde{\theta}_{n})\bigr)^{-1}\,\sqrt{k}\,\nabla f_{n,k,\widehat{\Omega}_{n}}(\theta_{0})\xrightarrow{d}\mathcal{N}_{p}\bigl(0,M(\theta_{0})\bigr),$$ as required. ∎ Proof of Corollary 3.3. Assumption (C6) implies that the map $\theta\mapsto\Gamma(\theta)$ is continuous at $\theta_{0}$ (Einmahl et al., 2008, Lemma 7.2).
Further, $\Gamma^{-1}(\widehat{\theta}^{(0)}_{n})$ converges in probability to $\Gamma^{-1}(\theta_{0})$, because of the continuous mapping theorem and the fact that $\widehat{\theta}^{(0)}_{n}$ is a consistent estimator of $\theta_{0}$. Finally, the choice $\Omega_{\textnormal{opt}}=\Gamma^{-1}(\theta)$ in (3.7) leads to the minimal matrix $M_{\textnormal{opt}}(\theta)$ in (3.8); see for example Abadir and Magnus (2005, page 339). ∎ Appendix B Calculating the asymptotic variance matrix Fix the function $\pi$ describing the pairs and let $\pi(m)=(u,v)$ and $\pi(m^{\prime})=(u^{\prime},v^{\prime})$ for $m,m^{\prime}\in\{1,\ldots,q\}$. Let $\dot{\ell}_{uv,1}(x_{u},x_{v})$ and $\dot{\ell}_{uv,2}(x_{u},x_{v})$ denote the partial derivatives of $\ell_{uv}(x_{u},x_{v})$ with respect to $x_{u}$ and $x_{v}$, respectively, and define $$W_{\ell,uv}(x_{u},x_{v})=W_{\Lambda}(\{\bm{w}\in[0,\infty]^{2}\setminus\{(\infty,\infty)\}:w_{u}\leq x_{u}\text{ or }w_{v}\leq x_{v}\}).$$ Note that $$B_{uv}(x_{u},x_{v})=W_{\ell,uv}(x_{u},x_{v})-\dot{\ell}_{uv,1}(x_{u},x_{v})W_{\ell,u}(x_{u})-\dot{\ell}_{uv,2}(x_{u},x_{v})W_{\ell,v}(x_{v}).$$ The $(m,m^{\prime})$-th entry of $\Gamma(\theta)\in\mathbb{R}^{q\times q}$ from (3.5) is given by $$\int_{[0,1]^{4}}\mathbb{E}[B_{uv}(x_{u},x_{v})B_{u^{\prime}v^{\prime}}(x_{u^{\prime}},x_{v^{\prime}})]\,\mathrm{d}\bm{x}=\int_{[0,1]^{4}}\left(T_{1}-T_{2}-T_{3}+T_{4}+T_{5}\right)\,\mathrm{d}\bm{x},$$ (B.1) for $\bm{x}=(x_{u},x_{v},x_{u^{\prime}},x_{v^{\prime}})$, where $$T_{1}=\mathbb{E}[W_{\ell,uv}(x_{u},x_{v})W_{\ell,u^{\prime}v^{\prime}}(x_{u^{\prime}},x_{v^{\prime}})],$$ $$T_{2}=\dot{\ell}_{u^{\prime}v^{\prime},1}(x_{u^{\prime}},x_{v^{\prime}})\mathbb{E}[W_{\ell,uv}(x_{u},x_{v})W_{\ell,u^{\prime}}(x_{u^{\prime}})]+\dot{\ell}_{u^{\prime}v^{\prime},2}(x_{u^{\prime}},x_{v^{\prime}})\mathbb{E}[W_{\ell,uv}(x_{u},x_{v})W_{\ell,v^{\prime}}(x_{v^{\prime}})],$$ $$\displaystyle T_{3}$$
$$\displaystyle=\dot{\ell}_{uv,1}(x_{u},x_{v})\mathbb{E}[W_{\ell,u}(x_{u})W_{% \ell,u^{\prime}v^{\prime}}(x_{u^{\prime}},x_{v^{\prime}})]+\dot{\ell}_{uv,2}(x% _{u},x_{v})\mathbb{E}[W_{\ell,v}(x_{v})W_{\ell,u^{\prime}v^{\prime}}(x_{u^{% \prime}},x_{v^{\prime}})],$$ $$\displaystyle T_{4}$$ $$\displaystyle=\dot{\ell}_{uv,1}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{\prime},1% }(x_{u^{\prime}},x_{v^{\prime}})\mathbb{E}[W_{\ell,u}(x_{u})W_{\ell,u^{\prime}% }(x_{u^{\prime}})]$$ $$\displaystyle\qquad+\dot{\ell}_{uv,2}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{% \prime},2}(x_{u^{\prime}},x_{v^{\prime}})\mathbb{E}[W_{\ell,v}(x_{v})W_{\ell,v% ^{\prime}}(x_{v^{\prime}})],$$ $$\displaystyle T_{5}$$ $$\displaystyle=\dot{\ell}_{uv,1}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{\prime},2% }(x_{u^{\prime}},x_{v^{\prime}})\mathbb{E}[W_{\ell,u}(x_{u})W_{\ell,v^{\prime}% }(x_{v^{\prime}})]$$ $$\displaystyle\qquad+\dot{\ell}_{uv,2}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{% \prime},1}(x_{u^{\prime}},x_{v^{\prime}})\mathbb{E}[W_{\ell,v}(x_{v})W_{\ell,u% ^{\prime}}(x_{u^{\prime}})].$$ Suppose $(u,v,u^{\prime},v^{\prime})$ are all different and define the sets $$A_{ij}(z_{i},z_{j})=\{\bm{w}\in[0,\infty]^{d}\setminus\{(\infty,\ldots,\infty)% \}:w_{i}\leq z_{i}\text{ or }w_{j}\leq z_{j}\}.$$ Then $$\displaystyle\mathbb{E}[W_{\ell,uv}(x_{u},x_{v})W_{\ell,u^{\prime}v^{\prime}}(% x_{u^{\prime}},x_{v^{\prime}})]$$ $$\displaystyle=\mathbb{E}[W_{\Lambda}(A_{uv}(x_{u},x_{v}))W_{\Lambda}(A_{u^{% \prime}v^{\prime}}(x_{u^{\prime}},x_{v^{\prime}}))]$$ $$\displaystyle=\Lambda(A_{uv}(x_{u},x_{v})\cap A_{u^{\prime}v^{\prime}}(x_{u^{% \prime}},x_{v^{\prime}}))$$ $$\displaystyle=\Lambda(A_{uv}(x_{u},x_{v}))+\Lambda(A_{u^{\prime}v^{\prime}}(x_% {u^{\prime}},x_{v^{\prime}}))$$ $$\displaystyle\qquad-\Lambda(A_{uv}(x_{u},x_{v})\cup A_{u^{\prime}v^{\prime}}(x% _{u^{\prime}},x_{v^{\prime}}))$$ $$\displaystyle=\ell_{uv}(x_{u},x_{v})+\ell_{u^{\prime}v^{\prime}}(x_{u^{\prime}% 
},x_{v^{\prime}})-\ell_{uvu^{\prime}v^{\prime}}(x_{u},x_{v},x_{u^{\prime}},x_{% v^{\prime}}).$$ Similar calculations for the other terms yield $$\displaystyle T_{1}$$ $$\displaystyle=\ell_{uv}(x_{u},x_{v})+\ell_{u^{\prime}v^{\prime}}(x_{u^{\prime}% },x_{v^{\prime}})-\ell_{uvu^{\prime}v^{\prime}}(x_{u},x_{v},x_{u^{\prime}},x_{% v^{\prime}}),$$ $$\displaystyle T_{2}$$ $$\displaystyle=\dot{\ell}_{u^{\prime}v^{\prime},1}(x_{u^{\prime}},x_{v^{\prime}% })[\ell_{uv}(x_{u},x_{v})+x_{u^{\prime}}-\ell_{uvu^{\prime}}(x_{u},x_{v},x_{u^% {\prime}})]$$ $$\displaystyle\qquad+\dot{\ell}_{u^{\prime}v^{\prime},2}(x_{u^{\prime}},x_{v^{% \prime}})[\ell_{uv}(x_{u},x_{v})+x_{v^{\prime}}-\ell_{uvv^{\prime}}(x_{u},x_{v% },x_{v^{\prime}})],$$ $$\displaystyle T_{3}$$ $$\displaystyle=\dot{\ell}_{uv,1}(x_{u},x_{v})[\ell_{u^{\prime}v^{\prime}}(x_{u^% {\prime}},x_{v^{\prime}})+x_{u}-\ell_{uu^{\prime}v^{\prime}}(x_{u},x_{u^{% \prime}},x_{v^{\prime}})]$$ $$\displaystyle\qquad+\dot{\ell}_{uv,2}(x_{u},x_{v})[\ell_{u^{\prime}v^{\prime}}% (x_{u^{\prime}},x_{v^{\prime}})+x_{v}-\ell_{vu^{\prime}v^{\prime}}(x_{v},x_{u^% {\prime}},x_{v^{\prime}})],$$ $$\displaystyle T_{4}$$ $$\displaystyle=\dot{\ell}_{uv,1}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{\prime},1% }(x_{u^{\prime}},x_{v^{\prime}})[x_{u}+x_{u^{\prime}}-\ell_{uu^{\prime}}(x_{u}% ,x_{u^{\prime}})]$$ $$\displaystyle\qquad+\dot{\ell}_{uv,2}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{% \prime},2}(x_{u^{\prime}},x_{v^{\prime}})[x_{v}+x_{v^{\prime}}-\ell_{vv^{% \prime}}(x_{v},x_{v^{\prime}})],$$ $$\displaystyle T_{5}$$ $$\displaystyle=\dot{\ell}_{uv,1}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{\prime},2% }(x_{u^{\prime}},x_{v^{\prime}})[x_{u}+x_{v^{\prime}}-\ell_{uv^{\prime}}(x_{u}% ,x_{v^{\prime}})]$$ $$\displaystyle\qquad+\dot{\ell}_{uv,2}(x_{u},x_{v})\dot{\ell}_{u^{\prime}v^{% \prime},1}(x_{u^{\prime}},x_{v^{\prime}})[x_{v}+x_{u^{\prime}}-\ell_{vu^{% \prime}}(x_{v},x_{u^{\prime}})].$$ Integrating directly over $T_{1},\ldots,T_{5}$ is very slow, so we would like to 
simplify as many terms as possible. Introduce the notations $$\displaystyle I(u,v)$$ $$\displaystyle\coloneqq\int_{0}^{1}\int_{0}^{1}\ell_{uv}(x_{u},x_{v})\,\mathrm{% d}x_{u}\mathrm{d}x_{v},$$ $$\displaystyle I(u,v;x_{u})$$ $$\displaystyle\coloneqq\int_{0}^{1}\ell_{uv}(x_{u},x_{v})\,\mathrm{d}x_{v},$$ $$\displaystyle I_{u}(u,v;x_{u})$$ $$\displaystyle\coloneqq\int_{0}^{1}\frac{\partial\ell_{uv}(x_{u},x_{v})}{% \partial x_{u}}\,\mathrm{d}x_{v}.$$ Now we can write the four-dimensional integrals in (B.1) as: $$\displaystyle\int_{[0,1]^{4}}T_{1}=$$ $$\displaystyle\,\,I(u,v)+I(u^{\prime},v^{\prime})-\int_{[0,1]^{4}}\ell_{uvu^{% \prime}v^{\prime}}(x_{u},x_{v},x_{u^{\prime}},x_{v^{\prime}})\,\mathrm{d}x_{u}% \mathrm{d}x_{v}\mathrm{d}x_{u^{\prime}}\mathrm{d}x_{v^{\prime}},$$ $$\displaystyle\int_{[0,1]^{4}}T_{2}=$$ $$\displaystyle\,\,I(u,v)[2I(u^{\prime},v^{\prime};1)-1]+2I(u^{\prime},v^{\prime% };1)-2I(u^{\prime},v^{\prime})$$ $$\displaystyle\,\,-\int_{[0,1]^{3}}I_{u^{\prime}}(u^{\prime},v^{\prime};x_{u^{% \prime}})\ell_{uvu^{\prime}}(x_{u},x_{v},x_{u^{\prime}})\,\mathrm{d}x_{u^{% \prime}}\mathrm{d}x_{u}\mathrm{d}x_{v}$$ $$\displaystyle\,\,-\int_{[0,1]^{3}}I_{v^{\prime}}(u^{\prime},v^{\prime};x_{v^{% \prime}})\ell_{uvv^{\prime}}(x_{u},x_{v},x_{v^{\prime}})\,\mathrm{d}x_{v^{% \prime}}\mathrm{d}x_{u}\mathrm{d}x_{v},$$ $$\displaystyle\int_{[0,1]^{4}}T_{3}=$$ $$\displaystyle\,\,I(u^{\prime},v^{\prime})[2I(u,v;1)-1]+2I(u,v;1)-2I(u,v)$$ $$\displaystyle\,\,-\int_{[0,1]^{3}}I_{u}(u,v;x_{u})\ell_{uu^{\prime}v^{\prime}}% (x_{u},x_{u^{\prime}},x_{v^{\prime}})\,\mathrm{d}x_{u}\mathrm{d}x_{u^{\prime}}% \mathrm{d}x_{v^{\prime}}$$ $$\displaystyle\,\,-\int_{[0,1]^{3}}I_{v}(u,v;x_{v})\ell_{vu^{\prime}v^{\prime}}% (x_{v},x_{u^{\prime}},x_{v^{\prime}})\,\mathrm{d}x_{v}\mathrm{d}x_{u^{\prime}}% \mathrm{d}x_{v^{\prime}},$$ $$\displaystyle\int_{[0,1]^{4}}T_{4}=$$ $$\displaystyle\,\,[I(u,v)-I(u,v;1)][1-2I(u^{\prime},v^{\prime};1)]+[I(u^{\prime% 
},v^{\prime})-I(u^{\prime},v^{\prime};1)][1-2I(u,v;1)]$$ $$\displaystyle\,\,-\int_{[0,1]^{2}}I_{u}(u,v;x_{u})I_{u^{\prime}}(u^{\prime},v^{\prime};x_{u^{\prime}})\ell_{uu^{\prime}}(x_{u},x_{u^{\prime}})\,\mathrm{d}x_{u}\mathrm{d}x_{u^{\prime}}$$ $$\displaystyle\,\,-\int_{[0,1]^{2}}I_{v}(u,v;x_{v})I_{v^{\prime}}(u^{\prime},v^{\prime};x_{v^{\prime}})\ell_{vv^{\prime}}(x_{v},x_{v^{\prime}})\,\mathrm{d}x_{v}\mathrm{d}x_{v^{\prime}},$$ $$\displaystyle\int_{[0,1]^{4}}T_{5}=$$ $$\displaystyle\,\,[I(u,v)-I(u,v;1)][1-2I(u^{\prime},v^{\prime};1)]+[I(u^{\prime},v^{\prime})-I(u^{\prime},v^{\prime};1)][1-2I(u,v;1)]$$ $$\displaystyle\,\,-\int_{[0,1]^{2}}I_{u}(u,v;x_{u})I_{v^{\prime}}(u^{\prime},v^{\prime};x_{v^{\prime}})\ell_{uv^{\prime}}(x_{u},x_{v^{\prime}})\,\mathrm{d}x_{u}\mathrm{d}x_{v^{\prime}}$$ $$\displaystyle\,\,-\int_{[0,1]^{2}}I_{v}(u,v;x_{v})I_{u^{\prime}}(u^{\prime},v^{\prime};x_{u^{\prime}})\ell_{vu^{\prime}}(x_{v},x_{u^{\prime}})\,\mathrm{d}x_{v}\mathrm{d}x_{u^{\prime}}.$$ For the Brown–Resnick process, the integrals $I(u,v)$, $I(u,v;x_{u})$ and $I_{u}(u,v;x_{u})$ are analytically computable. To calculate $I(u,v)$, make the change of variables $\frac{1}{2}\log{(x_{u}/x_{v})}=z_{1}$ and $\frac{1}{2}\log{(x_{u}x_{v})}=z_{2}$, so that $\mathrm{d}x_{u}\mathrm{d}x_{v}=2\exp{(2z_{2})}\,\mathrm{d}z_{1}\mathrm{d}z_{2}$ and the region of integration is the area between the lines $z_{2}=z_{1}$ and $z_{2}=-z_{1}$ for $z_{2}<0$.
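As a sanity check (ours, not part of the paper), the pairwise integral can also be evaluated by brute-force quadrature and compared with the closed form $\Phi(a/2)+e^{a^{2}}\Phi(-3a/2)/3$ obtained below. The sketch assumes $\ell_{uv}$ is the Hüsler–Reiss pairwise stable tail dependence function with parameter $a=a_{uv}$; all function names (`Phi`, `ell_uv`, `I_closed`, `I_numeric`) are ours.

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ell_uv(xu, xv, a):
    """Assumed Hüsler–Reiss form of the pairwise stable tail
    dependence function ell_{uv} for the Brown–Resnick process."""
    if xu == 0.0:
        return xv
    if xv == 0.0:
        return xu
    z = math.log(xu / xv) / a
    return xu * Phi(a / 2.0 + z) + xv * Phi(a / 2.0 - z)

def I_closed(a):
    """Closed-form value of I(u,v) stated in the text."""
    return Phi(a / 2.0) + math.exp(a * a) * Phi(-1.5 * a) / 3.0

def I_numeric(a, n=400):
    """Midpoint-rule approximation of the double integral over [0,1]^2."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        xu = (i + 0.5) * h
        for j in range(n):
            xv = (j + 0.5) * h
            total += ell_uv(xu, xv, a)
    return total * h * h
```

For moderate $a$ the midpoint rule on a $400\times 400$ grid agrees with the closed form to well within $10^{-3}$; the limiting cases also match, since the closed form tends to $2/3=\int\int\max(x_{u},x_{v})$ as $a\to 0$ and to $1=\int\int(x_{u}+x_{v})$ as $a\to\infty$.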
We obtain, for $a=a_{uv}=\sqrt{2\gamma(V(\bm{s}_{u}-\bm{s}_{v}))}$, $$\displaystyle\int_{0}^{1}\int_{0}^{1}\ell_{uv}(x_{u},x_{v})\,\mathrm{d}x_{u}\mathrm{d}x_{v}$$ $$\displaystyle=\int_{z_{1}=-\infty}^{\infty}\int_{z_{2}=-\infty}^{-|z_{1}|}\bigg{[}e^{z_{2}+z_{1}}\Phi\bigg{(}\frac{a}{2}+\frac{2z_{1}}{a}\bigg{)}+e^{z_{2}-z_{1}}\Phi\bigg{(}\frac{a}{2}-\frac{2z_{1}}{a}\bigg{)}\bigg{]}2e^{2z_{2}}\,\mathrm{d}z_{2}\mathrm{d}z_{1}$$ $$\displaystyle=\Phi(a/2)+\frac{e^{a^{2}}\Phi(-3a/2)}{3}.$$ The other two integrals are given by $$\displaystyle I(u,v;x_{u})=\frac{1}{2}\Phi\bigg{(}\frac{a}{2}-\frac{\log{x_{u}}}{a}\bigg{)}+x_{u}\Phi\bigg{(}\frac{a}{2}+\frac{\log{x_{u}}}{a}\bigg{)}+\frac{x_{u}^{2}e^{a^{2}}}{2}\Phi\bigg{(}-\frac{3a}{2}-\frac{\log{x_{u}}}{a}\bigg{)},$$ $$\displaystyle I_{u}(u,v;x_{u})=\Phi\bigg{(}\frac{a}{2}+\frac{\log{x_{u}}}{a}\bigg{)}+x_{u}e^{a^{2}}\Phi\bigg{(}-\frac{3a}{2}-\frac{\log{x_{u}}}{a}\bigg{)}.$$ Acknowledgments This research is supported by contract “Projet d’Actions de Recherche Concertées” No. 12/17-045 of the “Communauté française de Belgique” and by IAP research network grant nr. P7/06 of the Belgian government (Belgian Science Policy).
Characterizations of compactness of fuzzy set space with endograph metric 111Project supported by Natural Science Foundation of Fujian Province of China (No. 2020J01706) Huan Huang [email protected] Department of Mathematics, Jimei University, Xiamen 361021, China Abstract In this paper, we present the characterizations of total boundedness, relative compactness and compactness in fuzzy set spaces equipped with the endograph metric. The conclusions in this paper significantly improve the corresponding conclusions given in our previous paper [H. Huang, Characterizations of endograph metric and $\Gamma$-convergence on fuzzy sets, Fuzzy Sets and Systems 350 (2018), 55-84]. The results in this paper are applicable to fuzzy sets in a general metric space. The results in our previous paper are applicable to fuzzy sets in the $m$-dimensional Euclidean space $\mathbb{R}^{m}$, which is a special type of metric space. Furthermore, based on the above results, we give the characterizations of relative compactness, total boundedness and compactness in a kind of common subspaces of general fuzzy sets according to the endograph metric. As an application, we investigate some relationship between the endograph metric and the $\Gamma$-convergence on fuzzy sets. keywords: Compactness; Endograph metric; $\Gamma$-convergence; Hausdorff metric 1 Introduction Fuzzy set is a fundamental tool to investigate fuzzy phenomenon da ; du ; wu ; garcia ; wa ; gong ; wangun . A fuzzy set can be identified with its endograph. The endograph metric $H_{\rm end}$ on fuzzy sets is the Hausdorff metric defined on their endographs. It’s shown that the endograph metric on fuzzy sets has significant advantages kloeden2 ; kloeden ; kupka ; can . Compactness is one of the central concepts in topology and analysis and is useful in applications (see kelley ; wa ). 
The characterizations of compactness in various fuzzy set spaces endowed with different topologies have attracted much attention greco ; greco3 ; huang ; huang9 ; huang719 ; roman ; trutschnig . In huang , we have given the characterizations of total boundedness, relative compactness and compactness of fuzzy set spaces equipped with the endograph metric $H_{\rm end}$. The results in huang are applicable to fuzzy sets in the $m$-dimensional Euclidean space $\mathbb{R}^{m}$, which is a special type of metric space. In theoretical research and practical applications, fuzzy sets in a general metric space are often used da ; du ; greco ; greco3 . In this paper, we present the characterizations of total boundedness, relative compactness and compactness of the space of fuzzy sets in a general metric space equipped with the endograph metric $H_{\rm end}$. We point out that the characterizations of total boundedness, relative compactness and compactness given in huang are corollaries of the corresponding characterizations given in this paper. Furthermore, we discuss the properties of the endograph metric $H_{\rm end}$, and then use these properties and the above characterizations for general fuzzy sets to give the characterizations of relative compactness, total boundedness and compactness in a kind of common subspaces of general fuzzy sets according to the endograph metric $H_{\rm end}$. As an application of the characterizations of compactness given in this paper, we discuss the relationship between the $H_{\rm end}$ metric and the $\Gamma$-convergence on fuzzy sets. The remainder of this paper is organized as follows. In Section 2, we recall and give some basic notions and fundamental results related to fuzzy sets and the endograph metric and the $\Gamma$-convergence on them. In Section 3, we give representation theorems for various kinds of fuzzy sets, which are useful in this paper.
In Section 4, we give the characterizations of relatively compact sets, totally bounded sets, and compact sets in space of fuzzy sets in a general metric space equipped with the endograph metric, respectively. In Section 5, based on the characterizations of compactness given in Section 4, we give the characterizations of relatively compact sets, totally bounded sets, and compact sets in a kind of common subspaces of the fuzzy set space discussed in Section 4. In Section 6, as an application of the characterizations of compactness given in Section 4, we investigate the relationship between the endograph metric and the $\Gamma$-convergence on fuzzy sets. At last, we draw conclusions in Section 7. 2 Fuzzy sets and endograph metric and $\Gamma$-convergence on them In this section, we recall and give some basic notions and fundamental results related to fuzzy sets and the endograph metric and the $\Gamma$-convergence on them. Readers can refer to wu ; da ; du ; rojas ; huang17 for related contents. Let $\mathbb{N}$ denote the set of natural numbers. Let $\mathbb{R}$ denote the set of real numbers. Let $\mathbb{R}^{m}$, $m>1$, denote the set $\{\langle x_{1},\ldots,x_{m}\rangle:x_{i}\in\mathbb{R},\ i=1,\ldots,m\}$. In the sequel, $\mathbb{R}$ is also written as $\mathbb{R}^{1}$. Throughout this paper, we suppose that $X$ is a nonempty set and $d$ is the metric on $X$. For simplicity, we also use $X$ to denote the metric space $(X,d)$. The metric $\overline{d}$ on $X\times[0,1]$ is defined as follows: for $(x,\alpha),(y,\beta)\in X\times[0,1]$, $$\overline{d}((x,\alpha),(y,\beta))=d(x,y)+|\alpha-\beta|.$$ Throughout this paper, we suppose that the metric on $X\times[0,1]$ is $\overline{d}$. For simplicity, we also use $X\times[0,1]$ to denote the metric space $(X\times[0,1],\overline{d})$. Let $m\in\mathbb{N}$. 
For simplicity, $\mathbb{R}^{m}$ is also used to denote the $m$-dimensional Euclidean space; $d_{m}$ is used to denote the Euclidean metric on $\mathbb{R}^{m}$; $\mathbb{R}^{m}\times[0,1]$ is also used to denote the metric space $(\mathbb{R}^{m}\times[0,1],\overline{d_{m}})$. A fuzzy set $u$ in $X$ can be seen as a function $u:X\to[0,1]$. A subset $S$ of $X$ can be seen as a fuzzy set in $X$. If there is no confusion, the fuzzy set corresponding to $S$ is often denoted by $\chi_{S}$; that is, $$\chi_{S}(x)=\left\{\begin{array}[]{ll}1,&x\in S,\\ 0,&x\in X\setminus S.\end{array}\right.$$ For simplicity, for $x\in X$, we will use $\widehat{x}$ to denote the fuzzy set $\chi_{\{x\}}$ in $X$. In this paper, if we want to emphasize a specific metric space $X$, we will write the fuzzy set corresponding to $S$ in $X$ as $S_{F(X)}$, and the fuzzy set corresponding to $\{x\}$ in $X$ as $\widehat{x}_{F(X)}$. The symbol $F(X)$ is used to denote the set of all fuzzy sets in $X$. For $u\in F(X)$ and $\alpha\in[0,1]$, let $\{u>\alpha\}$ denote the set $\{x\in X:u(x)>\alpha\}$, and let $[u]_{\alpha}$ denote the $\alpha$-cut of $u$, i.e. $$[u]_{\alpha}=\begin{cases}\{x\in X:u(x)\geq\alpha\},&\ \alpha\in(0,1],\\ {\rm supp}\,u=\overline{\{u>0\}},&\ \alpha=0,\end{cases}$$ where $\overline{S}$ denotes the topological closure of $S$ in $(X,d)$. The symbols $K(X)$ and $C(X)$ are used to denote the set of all nonempty compact subsets of $X$ and the set of all nonempty closed subsets of $X$, respectively. $P(X)$ is used to denote the power set of $X$, which is the set of all subsets of $X$.
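For a finite $X$ the $\alpha$-cut operation is easy to make concrete. The following sketch (ours; it encodes a fuzzy set as a Python dict and omits the $\alpha=0$ case, since the topological closure is trivial on a finite space) returns $\{x\in X:u(x)\geq\alpha\}$ for $\alpha\in(0,1]$:

```python
def alpha_cut(u, alpha):
    """alpha-cut of a fuzzy set given as a dict {point: membership}:
    the set {x : u(x) >= alpha} for alpha in (0,1]."""
    return {x for x, m in u.items() if m >= alpha}

# A small fuzzy set in X = {0, 1, 2, 3}:
u = {0: 0.2, 1: 1.0, 2: 0.6, 3: 0.0}
```

Note that the cuts are nested: `alpha_cut(u, 0.6)` is a subset of `alpha_cut(u, 0.2)`, reflecting $[u]_{\beta}\supseteq[u]_{\alpha}$ for $\beta\leq\alpha$.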
Let $F_{USC}(X)$ denote the set of all upper semi-continuous fuzzy sets $u:X\to[0,1]$, i.e., $$F_{USC}(X):=\{u\in F(X):[u]_{\alpha}\in C(X)\cup\{\emptyset\}\ \mbox{for all}\ \alpha\in[0,1]\}.$$ Define $$\displaystyle F_{USCB}(X):=\{u\in F_{USC}(X):[u]_{0}\in K(X)\cup\{\emptyset\}\},$$ $$\displaystyle F_{USCG}(X):=\{u\in F_{USC}(X):[u]_{\alpha}\in K(X)\cup\{\emptyset\}\ \mbox{for all}\ \alpha\in(0,1]\}.$$ Clearly, $$F_{USCB}(X)\subseteq F_{USCG}(X)\subseteq F_{USC}(X).$$ Define $$\displaystyle F_{CON}(X):=\{u\in F(X):\mbox{for all }\alpha\in(0,1],\ [u]_{\alpha}\mbox{ is connected in }X\},$$ $$\displaystyle F_{USCCON}(X):=F_{USC}(X)\cap F_{CON}(X),$$ $$\displaystyle F_{USCGCON}(X):=F_{USCG}(X)\cap F_{CON}(X).$$ Let $u\in F_{CON}(X)$. Then $[u]_{0}=\overline{\cup_{\alpha>0}[u]_{\alpha}}$ is connected in $X$. The proof is as follows. If $u=\chi_{\emptyset}$, then $[u]_{0}=\emptyset$ is connected in $X$. If $u\not=\chi_{\emptyset}$, then there is an $\alpha\in(0,1]$ such that $[u]_{\alpha}\not=\emptyset$. Note that $[u]_{\beta}\supseteq[u]_{\alpha}$ when $\beta\in[0,\alpha]$. Hence $\cup_{0<\beta<\alpha}[u]_{\beta}$ is connected, and thus $[u]_{0}=\overline{\cup_{0<\beta<\alpha}[u]_{\beta}}$ is connected. So $$F_{CON}(X)=\{u\in F(X):\mbox{for all }\alpha\in[0,1],\ [u]_{\alpha}\mbox{ is connected in }X\}.$$ Let $F^{1}_{USC}(X)$ denote the set of all normal and upper semi-continuous fuzzy sets $u:X\to[0,1]$, i.e., $$F^{1}_{USC}(X):=\{u\in F(X):[u]_{\alpha}\in C(X)\ \mbox{for all}\ \alpha\in[0,1]\}.$$ We introduce some subclasses of $F^{1}_{USC}(X)$, which will be discussed in this paper. 
Define $$\displaystyle F^{1}_{USCB}(X):=F^{1}_{USC}(X)\cap F_{USCB}(X),$$ $$\displaystyle F^{1}_{USCG}(X):=F^{1}_{USC}(X)\cap F_{USCG}(X),$$ $$\displaystyle F^{1}_{USCCON}(X):=F^{1}_{USC}(X)\cap F_{CON}(X),$$ $$\displaystyle F^{1}_{USCGCON}(X):=F^{1}_{USCG}(X)\cap F_{CON}(X).$$ Clearly, $$\displaystyle F^{1}_{USCB}(X)\subseteq F^{1}_{USCG}(X)\subseteq F^{1}_{USC}(X),$$ $$\displaystyle F^{1}_{USCGCON}(X)\subseteq F^{1}_{USCCON}(X).$$ Let $(X,d)$ be a metric space. We use $\bm{H}$ to denote the Hausdorff distance on $C(X)$ induced by $d$, i.e., $$\bm{H(U,V)}=\max\{H^{*}(U,V),\ H^{*}(V,U)\}$$ (1) for arbitrary $U,V\in C(X)$, where $$H^{*}(U,V)=\sup\limits_{u\in U}\,d\,(u,V)=\sup\limits_{u\in U}\inf\limits_{v\in V}d\,(u,v).$$ If there is no confusion, we also use $H$ to denote the Hausdorff distance on $C(X\times[0,1])$ induced by $\overline{d}$. The Hausdorff distance on $C(X)$ can be extended to $C(X)\cup\{\emptyset\}$ as follows: $$H(M_{1},M_{2})=\left\{\begin{array}[]{ll}H(M_{1},M_{2}),&\hbox{if}\ M_{1},M_{2}\in C(X),\\ +\infty,&\hbox{if}\ M_{1}=\emptyset\ \hbox{and}\ M_{2}\in C(X),\\ 0,&\hbox{if}\ M_{1}=M_{2}=\emptyset.\end{array}\right.$$ Remark 2.1. $\rho$ is said to be a metric on $Y$ if $\rho$ is a function from $Y\times Y$ into $\mathbb{R}$ satisfying positivity, symmetry and triangle inequality. At this time, $(Y,\rho)$ is said to be a metric space. $\rho$ is said to be an extended metric on $Y$ if $\rho$ is a function from $Y\times Y$ into $\mathbb{R}\cup\{+\infty\}$ satisfying positivity, symmetry and triangle inequality. At this time, $(Y,\rho)$ is said to be an extended metric space. We can see that for arbitrary metric space $(X,d)$, the Hausdorff distance $H$ on $K(X)$ induced by $d$ is a metric. So the Hausdorff distance $H$ on $K(X\times[0,1])$ induced by $\overline{d}$ on $X\times[0,1]$ is a metric. 
The Hausdorff distance $H$ on $C(X)$ induced by $d$ on $X$ is an extended metric, but probably not a metric, because $H(A,B)$ could be equal to $+\infty$ for certain metric space $X$ and $A,B\in C(X)$. The Hausdorff distance $H$ on $C(X)\cup\{\emptyset\}$ is an extended metric, but not a metric. Clearly, if $H$ on $C(X)$ induced by $d$ is not a metric, then $H$ on $C(X\times[0,1])$ induced by $\overline{d}$ is also not a metric. So the Hausdorff distance $H$ on $C(X\times[0,1])$ induced by $\overline{d}$ on $X\times[0,1]$ is an extended metric but probably not a metric. We can see that $H$ on $C(\mathbb{R}^{m})$ is an extended metric but not a metric, and then the same is $H$ on $C(\mathbb{R}^{m}\times[0,1])$. In the cases that the Hausdorff distance $H$ is a metric, we call the Hausdorff distance the Hausdorff metric. In the cases that the Hausdorff distance $H$ is an extended metric, we call the Hausdorff distance the Hausdorff extended metric. In this paper, for simplicity, we refer to both the Hausdorff extended metric and the Hausdorff metric as the Hausdorff metric. For $u\in F(X)$, define $$\displaystyle{\rm end}\,u:=\{(x,t)\in X\times[0,1]:u(x)\geq t\},$$ $$\displaystyle{\rm send}\,u:=\{(x,t)\in X\times[0,1]:u(x)\geq t\}\cap([u]_{0}\times[0,1]).$$ ${\rm end}\,u$ and ${\rm send}\,u$ are called the endograph and the sendograph of $u$, respectively. Let $u\in F(X)$. The following properties (i)-(iii) are equivalent: (i)  $u\in F_{USC}(X)$; (ii) ${\rm end}\,u$ is closed in $(X\times[0,1],\overline{d})$; (iii) ${\rm send}\,u$ is closed in $(X\times[0,1],\overline{d})$. (i)$\Rightarrow$(ii). Assume that (i) is true. To show that (ii) is true, let $\{(x_{n},\alpha_{n})\}$ be a sequence in ${\rm end}\,u$ which converges to $(x,\alpha)$ in $X\times[0,1]$, we only need to show that $(x,\alpha)\in{\rm end}\,u$. Since $u$ is upper semi-continuous, then $u(x)\geq\limsup_{n\to\infty}u(x_{n})\geq\lim_{n\to\infty}\alpha_{n}=\alpha$. Thus $(x,\alpha)\in{\rm end}\,u$. 
So (ii) is true. (ii)$\Rightarrow$(iii). Assume that (ii) is true. Note that $[u]_{0}\times[0,1]$ is closed in $X\times[0,1]$, then ${\rm send}\,u={\rm end}\,u\cap([u]_{0}\times[0,1])$ is closed in $X\times[0,1]$. So (iii) is true. (iii)$\Rightarrow$(i). Assume that (iii) is true. To show that (i) is true, let $\alpha\in[0,1]$ and suppose that $\{x_{n}\}$ is a sequence in $[u]_{\alpha}$ which converges to $x$ in $X$, we only need to show that $x\in[u]_{\alpha}$. Note that $\{(x_{n},\alpha)\}$ converges to $(x,\alpha)$ in $X\times[0,1]$, and that the sequence $\{(x_{n},\alpha)\}$ is in ${\rm send}\,u$. Hence from the closedness of ${\rm send}\,u$, it follows that $(x,\alpha)\in{\rm send}\,u$, which means that $x\in[u]_{\alpha}$. So (i) is true. Let $u\in F(X)$. Clearly $X\times\{0\}\subseteq{\rm end}\,u$. So ${\rm end}\,u\not=\emptyset$. We can see that ${\rm send}\,u=\emptyset$ if and only if $u=\emptyset_{F(X)}$. From the above discussions, we know that $u\in F_{USC}(X)$ if and only if ${\rm end}\,u\in C(X\times[0,1])$. Kloeden kloeden2 introduced the endograph metric $H_{\rm end}$. For $u,v\in F_{USC}(X)$, $$\displaystyle\bm{H_{\rm end}(u,v)}:=H({\rm end}\,u,{\rm end}\,v),$$ where $H$ is the Hausdorff metric on $C(X\times[0,1])$ induced by $\overline{d}$ on $X\times[0,1]$. Rojas-Medar and Román-Flores rojas introduced the $\Gamma$-convergence of a sequence of upper semi-continuous fuzzy sets based on the Kuratowski convergence of a sequence of sets in a metric space. Let $(X,d)$ be a metric space. Let $C$ be a set in $X$ and $\{C_{n}\}$ a sequence of sets in $X$. 
$\{C_{n}\}$ is said to Kuratowski converge to $C$ according to $(X,d)$, if $$C=\liminf_{n\rightarrow\infty}C_{n}=\limsup_{n\rightarrow\infty}C_{n},$$ where $$\displaystyle\liminf_{n\rightarrow\infty}C_{n}=\{x\in X:\ x=\lim\limits_{n\rightarrow\infty}x_{n},x_{n}\in C_{n}\},$$ $$\displaystyle\limsup_{n\rightarrow\infty}C_{n}=\{x\in X:\ x=\lim\limits_{j\rightarrow\infty}x_{n_{j}},x_{n_{j}}\in C_{n_{j}}\}=\bigcap\limits_{n=1}^{\infty}\overline{\bigcup\limits_{m\geq n}C_{m}}.$$ In this case, we will write $C=\lim^{(K)}_{n\to\infty}C_{n}$ according to $(X,d)$. If there is no confusion, we will not emphasize the metric space $(X,d)$, and will simply write that $\{C_{n}\}$ Kuratowski converges to $C$, or $C=\lim^{(K)}_{n\to\infty}C_{n}$. Remark 2.2. Theorem 5.2.10 in beer points out that, in a first countable Hausdorff topological space, a sequence of sets is Kuratowski convergent if and only if it is convergent in the Fell topology. A metric space is of course a first countable Hausdorff topological space. Definition 3.1.4 in kle gives the definitions of $\liminf C_{n}$, $\limsup C_{n}$ and $\lim C_{n}$ for a net of subsets $\{C_{n},n\in D\}$ in a topological space. When $\{C_{n},n=1,2,\ldots\}$ is a sequence of subsets of a metric space, $\liminf C_{n}$, $\limsup C_{n}$ and $\lim C_{n}$ according to Definition 3.1.4 in kle are $\liminf_{n\rightarrow\infty}C_{n}$, $\limsup_{n\rightarrow\infty}C_{n}$ and $\lim^{(K)}_{n\to\infty}C_{n}$ according to the above definitions, respectively. Let $u$, $u_{n}$, $n=1,2,\ldots$, be fuzzy sets in $F_{USC}(X)$. $\{u_{n}\}$ is said to $\Gamma$-converge to $u$, denoted by $u=\lim_{n\to\infty}^{(\Gamma)}u_{n}$, if ${\rm end}\,u=\lim_{n\to\infty}^{(K)}{\rm end}\,u_{n}$ according to $(X\times[0,1],\overline{d})$. The following Theorem 2.3 is a known conclusion which is useful in this paper. Theorem 2.3. Suppose that $C$, $C_{n}$ are sets in $C(X)$, $n=1,2,\ldots$.
Then $H(C_{n},C)\to 0$ implies that $\lim_{n\to\infty}^{(K)}C_{n}\,=C$. Remark 2.4. Theorem 2.3 implies that for a sequence $\{u_{n}\}$ in $F_{USC}(X)$ and an element $u$ in $F_{USC}(X)$, if $H_{\rm end}(u_{n},u)\to 0$ as $n\to\infty$, then $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$. However, the converse is false. See Example 4.1 in huang . Theorem 2.3 can be shown in a similar fashion to Theorem 4.1 in huang . In Theorem 2.3, we exclude the case that $C=\emptyset$. Remark 2.5. Let $\{u_{n}\}$ be a sequence in $F_{USC}(X)$ and let $\{v_{n}\}$ be a subsequence of $\{u_{n}\}$. We can see that $$\liminf_{n\to\infty}u_{n}\subseteq\liminf_{n\to\infty}v_{n}\subseteq\limsup_{n\to\infty}v_{n}\subseteq\limsup_{n\to\infty}u_{n}.$$ So if there is a $u\in F_{USC}(X)$ with $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, then $\lim_{n\to\infty}^{(\Gamma)}v_{n}=u$. Clearly, $\lim_{n\to\infty}^{(\Gamma)}v_{n}=u$ does not necessarily imply that $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$. A simple example is given below. For $n=1,2,\ldots$, let $v_{n}=\widehat{1}_{F(\mathbb{R})}$. For $n=1,2,\ldots$, let $u_{n}\in F_{USC}(\mathbb{R})$ defined by $$u_{n}=\left\{\begin{array}[]{ll}\widehat{1}_{F(\mathbb{R})},&n\mbox{ is odd,}\\ \widehat{3}_{F(\mathbb{R})},&n\mbox{ is even.}\end{array}\right.$$ Then $\{v_{n}\}$ is a subsequence of $\{u_{n}\}$. We can see that $\lim_{n\to\infty}^{(\Gamma)}v_{n}=\widehat{1}_{F(\mathbb{R})}$. However $\lim_{n\to\infty}^{(\Gamma)}u_{n}$ does not exist because $$\liminf_{n\to\infty}u_{n}=\mathbb{R}\times\{0\}\subsetneqq{\rm end}\,{\widehat{1}_{F(\mathbb{R})}}\vee{\rm end}\,{\widehat{3}_{F(\mathbb{R})}}=\limsup_{n\to\infty}u_{n}.$$ In this paper, for a metric space $(Y,\rho)$ and a subset $S$ in $Y$, we still use $\rho$ to denote the induced metric on $S$ by $\rho$. 3 Representation theorems for various kinds of fuzzy sets In this section, we give representation theorems for various kinds of fuzzy sets. These representation theorems are useful in this paper. 
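Before turning to the representation theorems, the notions of Section 2 can be illustrated on a finite space. The sketch below is ours, not the paper's: $X=\{0,1,2\}\subset\mathbb{R}$ with $d(x,y)=|x-y|$, fuzzy sets encoded as dicts, and endographs discretized on a membership grid of step $1/N$, which suffices here because the membership values lie on the grid.

```python
def hausdorff(A, B, d):
    """H(A,B) = max(sup_{a in A} d(a,B), sup_{b in B} d(b,A))
    for finite nonempty A, B and a caller-supplied metric d."""
    return max(max(min(d(a, b) for b in B) for a in A),
               max(min(d(a, b) for a in A) for b in B))

def endograph(u, N=100):
    """Discretized endograph of a fuzzy set u on a finite X
    (dict x -> u(x)): grid points (x, k/N) with u(x) >= k/N.
    The level t = 0 is always included, as in end u ⊇ X × {0}."""
    return {(x, k / N) for x in u for k in range(N + 1) if u[x] >= k / N}

def dbar(p, q):
    """The metric on X × [0,1]: d(x,y) + |alpha - beta|, with d = |.|."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

u = {0.0: 0.0, 1.0: 1.0, 2.0: 0.0}   # roughly the fuzzy point 1^
v = {0.0: 0.0, 1.0: 0.5, 2.0: 0.0}   # same support, membership 0.5
h_end = hausdorff(endograph(u), endograph(v), dbar)
```

Here ${\rm end}\,v\subseteq{\rm end}\,u$, so $H_{\rm end}(u,v)$ reduces to $H^{*}({\rm end}\,u,{\rm end}\,v)=0.5$, attained at the point $(1,1)$.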
The following representation theorem should be a known conclusion. In this paper we assume that $\sup\emptyset=0$. Theorem 3.1. Let $Y$ be a nonempty set. If $u\in F(Y)$, then for all $\alpha\in(0,1]$, $[u]_{\alpha}=\cap_{\beta<\alpha}[u]_{\beta}.$ Conversely, suppose that $\{v_{\alpha}:\alpha\in(0,1]\}$ is a family of sets in $Y$ with $v_{\alpha}=\cap_{\beta<\alpha}v_{\beta}$ for all $\alpha\in(0,1]$. Define $u\in F(Y)$ by $$u(x):=\sup\{\alpha:x\in v_{\alpha}\}$$ for each $x\in Y$. Then $u$ is the unique fuzzy set in $Y$ satisfying that $[u]_{\alpha}=v_{\alpha}$ for all $\alpha\in(0,1]$. Proof. Let $u\in F(Y)$ and $\alpha\in(0,1]$. For each $x\in Y$, $x\in[u]_{\alpha}\Leftrightarrow u(x)\geq\alpha\Leftrightarrow$ for each $\beta<\alpha$, $u(x)\geq\beta$ $\Leftrightarrow$ for each $\beta<\alpha$, $x\in[u]_{\beta}$. So $[u]_{\alpha}=\cap_{\beta<\alpha}[u]_{\beta}.$ Conversely, suppose that $\{v_{\alpha}:\alpha\in(0,1]\}$ is a family of sets in $Y$ with $v_{\alpha}=\cap_{\beta<\alpha}v_{\beta}$ for all $\alpha\in(0,1]$. Let $u\in F(Y)$ defined by $$u(x):=\sup\{\alpha:x\in v_{\alpha}\}$$ for each $x\in Y$. Firstly, we show that for each $\alpha\in(0,1]$, $[u]_{\alpha}=v_{\alpha}$. To do this, let $\alpha\in(0,1]$. We only need to verify that $[u]_{\alpha}\supseteq v_{\alpha}$ and $[u]_{\alpha}\subseteq v_{\alpha}$. Let $x\in v_{\alpha}$. Then clearly $u(x)\geq\alpha$, i.e. $x\in[u]_{\alpha}$. So $[u]_{\alpha}\supseteq v_{\alpha}$. Let $x\in[u]_{\alpha}$. Then $\sup\{\beta:x\in v_{\beta}\}=u(x)\geq\alpha$. Hence there exists a sequence $\{\beta_{n},\ n=1,2,\ldots\}$ such that $1\geq\beta_{n}\geq\alpha-1/n$ and $x\in v_{\beta_{n}}$. Set $\gamma=\sup_{n=1}^{+\infty}\beta_{n}$. Then $1\geq\gamma\geq\alpha$ and thus $x\in\cap_{n=1}^{+\infty}v_{\beta_{n}}=v_{\gamma}\subseteq v_{\alpha}$. So $[u]_{\alpha}\subseteq v_{\alpha}$. Now we show the uniqueness of $u$. 
To do this, assume that $v$ is a fuzzy set in $Y$ satisfying that $[v]_{\alpha}=v_{\alpha}$ for all $\alpha\in(0,1]$. Then for each $x\in Y$, $$v(x)=\sup\{\alpha:x\in[v]_{\alpha}\}=\sup\{\alpha:x\in v_{\alpha}\}=u(x).$$ So $u=v$. ∎ Remark 3.2. We could not find the original reference for Theorem 3.1, so we give a proof here to keep this paper self-contained. Theorem 3.1 and its proof are essentially the same as Theorem 7.10 on p. 27 of chinaXiv:202110.00083v4 and its proof, since the uniqueness of $u$ is obvious. From Theorem 3.1, the representation theorems below for $F_{USC}(X)$, $F^{1}_{USC}(X)$, $F_{USCG}(X)$, $F_{CON}(X)$, $F_{USCB}(X)$, and $F^{1}_{USCB}(X)$ follow immediately. Proposition 3.3. Let $(X,d)$ be a metric space. If $u\in F_{USC}(X)$ (respectively, $u\in F^{1}_{USC}(X)$, $u\in F_{USCG}(X)$, $u\in F_{CON}(X)$), then (i)  $[u]_{\alpha}\in C(X)\cup\{\emptyset\}$ (respectively, $[u]_{\alpha}\in C(X)$, $[u]_{\alpha}\in K(X)\cup\{\emptyset\}$, $[u]_{\alpha}$ is connected in $(X,d)$) for all $\alpha\in(0,1]$, and (ii)  $[u]_{\alpha}=\bigcap_{\beta<\alpha}[u]_{\beta}$ for all $\alpha\in(0,1]$. Conversely, suppose that the family of sets $\{v_{\alpha}:\alpha\in(0,1]\}$ satisfies conditions (i) and (ii). Define $u\in F(X)$ by $u(x):=\sup\{\alpha:x\in v_{\alpha}\}$ for each $x\in X$. Then $u$ is the unique fuzzy set in $X$ satisfying that $[u]_{\alpha}=v_{\alpha}$ for each $\alpha\in(0,1]$. Moreover, $u\in F_{USC}(X)$ (respectively, $u\in F^{1}_{USC}(X)$, $u\in F_{USCG}(X)$, $u\in F_{CON}(X)$). Proof. The proof is routine. We only show the case of $F_{USC}(X)$. The other cases can be verified similarly. If $u\in F_{USC}(X)$, then clearly (i) is true. From Theorem 3.1, (ii) is true. Conversely, suppose that the family of sets $\{v_{\alpha}:\alpha\in(0,1]\}$ satisfies conditions (i) and (ii). Define $u\in F(X)$ by $u(x):=\sup\{\alpha:x\in v_{\alpha}\}$ for each $x\in X$.
Then by Theorem 3.1, $u$ is the unique fuzzy set in $X$ satisfying that $[u]_{\alpha}=v_{\alpha}$ for each $\alpha\in(0,1]$. Since $\{[u]_{\alpha},\alpha\in(0,1]\}$ satisfies condition (i), $u\in F_{USC}(X)$. ∎ Proposition 3.4. Let $(X,d)$ be a metric space. If $u\in F_{USCB}(X)$ (respectively, $u\in F^{1}_{USCB}(X)$), then (i)  $[u]_{\alpha}\in K(X)\cup\{\emptyset\}$ (respectively, $[u]_{\alpha}\in K(X)$) for all $\alpha\in[0,1]$, (ii)  $[u]_{\alpha}=\bigcap_{\beta<\alpha}[u]_{\beta}$ for all $\alpha\in(0,1]$, and (iii)  $[u]_{0}=\overline{\bigcup_{\beta>0}[u]_{\beta}}$. Conversely, suppose that the family of sets $\{v_{\alpha}:\alpha\in[0,1]\}$ satisfies conditions (i) through (iii). Define $u\in F(X)$ by $u(x):=\sup\{\alpha:x\in v_{\alpha}\}$ for each $x\in X$. Then $u$ is the unique fuzzy set in $X$ satisfying that $[u]_{\alpha}=v_{\alpha}$ for each $\alpha\in[0,1]$. Moreover, $u\in F_{USCB}(X)$ (respectively, $u\in F^{1}_{USCB}(X)$). Proof. The proof is routine. We only show the case of $F_{USCB}(X)$. The case of $F^{1}_{USCB}(X)$ can be verified similarly. If $u\in F_{USCB}(X)$, then clearly (i) is true. By Theorem 3.1, (ii) is true. From the definition of $[u]_{0}$, (iii) is true. Conversely, suppose that the family of sets $\{v_{\alpha}:\alpha\in[0,1]\}$ satisfies conditions (i) through (iii). Define $u\in F(X)$ by $u(x):=\sup\{\alpha:x\in v_{\alpha}\}$ for each $x\in X$. Then by Theorem 3.1, $u$ is the unique fuzzy set in $X$ satisfying that $[u]_{\alpha}=v_{\alpha}$ for each $\alpha\in(0,1]$. Clearly $[u]_{0}=\overline{\bigcup_{\beta>0}[u]_{\beta}}=\overline{\bigcup_{\beta>0}v_{\beta}}=v_{0}$. Since $\{[u]_{\alpha},\alpha\in[0,1]\}$ satisfies condition (i), $u\in F_{USCB}(X)$. ∎ Similarly, we can obtain the representation theorems for $F_{USCCON}(X)$, $F_{USCGCON}(X)$, $F^{1}_{USCCON}(X)$, etc. Based on these representation theorems, we can define a fuzzy set, or a fuzzy set of a certain type, by giving the family of its $\alpha$-cuts.
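As a finite illustration of these representation theorems (ours; finitely many levels stand in for $\alpha\in(0,1]$, and the supplied family is nested, so the intersection condition (ii) holds automatically at the given levels), the fuzzy set can be recovered from its cuts via $u(x)=\sup\{\alpha:x\in v_{\alpha}\}$:

```python
def from_cuts(cuts, X):
    """Given a nested family {alpha: v_alpha} on a finite set X,
    recover u via u(x) = sup{alpha : x in v_alpha}, where the sup
    runs over the finitely many supplied levels (sup of the empty
    set is taken to be 0, as in the paper's convention)."""
    return {x: max((a for a, v in cuts.items() if x in v), default=0.0)
            for x in X}

X = {0, 1, 2}
# Nested cuts: v_{1.0} ⊆ v_{0.6} ⊆ v_{0.3}.
cuts = {0.3: {0, 1, 2}, 0.6: {1, 2}, 1.0: {1}}
u = from_cuts(cuts, X)
```

Taking an $\alpha$-cut of the recovered `u` reproduces the corresponding member of the family, e.g. $\{x:u(x)\geq 0.6\}=\{1,2\}$.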
In the sequel, we will state directly that what we define is a fuzzy set, or a fuzzy set of a certain type, without naming the representation theorem being used, since it is easy to see which one applies. 4 Characterization of compactness in $(F_{USCG}(X),H_{\rm end})$ In this section, we give the characterizations of relatively compact sets, totally bounded sets, and compact sets in $(F_{USCG}(X),H_{\rm end})$, respectively. We point out that these results improve the corresponding characterizations of relatively compact sets, totally bounded sets, and compact sets in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$ given in our previous work huang . 1. A subset $Y$ of a topological space $Z$ is said to be compact if for every set $I$ and every family of open sets $O_{i}$, $i\in I$, such that $Y\subseteq\bigcup_{i\in I}O_{i}$, there exists a finite subfamily $O_{i_{1}}$, $O_{i_{2}}$, …, $O_{i_{n}}$ such that $Y\subseteq O_{i_{1}}\cup O_{i_{2}}\cup\ldots\cup O_{i_{n}}$. In the case of a metric topology, the criterion for compactness becomes that any sequence in $Y$ has a subsequence convergent in $Y$. 2. A relatively compact subset $Y$ of a topological space $Z$ is a subset with compact closure. In the case of a metric topology, the criterion for relative compactness becomes that any sequence in $Y$ has a subsequence convergent in $Z$. 3. Let $(X,d)$ be a metric space. A set $U$ in $X$ is totally bounded if and only if, for each $\varepsilon>0$, it contains a finite $\varepsilon$-approximation, where an $\varepsilon$-approximation to $U$ is a subset $S$ of $U$ such that $d(x,S)<\varepsilon$ for each $x\in U$. An $\varepsilon$-approximation to $U$ is also called an $\varepsilon$-net of $U$. Let $(X,d)$ be a metric space. If a set $U$ is compact in $(X,d)$, then $U$ is relatively compact in $(X,d)$, which in turn implies that $U$ is totally bounded in $(X,d)$. Let $Y$ be a subset of $X$ and $A$ a subset of $Y$. Then $A$ is totally bounded in $(Y,d)$ if and only if $A$ is totally bounded in $(X,d)$.
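The $\varepsilon$-approximation in item 3 can be illustrated by a small greedy construction on a finite subset of the real line. This is a sketch with our own toy data; total boundedness of course concerns arbitrary, possibly infinite, sets:

```python
# A greedy epsilon-net construction for a finite set of points on the real
# line, illustrating the epsilon-approximation in item 3 above.

def eps_net(points, eps):
    """Return a subset S of points with d(x, S) < eps for every x."""
    net = []
    for x in sorted(points):
        # keep x only if it is at distance >= eps from every chosen point;
        # a skipped x is then within eps of some point already in the net
        if all(abs(x - s) >= eps for s in net):
            net.append(x)
    return net

U = [0.0, 0.1, 0.2, 1.0, 1.05, 2.5]
S = eps_net(U, eps=0.3)
assert all(min(abs(x - s) for s in S) < 0.3 for x in U)
print(S)  # [0.0, 1.0, 2.5]
```

By construction every skipped point lies within $\varepsilon$ of a kept one, so the returned set is a finite $\varepsilon$-net of $U$.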
Theorem 4.1. huang719 Let $(X,d)$ be a metric space and $\mathcal{D}\subseteq K(X)$. Then $\mathcal{D}$ is totally bounded in $(K(X),H)$ if and only if $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is totally bounded in $(X,d)$. Theorem 4.2. greco Let $(X,d)$ be a metric space and $\mathcal{D}\subseteq K(X)$. Then $\mathcal{D}$ is relatively compact in $(K(X),H)$ if and only if $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is relatively compact in $(X,d)$. Theorem 4.3. huang719 Let $(X,d)$ be a metric space and $\mathcal{D}\subseteq K(X)$. Then the following are equivalent: (i)  $\mathcal{D}$ is compact in $(K(X),H)$; (ii)  $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is relatively compact in $(X,d)$ and $\mathcal{D}$ is closed in $(K(X),H)$; (iii)  $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is compact in $(X,d)$ and $\mathcal{D}$ is closed in $(K(X),H)$. Let $u\in F_{USC}(X)$. Define $\bm{u^{e}}={\rm end}\,u$. Let $A$ be a subset of $F_{USC}(X)$. Define $\bm{A^{e}}=\{u^{e}:u\in A\}$. Clearly $F_{USC}(X)^{e}\subseteq C(X\times[0,1])$. Define $g:(F_{USC}(X),H_{\rm end})\to(C(X\times[0,1]),H)$ given by $g(u)={\rm end}\,u$. Then 1. $g$ is an isometric embedding of $(F_{USC}(X),H_{\rm end})$ in $(C(X\times[0,1]),H)$, 2. $g(F_{USC}(X))=F_{USC}(X)^{e}$, and 3. $(F_{USC}(X),H_{\rm end})$ is isometric to $(F_{USC}(X)^{e},H)$. The following representation theorem for $F_{USC}(X)^{e}$ follows immediately from Proposition 3.3. Proposition 4.4. Let $U$ be a subset of $X\times[0,1]$. Then $U\in F_{USC}(X)^{e}$ if and only if the following properties (i)-(iii) are true. (i) For each $\alpha\in(0,1]$, $\langle U\rangle_{\alpha}\in C(X)\cup\{\emptyset\}$. (ii)  For each $\alpha\in(0,1]$, $\langle U\rangle_{\alpha}=\bigcap_{\beta<\alpha}\langle U\rangle_{\beta}$. (iii)  $\langle U\rangle_{0}=X$. Proposition 4.5. Let $U\in C(X\times[0,1])$. Then the following (i) and (ii) are equivalent. 
(i) For each $\alpha$ with $0<\alpha\leq 1$, $\langle U\rangle_{\alpha}=\bigcap_{\beta<\alpha}\langle U\rangle_{\beta}$. (ii) For each $\alpha,\beta$ with $0\leq\beta<\alpha\leq 1$, $\langle U\rangle_{\alpha}\subseteq\langle U\rangle_{\beta}$. Proof. The proof is routine. (i)$\Rightarrow$(ii) is obvious. Suppose that (ii) is true. To show that (i) is true, let $\alpha\in(0,1]$. From (ii), $\langle U\rangle_{\alpha}\subseteq\bigcap_{\beta<\alpha}\langle U\rangle_{\beta}$. So we only need to prove that $\langle U\rangle_{\alpha}\supseteq\bigcap_{\beta<\alpha}\langle U\rangle_{\beta}$. To do this, let $x\in\bigcap_{\beta<\alpha}\langle U\rangle_{\beta}$. This means that $(x,\beta)\in U$ for $\beta\in[0,\alpha)$. Since $\lim_{\beta\to\alpha-}\overline{d}((x,\beta),(x,\alpha))=0$, from the closedness of $U$, it follows that $(x,\alpha)\in U$. Hence $x\in\langle U\rangle_{\alpha}$. Thus $\langle U\rangle_{\alpha}\supseteq\bigcap_{\beta<\alpha}\langle U\rangle_{\beta}$ from the arbitrariness of $x$ in $\bigcap_{\beta<\alpha}\langle U\rangle_{\beta}$. So (ii)$\Rightarrow$(i). ∎ Proposition 4.6. Let $U\in C(X\times[0,1])$. Then $U\in F_{USC}(X)^{e}$ if and only if $U$ has the following properties: (i)  for each $\alpha,\beta$ with $0\leq\beta<\alpha\leq 1$, $\langle U\rangle_{\alpha}\subseteq\langle U\rangle_{\beta}$, and (ii)  $\langle U\rangle_{0}=X$. Proof. Since $U\in C(X\times[0,1])$, clearly $\langle U\rangle_{\alpha}\in C(X)\cup\{\emptyset\}$ for all $\alpha\in[0,1]$. Thus the desired result follows immediately from Propositions 4.4 and 4.5. ∎ As a shorthand, we denote the sequence $x_{1},x_{2},\ldots,x_{n},\ldots$ by $\{x_{n}\}$. Proposition 4.7. $F_{USC}(X)^{e}$ is a closed subset of $(C(X\times[0,1]),H)$. Proof. Let $\{u_{n}^{e}:n=1,2,\ldots\}$ be a sequence in $F_{USC}(X)^{e}$ with $\{u_{n}^{e}\}$ converging to $U$ in $(C(X\times[0,1]),H)$. To show the desired result, we only need to show that $U\in F_{USC}(X)^{e}$.
We claim that (i)  for each $\alpha,\beta$ with $0\leq\beta<\alpha\leq 1$, $\langle U\rangle_{\alpha}\subseteq\langle U\rangle_{\beta}$; (ii)  $\langle U\rangle_{0}=X$. To show (i), let $\alpha,\beta$ be in $[0,1]$ with $\beta<\alpha$, and let $x\in\langle U\rangle_{\alpha}$, i.e. $(x,\alpha)\in U$. By Theorem 2.3, $\lim_{n\to\infty}^{(K)}u_{n}^{e}\,=U$. Then there is a sequence $\{(x_{n},\alpha_{n})\}$ satisfying $(x_{n},\alpha_{n})\in u_{n}^{e}$ for $n=1,2,\ldots$ and $\lim_{n\to\infty}\overline{d}((x_{n},\alpha_{n}),(x,\alpha))=0$. Hence there is an $N$ such that $\alpha_{n}>\beta$ for all $n\geq N$. Thus $(x_{n},\beta)\in u_{n}^{e}$ for $n\geq N$. Note that $\lim_{n\to\infty}\overline{d}((x_{n},\beta),(x,\beta))=0$. Then $(x,\beta)\in\lim_{n\to\infty}^{(K)}u_{n}^{e}=U$. This means that $x\in\langle U\rangle_{\beta}$. So (i) is true. Clearly $\langle U\rangle_{0}\subseteq X$. From $\lim_{n\to\infty}^{(K)}u_{n}^{e}\,=U$ and $\langle u_{n}^{e}\rangle_{0}=X$, we have that $\langle U\rangle_{0}\supseteq X$. Thus $\langle U\rangle_{0}=X$. So (ii) is true. By Proposition 4.6, (i) and (ii) imply that $U\in F_{USC}(X)^{e}$. ∎ Remark 4.8. Let $a\in[0,1]$. From Proposition 5.1, we can deduce that $F^{{}^{\prime}a}_{USC}(X)$ is a closed subset of $(F_{USC}(X),H_{\rm end})$. Then by Proposition 4.7, we have that $F^{{}^{\prime}a}_{USC}(X)^{e}$ is a closed subset of $(C(X\times[0,1]),H)$. We use $(\widetilde{X},\widetilde{d})$ to denote the completion of $(X,d)$. We see $(X,d)$ as a subspace of $(\widetilde{X},\widetilde{d})$. If there is no confusion, we also use $H$ to denote the Hausdorff metric on $C(\widetilde{X})$ induced by $\widetilde{d}$. We also use $H$ to denote the Hausdorff metric on $C(\widetilde{X}\times[0,1])$ induced by $\overline{\widetilde{d}}$. We also use $H_{\rm end}$ to denote the endograph metric on $F_{USC}(\widetilde{X})$ given by using $H$ on $C(\widetilde{X}\times[0,1])$. $F(X)$ can be naturally embedded into $F(\widetilde{X})$.
An embedding $\bm{j}$ from $F(X)$ to $F(\widetilde{X})$ is defined as follows. Let $u\in F(X)$. We can define $j(u)\in F(\widetilde{X})$ as $$j(u)(t)=\left\{\begin{array}[]{ll}u(t),&t\in X,\\ 0,&t\in\widetilde{X}\setminus X.\end{array}\right.$$ Let $U\subseteq X$. If $U$ is compact in $(X,d)$, then $U$ is compact in $(\widetilde{X},\widetilde{d})$. So if $u\in F_{USCG}(X)$, then $j(u)\in F_{USCG}(\widetilde{X})$, because $[j(u)]_{\alpha}=[u]_{\alpha}\in K(\widetilde{X})\cup\{\emptyset\}$ for each $\alpha\in(0,1]$. We can see that for $u,v\in F_{USCG}(X)$, $H_{\rm end}(u,v)=H_{\rm end}(j(u),j(v))$. So $j|_{F_{USCG}(X)}$ is an isometric embedding of $(F_{USCG}(X),H_{\rm end})$ in $(F_{USCG}(\widetilde{X}),H_{\rm end})$. Since $(F_{USCG}(X),H_{\rm end})$ can be embedded isometrically in $(F_{USCG}(\widetilde{X}),H_{\rm end})$, in the sequel, we treat $(F_{USCG}(X),H_{\rm end})$ as a metric subspace of $(F_{USCG}(\widetilde{X}),H_{\rm end})$ by identifying $u$ in $F_{USCG}(X)$ with $j(u)$ in $F_{USCG}(\widetilde{X})$. So a subset $U$ of $F_{USCG}(X)$ can be seen as a subset of $F_{USCG}(\widetilde{X})$. Suppose that $U$ is a subset of $F_{USC}(X)$ and $\alpha\in[0,1]$. For writing convenience, we denote 1. $U(\alpha):=\bigcup_{u\in U}[u]_{\alpha}$, and 2. $U_{\alpha}:=\{[u]_{\alpha}:u\in U\}$. Here we mention that an empty union is $\emptyset$. Lemma 4.9. Let $U$ be a subset of $F_{USCG}(X)$. If $U$ is totally bounded in $(F_{USCG}(X),H_{\rm end})$, then $U(\alpha)$ is totally bounded in $(X,d)$ for each $\alpha\in(0,1]$. Proof. The proof is similar to that of the necessity part of Theorem 7.8 in huang719 . Let $\alpha\in(0,1]$. To show that $U(\alpha)$ is totally bounded in $X$, we only need to show that each sequence in $U(\alpha)$ has a Cauchy subsequence. Let $\{x_{n}\}$ be a sequence in $U(\alpha)$. Then there is a sequence $\{u_{n}\}$ in $U$ with $x_{n}\in[u_{n}]_{\alpha}$ for $n=1,2,\ldots$.
Since $U$ is totally bounded in $(F_{USCG}(X),H_{\rm end})$, $\{u_{n}\}$ has a Cauchy subsequence $\{u_{n_{l}}\}$ in $(F_{USCG}(X),H_{\rm end})$. So given $\varepsilon\in(0,\alpha)$, there is a $K(\varepsilon)\in\mathbb{N}$ such that $$H_{\rm end}(u_{n_{l}},u_{n_{K}})<\varepsilon$$ for all $l\geq K$. Thus $$H^{*}([u_{n_{l}}]_{\alpha},[u_{n_{K}}]_{\alpha-\varepsilon})<\varepsilon$$ (2) for all $l\geq K$. From (2) and the arbitrariness of $\varepsilon$, $\bigcup_{l=1}^{+\infty}[u_{n_{l}}]_{\alpha}$ is totally bounded in $(X,d)$. Thus $\{x_{n_{l}}\}$, which is a subsequence of $\{x_{n}\}$, has a Cauchy subsequence, and so does $\{x_{n}\}$. ∎ Remark 4.10. It is easy to see that for a totally bounded set $U$ in $(F_{USCG}(X),H_{\rm end})$ and $\alpha\in(0,1]$, $U(\alpha)=\emptyset$ is possible even if $U\not=\emptyset$. For $D\subseteq X\times[0,1]$ and $\alpha\in[0,1]$, define $\bm{\langle D\rangle_{\alpha}}:=\{x:(x,\alpha)\in D\}$. Let $u\in F(X)$ and $0\leq r\leq t\leq 1$. We use the symbol $\bm{{\rm end}_{r}^{t}\,u}$ to denote the subset of ${\rm end}\,u$ given by $${\rm end}_{r}^{t}\,u:={\rm end}\,u\cap([u]_{r}\times[r,t]).$$ For simplicity, we write ${\rm end}_{r}^{1}\,u$ as ${\rm end}_{r}\,u$. We can see that ${\rm end}_{0}\,u={\rm send}\,u$. Theorem 4.11. Let $U$ be a subset of $F_{USCG}(X)$. Then $U$ is relatively compact in $(F_{USCG}(X),H_{\rm end})$ if and only if $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,1]$. Proof. Necessity. Suppose that $U$ is relatively compact in $(F_{USCG}(X),H_{\rm end})$. Let $\alpha\in(0,1]$. Then by Lemma 4.9, $U(\alpha)$ is totally bounded. Hence $U(\alpha)$ is relatively compact in $(\widetilde{X},\widetilde{d})$. To show that $U(\alpha)$ is relatively compact in $(X,d)$, we proceed by contradiction. If this were not the case, there would exist a sequence $\{x_{n}\}$ in $U(\alpha)$ such that $\{x_{n}\}$ converges to $x\in\widetilde{X}\setminus X$ in $(\widetilde{X},\widetilde{d})$.
Assume that $x_{n}\in[u_{n}]_{\alpha}$ and $u_{n}\in U$, $n=1,2,\ldots$. From the relative compactness of $U$, there is a subsequence $\{u_{n_{k}}\}$ of $\{u_{n}\}$ such that $\{u_{n_{k}}\}$ converges to $u\in F_{USCG}(X)$. Since $F_{USCG}(X)$ can be seen as a subspace of $F_{USCG}(\widetilde{X})$, we obtain that $\{u_{n_{k}}\}$ converges to $u$ in $(F_{USCG}(\widetilde{X}),H_{\rm end})$. By Theorem 2.3, $\lim_{k\to\infty}^{(K)}u_{n_{k}}^{e}\,=u^{e}$ according to $(\widetilde{X}\times[0,1],\overline{\widetilde{d}})$. Notice that $(x_{n_{k}},\alpha)\in u_{n_{k}}^{e}$ for $k=1,2,\ldots$, and $\{(x_{n_{k}},\alpha)\}$ converges to $(x,\alpha)$ in $(\widetilde{X}\times[0,1],\overline{\widetilde{d}})$. Thus $(x,\alpha)\in u^{e}$, which contradicts $x\in\widetilde{X}\setminus X$. The necessity part of Theorem 7.10 in huang719 can be verified in a manner similar to the necessity part of this theorem. Sufficiency.  Suppose that $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,1]$. To show that $U$ is relatively compact in $(F_{USCG}(X),H_{\rm end})$, we only need to show that each sequence in $U$ has a convergent subsequence in $(F_{USCG}(X),H_{\rm end})$. Let $\{u_{n}\}$ be a sequence in $U$. Suppose first that $\liminf_{n\to\infty}S_{u_{n}}=0$; that is, there is a subsequence $\{u_{n_{k}}\}$ of $\{u_{n}\}$ such that $\lim_{k\to\infty}S_{u_{n_{k}}}=0$. Then clearly $H_{\rm end}(u_{n_{k}},\emptyset_{F(X)})=S_{u_{n_{k}}}\to 0$ as $k\to\infty$. Since $\emptyset_{F(X)}\in F_{USCG}(X)$, $\{u_{n_{k}}\}$ is a convergent subsequence in $(F_{USCG}(X),H_{\rm end})$. Suppose now that $\liminf_{n\to\infty}S_{u_{n}}>0$. Then there is a $\xi>0$ and an $N\in\mathbb{N}$ such that $[u_{n}]_{\xi}\not=\emptyset$ for all $n\geq N$. First we claim the following property: (a) Let $\alpha\in(0,1]$ and $S$ be a subset of $U$ with $[u]_{\alpha}\not=\emptyset$ for each $u\in S$. Then $\{{\rm end}_{\alpha}u:u\in S\}$ is a relatively compact set in $(K(X\times[\alpha,1]),H)$.
It can be seen that for each $u\in S$, ${\rm end}_{\alpha}\,u\in K(X\times[\alpha,1])$. As $U(\alpha)$ is relatively compact in $(X,d)$, $U(\alpha)\times[\alpha,1]$ is relatively compact in $(X\times[\alpha,1],\overline{d})$. Since $\bigcup_{u\in S}{\rm end}_{\alpha}u$ is a subset of $U(\alpha)\times[\alpha,1]$, $\bigcup_{u\in S}{\rm end}_{\alpha}u$ is also a relatively compact set in $(X\times[\alpha,1],\overline{d})$. Thus by Theorem 4.2, $\{{\rm end}_{\alpha}u:u\in S\}$ is relatively compact in $(K(X\times[\alpha,1]),H)$. So claim (a) is true. Take a sequence $\{\alpha_{k},\ k=1,2,\ldots\}$ satisfying $0<\alpha_{k+1}<\alpha_{k}\leq\min\{\xi,\frac{1}{k}\}$ for $k=1,2,\ldots$. We can see that $\alpha_{k}\to 0$ as $k\to\infty$. By claim (a), $\{{\rm end}_{\alpha_{1}}\,u_{n}:n=N,N+1,\ldots\}$ is relatively compact in $(K(X\times[\alpha_{1},1]),H)$. So there is a subsequence $\{u_{n}^{(1)}\}$ of $\{u_{n}:n\geq N\}$ and $v^{1}\in K(X\times[\alpha_{1},1])$ such that $H({\rm end}_{\alpha_{1}}\,u_{n}^{(1)},v^{1})\to 0$. Clearly, $\{u_{n}^{(1)}\}$ is also a subsequence of $\{u_{n}\}$. Again using claim (a), $\{{\rm end}_{\alpha_{2}}\,u_{n}^{(1)}\}$ is relatively compact in $(K(X\times[\alpha_{2},1]),H)$. So there is a subsequence $\{u_{n}^{(2)}\}$ of $\{u_{n}^{(1)}\}$ and $v^{2}\in K(X\times[\alpha_{2},1])$ such that $H({\rm end}_{\alpha_{2}}\,u_{n}^{(2)},v^{2})\to 0$. Repeating the above procedure, we can obtain $\{u_{n}^{(k)}\}$ and $v^{k}\in K(X\times[\alpha_{k},1])$, $k=1,2,\ldots$, such that for each $k=1,2,\ldots$, $\{u_{n}^{(k+1)}\}$ is a subsequence of $\{u_{n}^{(k)}\}$ and $H({\rm end}_{\alpha_{k}}\,u_{n}^{(k)},v^{k})\to 0$. We claim that (b) Let $k_{1}$ and $k_{2}$ be in $\mathbb{N}$ with $k_{1}<k_{2}$.
Then (i)  $\langle v^{k_{1}}\rangle_{\alpha_{k_{1}}}\subseteq\langle v^{k_{2}}\rangle_{\alpha_{k_{1}}}$, (ii) $\langle v^{k_{1}}\rangle_{\alpha}=\langle v^{k_{2}}\rangle_{\alpha}$ when $\alpha\in(\alpha_{k_{1}},1]$, (iii) $v^{k}\subseteq v^{k+1}$ for $k=1,2,\ldots$. Note that $\{u_{n}^{(k_{2})}\}$ is a subsequence of $\{u_{n}^{(k_{1})}\}$ and that $\alpha_{k_{2}}<\alpha_{k_{1}}$. Thus by Theorem 2.3, for each $\alpha\in[\alpha_{k_{1}},1]$, $$\displaystyle\langle v^{k_{1}}\rangle_{\alpha}\times\{\alpha\}$$ $$\displaystyle=\liminf_{n\to\infty}{\rm end}_{\alpha_{k_{1}}}\,u_{n}^{(k_{1})}\cap(X\times\{\alpha\})$$ $$\displaystyle\subseteq\liminf_{n\to\infty}{\rm end}_{\alpha_{k_{2}}}\,u_{n}^{(k_{2})}\cap(X\times\{\alpha\})$$ $$\displaystyle=\langle v^{k_{2}}\rangle_{\alpha}\times\{\alpha\}.$$ (3) So (i) is true. Let $\alpha\in[0,1]$ with $\alpha>\alpha_{k_{1}}$. Observe that if a sequence $\{(x_{m},\beta_{m})\}$ converges to a point $(x,\alpha)$ as $m\to\infty$ in $(X\times[0,1],\overline{d})$, then there is an $M$ such that for all $m>M$, $\beta_{m}>\alpha_{k_{1}}$, i.e. $(x_{m},\beta_{m})\in X\times(\alpha_{k_{1}},1]$. Thus by Theorem 2.3, for each $\alpha\in(\alpha_{k_{1}},1]$, $$\displaystyle\langle v^{k_{1}}\rangle_{\alpha}\times\{\alpha\}$$ $$\displaystyle=\limsup_{n\to\infty}{\rm end}_{\alpha_{k_{1}}}\,u_{n}^{(k_{1})}\cap(X\times\{\alpha\})$$ $$\displaystyle\supseteq\limsup_{n\to\infty}{\rm end}_{\alpha_{k_{2}}}\,u_{n}^{(k_{2})}\cap(X\times\{\alpha\})$$ $$\displaystyle=\langle v^{k_{2}}\rangle_{\alpha}\times\{\alpha\}.$$ (4) Hence by (3) and (4), $\langle v^{k_{1}}\rangle_{\alpha}=\langle v^{k_{2}}\rangle_{\alpha}$ for $\alpha\in(\alpha_{k_{1}},1]$. So (ii) is true. (iii) follows immediately from (i) and (ii).
Define a subset $v$ of $X\times[0,1]$ by $$v=\bigcup_{k=1}^{+\infty}v^{k}\cup(X\times\{0\}).$$ (5) From claim (b), we can see that $$\langle v\rangle_{\alpha}=\left\{\begin{array}[]{ll}\langle v^{k}\rangle_{\alpha},&\mbox{ if }\alpha>\alpha_{k}\mbox{ for some }k\in\mathbb{N},\\ X,&\mbox{ if }\alpha=0,\end{array}\right.$$ (6) and hence $$\displaystyle v\cap(X\times(\alpha_{k},1])=v^{k}\cap(X\times(\alpha_{k},1])\subseteq v^{k}.$$ (7) We show that $v\in C(X\times[0,1])$. To this end, let $\{(x_{l},\gamma_{l})\}$ be a sequence in $v$ which converges to an element $(x,\gamma)$ in $X\times[0,1]$. If $\gamma=0$, then clearly $(x,\gamma)\in v$. If $\gamma>0$, then there is a $k_{0}\in\mathbb{N}$ such that $\gamma>\alpha_{k_{0}}$. Hence there is an $L$ such that $\gamma_{l}>\alpha_{k_{0}}$ when $l\geq L$. So by (7), $(x_{l},\gamma_{l})\in v^{k_{0}}$ when $l\geq L$. Since $v^{k_{0}}\in K(X\times[\alpha_{k_{0}},1])$, it follows that $(x,\gamma)\in v^{k_{0}}\subseteq v$. We claim that (c) $\lim_{n\to\infty}H({\rm end}\,u_{n}^{(n)},v)=0$ and $v\in F_{USCG}(X)^{e}$. Let $n\in\mathbb{N}$ and $k\in\mathbb{N}$.
Then by (5), $$\displaystyle H^{*}({\rm end}\,u_{n}^{(n)},v)$$ $$\displaystyle=\max\{H^{*}({\rm end}_{\alpha_{k}}\,u_{n}^{(n)},v),\ H^{*}({\rm end}_{0}^{\alpha_{k}}\,u_{n}^{(n)},v)\}$$ $$\displaystyle\leq\max\{H^{*}({\rm end}_{\alpha_{k}}\,u_{n}^{(n)},\,v^{k}),\ \alpha_{k}\},$$ (8) and by (7), $$\displaystyle H^{*}(v,{\rm end}\,u_{n}^{(n)})$$ $$\displaystyle=\max\{\sup_{(x,\gamma)\in v\cap(X\times(\alpha_{k},1])}\overline{d}((x,\gamma),\,{\rm end}\,u_{n}^{(n)}),\ H^{*}(v\cap(X\times[0,\alpha_{k}]),\,{\rm end}\,u_{n}^{(n)})\}$$ $$\displaystyle\leq\max\{H^{*}(v^{k},\,{\rm end}_{\alpha_{k}}\,u_{n}^{(n)}),\ \alpha_{k}\}.$$ (9) Clearly, (8) and (9) imply that $$\displaystyle H({\rm end}\,u_{n}^{(n)},\,v)$$ $$\displaystyle\leq\max\{H({\rm end}_{\alpha_{k}}\,u_{n}^{(n)},\,v^{k}),\ \alpha_{k}\}.$$ (10) Now we show that $$\lim_{n\to\infty}H({\rm end}\,u_{n}^{(n)},v)=0.$$ (11) To see this, let $\varepsilon>0$. Notice that $\alpha_{k}\to 0$ and for each $\alpha_{k}$, $k=1,2,\ldots$, $\lim_{n\to\infty}H({\rm end}_{\alpha_{k}}\,u_{n}^{(n)},v^{k})=0$. Then there is an $\alpha_{k_{0}}$ and an $N\in\mathbb{N}$ such that $\alpha_{k_{0}}<\varepsilon$ and $H({\rm end}_{\alpha_{k_{0}}}\,u_{n}^{(n)},v^{k_{0}})<\varepsilon$ for all $n\geq N$. Thus by (10), $H({\rm end}\,u_{n}^{(n)},v)<\varepsilon$ for all $n\geq N$. So (11) is true. Since the sequence $\{{\rm end}\,u_{n}^{(n)}\}$ is in $F_{USC}(X)^{e}$ and $\{{\rm end}\,u_{n}^{(n)}\}$ converges to $v$ in $(C(X\times[0,1]),H)$, by Proposition 4.7, it follows that $v\in F_{USC}(X)^{e}$. Let $k\in\mathbb{N}$. Then $v^{k}\in K(X\times[\alpha_{k},1])$, and hence $\langle v^{k}\rangle_{\alpha}\in K(X)\cup\{\emptyset\}$ for all $\alpha\in[0,1]$. So from (6), $\langle v\rangle_{\alpha}\in K(X)\cup\{\emptyset\}$ for all $\alpha\in(0,1]$, and thus $v\in F_{USCG}(X)^{e}$. From claim (c), we have that $\{u_{n}^{(n)}\}$ is a convergent sequence in $(F_{USCG}(X),H_{\rm end})$. Note that $\{u_{n}^{(n)}\}$ is a subsequence of $\{u_{n}\}$.
Thus the proof is completed. ∎ Theorem 4.12. Let $U$ be a subset of $F_{USCG}(X)$. Then $U$ is totally bounded in $(F_{USCG}(X),H_{\rm end})$ if and only if $U(\alpha)$ is totally bounded in $(X,d)$ for each $\alpha\in(0,1]$. Proof. Necessity. The necessity part is Lemma 4.9. Sufficiency. Suppose that $U(\alpha)$ is totally bounded in $(X,d)$ for each $\alpha\in(0,1]$. Then $U(\alpha)$ is relatively compact in $(\widetilde{X},\widetilde{d})$ for each $\alpha\in(0,1]$. Thus by Theorem 4.11, $U$ is relatively compact in $(F_{USCG}(\widetilde{X}),H_{\rm end})$. Hence $U$ is totally bounded in $(F_{USCG}(\widetilde{X}),H_{\rm end})$. So clearly $U$ is totally bounded in $(F_{USCG}(X),H_{\rm end})$. ∎ Theorem 4.13. Let $U$ be a subset of $F_{USCG}(X)$. Then the following are equivalent: (i) $U$ is compact in $(F_{USCG}(X),H_{\rm end})$; (ii) $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,1]$ and $U$ is closed in $(F_{USCG}(X),H_{\rm end})$; (iii) $U(\alpha)$ is compact in $(X,d)$ for each $\alpha\in(0,1]$ and $U$ is closed in $(F_{USCG}(X),H_{\rm end})$. Proof. By Theorem 4.11, (i) $\Leftrightarrow$ (ii). Obviously (iii) $\Rightarrow$ (ii). We shall complete the proof by showing that (ii) $\Rightarrow$ (iii). Suppose that (ii) is true. To verify (iii), it suffices to show that $U(\alpha)$ is closed in $(X,d)$ for each $\alpha\in(0,1]$. To do this, let $\alpha\in(0,1]$ and let $\{x_{n}\}$ be a sequence in $U(\alpha)$ converging to an element $x$ in $(X,d)$. We only need to show that $x\in U(\alpha)$. Pick a sequence $\{u_{n}\}$ in $U$ such that $x_{n}\in[u_{n}]_{\alpha}$ for $n=1,2,\ldots$, which means that $(x_{n},\alpha)\in{\rm end}\,u_{n}$ for $n=1,2,\ldots$. From the equivalence of (i) and (ii), $U$ is compact in $(F_{USCG}(X),H_{\rm end})$. So there exists a subsequence $\{u_{n_{k}}\}$ of $\{u_{n}\}$ and $u\in U$ such that $H_{\rm end}(u_{n_{k}},u)\to 0$. Hence by Remark 2.4, $\lim_{k\to\infty}^{(\Gamma)}u_{n_{k}}=u$.
Note that $(x,\alpha)=\lim_{k\to\infty}(x_{n_{k}},\alpha)$. Thus $$(x,\alpha)\in\liminf_{k\to\infty}{\rm end}\,u_{n_{k}}={\rm end}\,u.$$ So $x\in[u]_{\alpha}$, and therefore $x\in U(\alpha)$. Theorem 7.11 in huang719 can be verified in a manner similar to this theorem. ∎ In huang , we gave the following characterizations of compactness in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$. Theorem 4.14. (Theorem 7.1 in huang ) Let $U$ be a subset of $F_{USCG}(\mathbb{R}^{m})$. Then $U$ is a relatively compact set in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$ if and only if $U(\alpha)$ is a bounded set in $\mathbb{R}^{m}$ when $\alpha\in(0,1]$. Theorem 4.15. (Theorem 7.3 in huang ) Let $U$ be a subset of $F_{USCG}(\mathbb{R}^{m})$. Then $U$ is a totally bounded set in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$ if and only if, for each $\alpha\in(0,1]$, $U(\alpha)$ is a bounded set in $\mathbb{R}^{m}$. Theorem 4.16. (Theorem 7.2 in huang ) $U$ is a compact set in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$ if and only if $U$ is a closed set in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$ and $U(\alpha)$ is a bounded set in $\mathbb{R}^{m}$ when $\alpha\in(0,1]$. Let $S$ be a set in $\mathbb{R}^{m}$. Then the following properties are equivalent. (i)  $S$ is a bounded set in $\mathbb{R}^{m}$. (ii)  $S$ is a totally bounded set in $\mathbb{R}^{m}$. (iii)  $S$ is a relatively compact set in $\mathbb{R}^{m}$. Using the above well-known fact, we can see that Theorem 4.11 implies Theorem 4.14; Theorem 4.12 implies Theorem 4.15; and Theorem 4.13 implies Theorem 4.16. So the characterizations of relative compactness, total boundedness, and compactness in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$ given in our previous work huang are corollaries of the characterizations of relative compactness, total boundedness, and compactness of $(F_{USCG}(X),H_{\rm end})$ given in this section, respectively.
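For finite subsets of $\mathbb{R}$, the Hausdorff metric $H$ appearing in these theorems reduces to a direct computation. The following is a minimal sketch; the sets $A$ and $B$ are our own toy data:

```python
# Hausdorff distance between two finite subsets of the real line -- a
# minimal sketch of the metric H on K(X) used throughout this section.

def h_star(A, B):
    """Directed Hausdorff distance H*(A, B) = sup_{a in A} d(a, B)."""
    return max(min(abs(a - b) for b in B) for a in A)

def hausdorff(A, B):
    """H(A, B) = max{H*(A, B), H*(B, A)}."""
    return max(h_star(A, B), h_star(B, A))

A = [0.0, 1.0]
B = [0.0, 1.0, 3.0]
assert h_star(A, B) == 0.0      # A sits inside B
assert hausdorff(A, B) == 2.0   # the point 3.0 is at distance 2 from A
```

The asymmetry of $H^{*}$ is visible here: $H^{*}(A,B)=0$ while $H^{*}(B,A)=2$, and $H$ symmetrizes by taking the maximum.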
Furthermore, the characterizations of relative compactness, total boundedness, and compactness of $(F_{USCG}(X),H_{\rm end})$ given in this section illustrate the relationship between relative compactness, total boundedness, and compactness of a set in $F_{USCG}(X)$ and that of the union of its elements’ $\alpha$-cuts. From the above discussion, we can see that the characterizations of relative compactness, total boundedness, and compactness of $(F_{USCG}(X),H_{\rm end})$ given in this section significantly improve the characterizations of relative compactness, total boundedness, and compactness in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$ given in our previous work huang . Remark 4.17. The following clauses (i) and (ii) are pointed out in Remark 5.1 of chinaXiv:202107.00011v2 (we submitted it on 2021-07-22). (i) $(F^{1}_{USCG}(X),H_{\rm end})$ can be treated as a subspace of $(C(X\times[0,1]),H)$ by seeing each $u\in F^{1}_{USCG}(X)$ as its endograph. (ii) We can discuss the properties of $(F^{1}_{USCG}(X),H_{\rm end})$ by treating $(F^{1}_{USCG}(X),H_{\rm end})$ as a subspace of $(C(X\times[0,1]),H)$. These properties include characterizations of total boundedness, relative compactness and compactness of $(F^{1}_{USCG}(X),H_{\rm end})$. In this paper, we treat $(F_{USCG}(X),H_{\rm end})$ as a subspace of $(C(X\times[0,1]),H)$ to discuss the properties of $(F_{USCG}(X),H_{\rm end})$. At the end of this section, we illustrate that Theorems 4.2, 4.1 and 4.3 can be seen as special cases of Theorems 4.11, 4.12, and 4.13, respectively. We begin with some propositions. The following proposition follows immediately from the basic definitions. Proposition 4.18. Let $A$ be a subset of $X$. (i) The conditions (i-1) $A$ is a set in $C(X)$, and (i-2) $\chi_{A}$ is a fuzzy set in $F_{USC}(X)$, are equivalent. (ii) The conditions (ii-1) $A$ is a set in $K(X)$, (ii-2) $\chi_{A}$ is a fuzzy set in $F_{USCB}(X)$, and (ii-3) $\chi_{A}$ is a fuzzy set in $F_{USCG}(X)$, are equivalent.
Let $\mathcal{D}\subseteq P(X)$. We use the symbol $\mathcal{D}_{F(X)}$ to denote the set $\{C_{F(X)}:C\in\mathcal{D}\}$. Let $A,B\in C(X)$. Then $$H_{\rm end}(\chi_{A},\chi_{B})=\min\{H(A,B),\ 1\}.$$ (12) Proposition 4.19. Let $\mathcal{D}$ be a subset of $K(X)$. (i) $\mathcal{D}$ is totally bounded in $(K(X),H)$ if and only if $\mathcal{D}_{F(X)}$ is totally bounded in $(F_{USCG}(X),H_{\rm end})$; (ii) $\mathcal{D}$ is compact in $(K(X),H)$ if and only if $\mathcal{D}_{F(X)}$ is compact in $(F_{USCG}(X),H_{\rm end}).$ Proof. From (12), it follows immediately that (i) is true. By (12), we have that $\mathcal{D}$ is compact in $(K(X),H)$ if and only if $\mathcal{D}_{F(X)}$ is compact in $(K(X)_{F(X)},H_{\rm end}).$ Clearly $\mathcal{D}_{F(X)}$ is compact in $(K(X)_{F(X)},H_{\rm end})$ if and only if $\mathcal{D}_{F(X)}$ is compact in $(F_{USCG}(X),H_{\rm end}).$ So (ii) is true. ∎ Proposition 4.20. Let $\{A_{n}\}$ be a sequence of sets in $C(X)$. If $\{\chi_{A_{n}}\}$ converges to a fuzzy set $u$ in $F_{USC}(X)$ according to the $H_{\rm end}$ metric, then there is an $A\in C(X)$ such that $u=\chi_{A}$ and $H(A_{n},A)\to 0$ as $n\to\infty$. Proof. We will show, in turn, the following properties (i), (ii), and (iii). (i) Let $x\in X$ and $\alpha,\beta\in(0,1]$. Then $(x,\alpha)\in{\rm end}\,u$ if and only if $(x,\beta)\in{\rm end}\,u$. (ii) $[u]_{\alpha}=[u]_{\beta}$ for all $\alpha,\beta\in[0,1]$. (iii) There is an $A$ in $C(X)$ such that $u=\chi_{A}$ and $H(A_{n},A)\to 0$ as $n\to\infty$. To show (i), we only need to show that if $(x,\alpha)\in{\rm end}\,u$ then $(x,\beta)\in{\rm end}\,u$ since $\alpha$ and $\beta$ can be interchanged. Assume that $(x,\alpha)\in{\rm end}\,u$. Since $H_{\rm end}(\chi_{A_{n}},u)\to 0$, by Theorem 2.3 and Remark 2.4, $\lim_{n\to\infty}^{(\Gamma)}\chi_{A_{n}}=u$.
Then there is a sequence $\{(x_{n},\alpha_{n})\}$ such that $(x_{n},\alpha_{n})\in{\rm end}\,\chi_{A_{n}}$ for $n=1,2,\ldots$, and $\lim_{n\to\infty}\overline{d}((x_{n},\alpha_{n}),(x,\alpha))=0$. As $\alpha>0$, it follows that there exists an $N$ such that $\alpha_{n}>0$ for all $n\geq N$. This yields that $(x_{n},\alpha_{n})\in{\rm send}\,\chi_{A_{n}}=A_{n}\times[0,1]$ for all $n\geq N$. Hence $(x_{n},\beta)\in{\rm send}\,\chi_{A_{n}}$ for all $n\geq N$. Observe that $\lim_{n\to\infty}\overline{d}((x_{n},\beta),(x,\beta))=0$, i.e. $\{(x_{n},\beta):n\geq N\}$ converges to $(x,\beta)$ in $(X\times[0,1],\overline{d})$. Thus we have $(x,\beta)\in\liminf_{n\to\infty}{\rm end}\,\chi_{A_{n}}={\rm end}\,u$. So (i) is true. From (i), we have that $[u]_{\alpha}=[u]_{\beta}$ for all $\alpha,\beta\in(0,1]$. Then $[u]_{0}=\overline{\cup_{\alpha>0}[u]_{\alpha}}=[u]_{1}$. So (ii) is true. Set $A=[u]_{1}$. By Proposition 5.1, $u\in F^{{}^{\prime}1}_{USC}(X)$. From this and (ii), it follows that $A\in C(X)$ and $u=\chi_{A}$. Since by (12), $$H_{\rm end}(\chi_{A_{n}},u)=H_{\rm end}(\chi_{A_{n}},\chi_{A})=\min\{H({A_{n}},A),\ 1\}\to 0\mbox{ as }n\to\infty,$$ we obtain that $H({A_{n}},A)\to 0$ as $n\to\infty$. So (iii) is true. This completes the proof. ∎ Proposition 4.21. Let $\{x_{n}\}$ be a sequence in $X$. If $\{\widehat{x_{n}}\}$ converges to a fuzzy set $u$ in $F_{USC}(X)$ according to the $H_{\rm end}$ metric, then there is an $x\in X$ such that $u=\widehat{x}$ and $d(x_{n},x)\to 0$ as $n\to\infty$. Proof. Note that $\widehat{z}=\chi_{\{z\}}$ for each $z\in X$. Thus by Proposition 4.20, there is an $A\in C(X)$ such that $u=\chi_{A}$ and $H(\{x_{n}\},A)\to 0$ as $n\to\infty$. Since $\lim^{(K)}_{n\to\infty}\{x_{n}\}=A$, it follows that $A$ is a singleton. Set $A=\{x\}$. Then $u=\widehat{x}$ and $d(x_{n},x)=H(\{x_{n}\},\{x\})\to 0$ as $n\to\infty$. This completes the proof. ∎ Proposition 4.21 is Proposition 8.15 in our paper arXiv:submit/4644498. 
We can also use the idea in the proof of Proposition 4.20 to show Proposition 4.21 directly. Conversely, using the idea in the proof of Proposition 8.15 in arXiv:submit/4644498, we can show directly that the set $A$ in the proof of Proposition 4.21 is a singleton, as follows. Assume that $A$ has at least two distinct elements. Pick $p,q$ in $A$ with $p\not=q$. Let $z\in X$. Since $d(p,z)+d(q,z)\geq d(p,q)$, it follows that $\max\{d(p,z),d(q,z)\}\geq\frac{1}{2}d(p,q)$. Thus $H(A,\{x_{n}\})=H^{*}(A,\{x_{n}\})\geq\frac{1}{2}d(p,q)$, which contradicts $H(A,\{x_{n}\})\to 0$ as $n\to\infty$. Proposition 4.22. Let $\mathcal{D}$ be a subset of $K(X)$ and $\mathcal{B}$ a subset of $C(X)$. (i) $C(X)_{F(X)}$ is closed in $(F_{USC}(X),H_{\rm end}).$ (ii) $K(X)_{F(X)}$ is closed in $(F_{USCG}(X),H_{\rm end}).$ (iii) $\mathcal{D}$ is closed in $(K(X),H)$ if and only if $\mathcal{D}_{F(X)}$ is closed in $(F_{USCG}(X),H_{\rm end}).$ (iv) $\mathcal{D}$ is relatively compact in $(K(X),H)$ if and only if $\mathcal{D}_{F(X)}$ is relatively compact in $(F_{USCG}(X),H_{\rm end}).$ (v) $\mathcal{B}$ is closed in $(C(X),H)$ if and only if $\mathcal{B}_{F(X)}$ is closed in $(F_{USC}(X),H_{\rm end}).$ (vi) $\mathcal{B}$ is relatively compact in $(C(X),H)$ if and only if $\mathcal{B}_{F(X)}$ is relatively compact in $(F_{USC}(X),H_{\rm end}).$ Proof. From Proposition 4.20 we have that (i) is true. By (i), the closure of $K(X)_{F(X)}$ in $(F_{USCG}(X),H_{\rm end})$ is contained in $F_{USCG}(X)\cap C(X)_{F(X)}$. From Proposition 4.18 (ii), $F_{USCG}(X)\cap C(X)_{F(X)}=K(X)_{F(X)}$. Thus the closure of $K(X)_{F(X)}$ in $(F_{USCG}(X),H_{\rm end})$ is $K(X)_{F(X)}$. So (ii) is true. Suppose the following conditions: (a-1) $\mathcal{D}$ is closed in $(K(X),H)$, (a-2) $\mathcal{D}_{F(X)}$ is closed in $(K(X)_{F(X)},H_{\rm end})$, and (a-3) $\mathcal{D}_{F(X)}$ is closed in $(F_{USCG}(X),H_{\rm end}).$ By (12), (a-1)$\Leftrightarrow$(a-2). From (ii), (a-2)$\Leftrightarrow$(a-3).
Thus (a-1)$\Leftrightarrow$(a-3). So (iii) is true. Suppose the following conditions: (b-1) $\mathcal{D}$ is relatively compact in $(K(X),H)$, (b-2) $\mathcal{D}_{F(X)}$ is relatively compact in $(K(X)_{F(X)},H_{\rm end})$, and (b-3) $\mathcal{D}_{F(X)}$ is relatively compact in $(F_{USCG}(X),H_{\rm end}).$ By (12), (b-1)$\Leftrightarrow$(b-2). From (ii), (b-2)$\Leftrightarrow$(b-3). Thus (b-1)$\Leftrightarrow$(b-3). So (iv) is true. Using (12) and (i), (v) and (vi) can be proved in a similar manner to (iii) and (iv), respectively. ∎ Each subset $\mathcal{D}$ of $(K(X),H)$ corresponds to a subset $\mathcal{D}_{F(X)}$ of $(F_{USCG}(X),H_{\rm end}).$ Using Theorems 4.11, 4.12, and 4.13, we obtain the characterizations of relative compactness, total boundedness, and compactness for $\mathcal{D}_{F(X)}$ in $(F_{USCG}(X),H_{\rm end})$ as follows. Corollary 4.23. Let $\mathcal{D}$ be a subset of $K(X)$. Then $\mathcal{D}_{F(X)}$ is relatively compact in $(F_{USCG}(X),H_{\rm end})$ if and only if $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is relatively compact in $(X,d)$. Corollary 4.24. Let $\mathcal{D}$ be a subset of $K(X)$. Then $\mathcal{D}_{F(X)}$ is totally bounded in $(F_{USCG}(X),H_{\rm end})$ if and only if $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is totally bounded in $(X,d)$. Corollary 4.25. Let $\mathcal{D}$ be a subset of $K(X)$. Then the following are equivalent: (i) $\mathcal{D}_{F(X)}$ is compact in $(F_{USCG}(X),H_{\rm end})$; (ii) $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is relatively compact in $(X,d)$ and $\mathcal{D}_{F(X)}$ is closed in $(F_{USCG}(X),H_{\rm end})$; (iii) $\mathbf{D}=\bigcup\{C:C\in\mathcal{D}\}$ is compact in $(X,d)$ and $\mathcal{D}_{F(X)}$ is closed in $(F_{USCG}(X),H_{\rm end})$. From Proposition 4.19 and clauses (iii) and (iv) of Proposition 4.22, we obtain that Corollaries 4.23, 4.24 and 4.25 are equivalent forms of Theorems 4.2, 4.1 and 4.3, respectively.
So we can see Theorems 4.2, 4.1 and 4.3 as special cases of Theorems 4.11, 4.12, and 4.13, respectively. 5 Characterizations of compactness in $(F^{r}_{USCG}(X),H_{\rm end})$ In this section, we first investigate the properties of the $H_{\rm end}$ metric. Then based on the characterizations of relative compactness, total boundedness and compactness in $(F_{USCG}(X),H_{\rm end})$ given in Section 4, we give characterizations of relatively compact sets, totally bounded sets, and compact sets in $(F^{r}_{USCG}(X),H_{\rm end})$, $r\in[0,1]$. The spaces $(F^{r}_{USCG}(X),H_{\rm end})$, $r\in[0,1]$, are metric subspaces of $(F_{USCG}(X),H_{\rm end})$. Each element in $F^{r}_{USCG}(X)$ takes $r$ as its maximum value. $(F^{1}_{USCG}(X),H_{\rm end})$ is one of these subspaces. For $D\subseteq X\times[0,1]$, define $\bm{S_{D}}:=\sup\{\alpha:(x,\alpha)\in D\}$. We claim that for $D,E\in C(X\times[0,1])$, $$H(D,E)\geq|S_{D}-S_{E}|.$$ (13) To see this, let $D,E\in C(X\times[0,1])$. If $|S_{D}-S_{E}|=0$, then (13) is true. If $|S_{D}-S_{E}|>0$, assume without loss of generality that $S_{D}>S_{E}$. Note that for each $(x,t)\in D$ with $t>S_{E}$, $\overline{d}((x,t),E)\geq t-S_{E}$. Thus $H(D,E)\geq\sup\{t-S_{E}:(x,t)\in D\mbox{ with }t>S_{E}\}=S_{D}-S_{E}$. So (13) is true. Let $u\in F(X)$. Define $\bm{S_{u}}:=\sup\{u(x):x\in X\}$. We can see that $S_{u}=S_{{\rm end}\,u}$. Clearly $[u]_{S_{u}}=\emptyset$ is possible. From (13), we have that for $u,v\in F_{USC}(X)$, $$H_{\rm end}(u,v)\geq|S_{u}-S_{v}|.$$ (14) Proposition 5.1. Let $u$ and $u_{n}$, $n=1,2,\ldots$, be fuzzy sets in $F_{USC}(X)$. If $H_{\rm end}(u_{n},u)\to 0$ as $n\to\infty$, then $S_{u_{n}}\to S_{u}$ as $n\to\infty$. Proof. The desired result follows immediately from (14). ∎ Let $u\in F(X)$. $\max\{u(x):x\in X\}$ may not exist. If $\max\{u(x):x\in X\}$ exists, then obviously $S_{u}=\max\{u(x):x\in X\}$. If $[u]_{S_{u}}\not=\emptyset$, then, as $S_{u}=\sup\{u(x):x\in X\}$, it follows that $S_{u}=\max\{u(x):x\in X\}$. Proposition 5.2.
(i) Let $u\in F_{USC}(X)$. If there is an $\alpha\in[0,S_{u}]$ with $[u]_{\alpha}\in K(X)$, then $[u]_{S_{u}}\not=\emptyset$ and $S_{u}=\max\{u(x):x\in X\}$. (ii) Let $u\in F_{USCG}(X)$. Then $S_{u}=\max\{u(x):x\in X\}$. Proof. First, we show (i). If $\alpha=S_{u}$, then $[u]_{S_{u}}\not=\emptyset$. If $\alpha<S_{u}$, then pick a sequence $\{x_{n}\}$ in $[u]_{\alpha}$ with $u(x_{n})\to S_{u}$. From the compactness of $[u]_{\alpha}$, there is a subsequence $\{x_{n_{k}}\}$ of $\{x_{n}\}$ such that $\{x_{n_{k}}\}$ converges to a point $x$ in $[u]_{\alpha}$. Thus $u(x)\geq\lim_{k\to\infty}u(x_{n_{k}})=S_{u}$. Hence $u(x)=S_{u}$, and therefore $[u]_{S_{u}}\not=\emptyset$ and $S_{u}=\max\{u(x):x\in X\}$. So (i) is true. In the $\alpha<S_{u}$ case, we can also prove $[u]_{S_{u}}\not=\emptyset$ as follows. Take an increasing sequence $\{\alpha_{k}\}$ in $[\alpha,1]$ with $\alpha_{k}\to S_{u}-$. Then $[u]_{\alpha_{k}}\in K(X)$ for each $k=1,2,\ldots$, and thus $[u]_{S_{u}}=\cap_{k=1}^{+\infty}[u]_{\alpha_{k}}\in K(X)$. So $[u]_{S_{u}}\not=\emptyset$. Now we show (ii). If $u\in F_{USCG}(X)\setminus\{\emptyset_{F(X)}\}$, then there exists an $\alpha\in(0,S_{u}]$ such that $[u]_{\alpha}\in K(X)$. So from (i), $S_{u}=\max\{u(x):x\in X\}$. If $u=\emptyset_{F(X)}$, then $S_{u}=0=\max\{u(x):x\in X\}$. So for each $u\in F_{USCG}(X)$, $S_{u}=\max\{u(x):x\in X\}$. ∎ Let $r\in[0,1]$. Define $$\displaystyle F^{{}^{\prime}r}_{USC}(X)=\{u\in F_{USC}(X):S_{u}=r\},$$ $$\displaystyle F^{r}_{USC}(X)=\{u\in F_{USC}(X):r=\max\{u(x):x\in X\}\},$$ $$\displaystyle F^{{}^{\prime}r}_{USCG}(X)=\{u\in F_{USCG}(X):S_{u}=r\},$$ $$\displaystyle F^{r}_{USCG}(X)=\{u\in F_{USCG}(X):r=\max\{u(x):x\in X\}\},$$ $$\displaystyle F^{{}^{\prime}r}_{USCB}(X)=\{u\in F_{USCB}(X):S_{u}=r\},$$ $$\displaystyle F^{r}_{USCB}(X)=\{u\in F_{USCB}(X):r=\max\{u(x):x\in X\}\}.$$ Let $r\in[0,1]$. We can see that $F^{r}_{USC}(X)\subseteq F^{{}^{\prime}r}_{USC}(X)$. 
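The inclusion $F^{r}_{USC}(X)\subseteq F^{{}^{\prime}r}_{USC}(X)$ can be strict when the supremum is not attained. A hedged numerical sketch (our own toy example, not from the paper):

```python
import math

# Toy example (ours, not the paper's): the fuzzy set
# u(x) = 1 - exp(-x^2) on X = R is continuous, hence upper
# semicontinuous, with S_u = sup u = 1, but the supremum is never
# attained.  So u belongs to F'^1_USC(R) but not to F^1_USC(R),
# showing the inclusion F^r_USC(X) in F'^r_USC(X) can be strict.
def u(x):
    return 1.0 - math.exp(-x * x)

# On the grid [-2, 2] the maximum stays strictly below S_u = 1 ...
grid = [i * 0.01 for i in range(-200, 201)]
best = max(u(x) for x in grid)
print(best < 1.0)           # True: best = 1 - exp(-4)

# ... while values approach 1 arbitrarily closely for large |x|.
print(u(5.0) > 1.0 - 1e-9)  # True: u(5) = 1 - exp(-25)
```

This is consistent with Proposition 5.2: on $X=\mathbb{R}$ every $\alpha$-cut $[u]_{\alpha}$, $\alpha\in(0,1)$, of this $u$ is unbounded, so $u\notin F_{USCG}(\mathbb{R})$ and the maximum need not be attained.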
Clearly, $F^{{}^{\prime}0}_{USC}(X)=F^{0}_{USC}(X)=F^{0}_{USCG}(X)=F^{0}_{USCB}(X)=\{\emptyset_{F(X)}\}$. Proposition 5.3. Let $r\in[0,1]$. Then (i)  $F^{r}_{USCG}(X)=F^{{}^{\prime}r}_{USCG}(X)$, $F^{r}_{USCB}(X)=F^{{}^{\prime}r}_{USCB}(X)$, (ii)  $F^{{}^{\prime}r}_{USC}(X)$ is a closed subset of $(F_{USC}(X),H_{\rm end})$, (iii)  $F^{r}_{USCG}(X)$ is a closed subset of $(F_{USCG}(X),H_{\rm end})$, and (iv)  $F^{r}_{USCB}(X)$ is a closed subset of $(F_{USCB}(X),H_{\rm end})$. Proof. From Proposition 5.2 (ii) and the fact that $F_{USCB}(X)\subseteq F_{USCG}(X)$, we have that $F^{r}_{USCG}(X)=F^{{}^{\prime}r}_{USCG}(X)$, $F^{r}_{USCB}(X)=F^{{}^{\prime}r}_{USCB}(X)$. So (i) is true. By Proposition 5.1, (ii) is true. From Proposition 5.1, $F^{{}^{\prime}r}_{USCG}(X)$ is a closed subset of $(F_{USCG}(X),H_{\rm end})$, and $F^{{}^{\prime}r}_{USCB}(X)$ is a closed subset of $(F_{USCB}(X),H_{\rm end})$. Then by (i), (iii) and (iv) are true. ∎ Lemma 5.4. Let $r\in[0,1]$ and let $U$ be a subset of $F^{r}_{USCG}(X)$. Then the following (i-1) is equivalent to (i-2), and (ii-1) is equivalent to (ii-2). (i-1)  $U$ is relatively compact in $(F_{USCG}(X),H_{\rm end})$. (i-2)  $U$ is relatively compact in $(F^{r}_{USCG}(X),H_{\rm end})$. (ii-1)  $U$ is closed in $(F_{USCG}(X),H_{\rm end})$. (ii-2)  $U$ is closed in $(F^{r}_{USCG}(X),H_{\rm end})$. Proof. Clause (iii) of Proposition 5.3 says that $F^{r}_{USCG}(X)$ is a closed subset of $(F_{USCG}(X),H_{\rm end})$. From this we obtain that (i-1)$\Leftrightarrow$(i-2), and (ii-1)$\Leftrightarrow$(ii-2). ∎ In this paper, we suppose that $(r,r]=\emptyset$ for $r\in\mathbb{R}$. Lemma 5.5. Let $r\in[0,1]$ and let $U$ be a subset of $F^{r}_{USCG}(X)$. Then the following (i-1) is equivalent to (i-2), (ii-1) is equivalent to (ii-2), and (iii-1) is equivalent to (iii-2). (i-1)  $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,1]$. (i-2)  $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,r]$. 
(ii-1) $U(\alpha)$ is totally bounded in $(X,d)$ for each $\alpha\in(0,1]$. (ii-2) $U(\alpha)$ is totally bounded in $(X,d)$ for each $\alpha\in(0,r]$. (iii-1)  $U(\alpha)$ is compact in $(X,d)$ for each $\alpha\in(0,1]$. (iii-2)  $U(\alpha)$ is compact in $(X,d)$ for each $\alpha\in(0,r]$. Proof. Observe that if $\alpha\in(r,1]$ then $U(\alpha)=\emptyset$. From this we obtain that (i-1)$\Leftrightarrow$(i-2), (ii-1)$\Leftrightarrow$(ii-2), and (iii-1)$\Leftrightarrow$(iii-2). ∎ Corollary 5.6. Let $r\in[0,1]$ and let $U$ be a subset of $F^{r}_{USCG}(X)$. Then the following properties are equivalent. (i)  $U$ is relatively compact in $(F_{USCG}(X),H_{\rm end})$. (ii)  $U$ is relatively compact in $(F^{r}_{USCG}(X),H_{\rm end})$. (iii)  $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,1]$. (iv)  $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,r]$. Proof. By Lemma 5.4, (i)$\Leftrightarrow$(ii). From this and Theorem 4.11, we obtain that (ii)$\Leftrightarrow$(iii). By Lemma 5.5, (iii)$\Leftrightarrow$(iv), and the proof is complete. ∎ Corollary 5.7. Let $r\in[0,1]$ and let $U$ be a subset of $F^{r}_{USCG}(X)$. Then the following properties are equivalent. (i) $U$ is totally bounded in $(F_{USCG}(X),H_{\rm end})$. (ii) $U$ is totally bounded in $(F^{r}_{USCG}(X),H_{\rm end})$. (iii) $U(\alpha)$ is totally bounded in $(X,d)$ for each $\alpha\in(0,1]$. (iv) $U(\alpha)$ is totally bounded in $(X,d)$ for each $\alpha\in(0,r]$. Proof. Clearly (i)$\Leftrightarrow$(ii). From this and Theorem 4.12, we obtain that (ii)$\Leftrightarrow$(iii). By Lemma 5.5, (iii)$\Leftrightarrow$(iv), and the proof is complete. ∎ Corollary 5.8. Let $r\in[0,1]$ and let $U$ be a subset of $F^{r}_{USCG}(X)$. Then the following properties are equivalent. (i) $U$ is compact in $(F_{USCG}(X),H_{\rm end})$ (ii) $U$ is compact in $(F^{r}_{USCG}(X),H_{\rm end})$. 
(iii) $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,1]$ and $U$ is closed in $(F_{USCG}(X),H_{\rm end})$; (iv) $U(\alpha)$ is compact in $(X,d)$ for each $\alpha\in(0,1]$ and $U$ is closed in $(F_{USCG}(X),H_{\rm end})$. (v) $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,r]$ and $U$ is closed in $(F^{r}_{USCG}(X),H_{\rm end})$. (vi) $U(\alpha)$ is compact in $(X,d)$ for each $\alpha\in(0,r]$ and $U$ is closed in $(F^{r}_{USCG}(X),H_{\rm end})$. Proof. Clearly (i)$\Leftrightarrow$(ii). From this and Theorem 4.13, we obtain that (ii)$\Leftrightarrow$(iii)$\Leftrightarrow$(iv). By Lemma 5.4, $U$ is closed in $(F_{USCG}(X),H_{\rm end})$ if and only if $U$ is closed in $(F^{r}_{USCG}(X),H_{\rm end})$. By Lemma 5.5, $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,1]$ if and only if $U(\alpha)$ is relatively compact in $(X,d)$ for each $\alpha\in(0,r]$. So (iii)$\Leftrightarrow$(v). Similarly, from Lemmas 5.4 and 5.5, we have that (iv)$\Leftrightarrow$(vi). So (i)$\Leftrightarrow$(ii)$\Leftrightarrow$(iii)$\Leftrightarrow$(iv)$\Leftrightarrow$(v)$\Leftrightarrow$(vi). ∎ Remark 5.9. From Corollary 5.8, Lemmas 5.4 and 5.5, the following properties are equivalent. (i) $U$ is compact in $(F_{USCG}(X),H_{\rm end})$. (ii) $U$ is compact in $(F^{r}_{USCG}(X),H_{\rm end})$. (iii) At least one of (i-1),(i-2), (iii-1) and (iii-2) in Lemma 5.5 holds, and at least one of (ii-1) and (ii-2) in Lemma 5.4 holds. (iv) All of (i-1),(i-2), (iii-1) and (iii-2) in Lemma 5.5 hold, and all of (ii-1) and (ii-2) in Lemma 5.4 hold. 6 An application on relationship between $H_{\rm end}$ metric and $\Gamma$-convergence As an application of the characterizations of relative compactness, total boundedness and compactness given in Section 4, we discuss the relationship between $H_{\rm end}$ metric and $\Gamma$-convergence on fuzzy sets. Proposition 6.1. Let $S$ be a nonempty subset of $F_{USC}(X)$. 
Let $u$ be a fuzzy set in $S$, and let $\{u_{n}\}$ be a fuzzy set sequence in $S$. Then the following properties are equivalent. (i)  $H_{\rm end}(u_{n},u)\to 0$. (ii)  $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and $\{u_{n},n=1,2,\ldots\}$ is a relatively compact set in $(F_{USC}(X),H_{\rm end})$. (iii)  $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and $\{u_{n},n=1,2,\ldots\}$ is a relatively compact set in $(S,H_{\rm end})$. (iv)  $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is a compact set in $(S,H_{\rm end})$. (v)  $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is a compact set in $(F_{USC}(X),H_{\rm end})$. Proof. To show (i)$\Rightarrow$(v), assume that (i) is true. By Theorem 2.3 and Remark 2.4, $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$. Clearly $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is a compact set in $(F_{USC}(X),H_{\rm end})$. So (v) is true. It can be seen that $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is a compact set in $(F_{USC}(X),H_{\rm end})$ if and only if $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is a compact set in $(S,H_{\rm end})$. So (v)$\Leftrightarrow$(iv). If $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is a compact set in $(S,H_{\rm end})$, then $\{u_{n},n=1,2,\ldots\}$ is relatively compact in $(S,H_{\rm end})$ because $\{u_{n},n=1,2,\ldots\}$ is a subset of $\{u_{n},n=1,2,\ldots\}\cup\{u\}$. So (iv)$\Rightarrow$(iii). Clearly if $\{u_{n},n=1,2,\ldots\}$ is a relatively compact set in $(S,H_{\rm end})$, then $\{u_{n},n=1,2,\ldots\}$ is a relatively compact set in $(F_{USC}(X),H_{\rm end})$. So (iii)$\Rightarrow$(ii). To show (ii)$\Rightarrow$(i), we proceed by contradiction. Assume that (ii) is true but (i) is not; that is, $H_{\rm end}(u_{n},u)\not\to 0$.
Then there is an $\varepsilon>0$ and a subsequence $\{v_{n}^{(1)}\}$ of $\{u_{n}\}$ such that $$H_{\rm end}(v_{n}^{(1)},u)\geq\varepsilon\mbox{ for all }n=1,2,\ldots.$$ (15) Since $\{u_{n},n=1,2,\ldots\}$ is relatively compact in $(F_{USC}(X),H_{\rm end})$, there is a subsequence $\{v_{n}^{(2)}\}$ of $\{v_{n}^{(1)}\}$ and $v\in F_{USC}(X)$ such that $H_{\rm end}(v_{n}^{(2)},v)\to 0$. Hence by Theorem 2.3 and Remark 2.4, $\lim_{n\to\infty}^{(\Gamma)}v_{n}^{(2)}=v$. Since $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, by Remark 2.5, $u=v$. So $H_{\rm end}(v_{n}^{(2)},u)\to 0$, which contradicts (15). Since we have shown (i)$\Rightarrow$(v), (v)$\Leftrightarrow$(iv), (iv)$\Rightarrow$(iii), (iii)$\Rightarrow$(ii) and (ii)$\Rightarrow$(i), the proof is complete. We can also show this theorem as follows. First we show that (i)$\Leftrightarrow$(iii) $\Leftrightarrow$(iv) by verifying that (i)$\Rightarrow$(iv) $\Rightarrow$(iii)$\Rightarrow$(i) (the proof of (i)$\Rightarrow$(iv) is similar to that of (i)$\Rightarrow$(v), and the proof of (iii)$\Rightarrow$(i) is similar to that of (ii)$\Rightarrow$(i)). Then, putting $S=F_{USC}(X)$, we obtain (i)$\Leftrightarrow$(ii) from (i)$\Leftrightarrow$(iii), and (i)$\Leftrightarrow$(v) from (i)$\Leftrightarrow$(iv). So we have that (i), (ii), (iii), (iv) and (v) are equivalent to each other. ∎ Proposition 6.2. Let $u$ be a fuzzy set in $F_{USCG}(X)$, and let $\{u_{n}\}$ be a fuzzy set sequence in $F_{USCG}(X)$. Then the following properties are equivalent. (i)  $H_{\rm end}(u_{n},u)\to 0$. (ii)  $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and for each $\alpha\in(0,1]$, $\bigcup_{n=1}^{+\infty}[u_{n}]_{\alpha}$ is relatively compact in $(X,d)$. (iii)  $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and for each $\alpha\in(0,1]$, $\bigcup_{n=1}^{+\infty}[u_{n}]_{\alpha}\cup[u]_{\alpha}$ is compact in $(X,d)$.
(iv)  $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is closed in $(F_{USCG}(X),H_{\rm end})$, and for each $\alpha\in(0,1]$, $\bigcup_{n=1}^{+\infty}[u_{n}]_{\alpha}\cup[u]_{\alpha}$ is compact in $(X,d)$. Proof. The desired result follows from Proposition 6.1, Theorem 4.11 and Theorem 4.13. The proof is routine. Put $S=F_{USCG}(X)$ in Proposition 6.1. Then we obtain that the following conditions (a), (b) and (c) are equivalent. (a) $H_{\rm end}(u_{n},u)\to 0$. (b) $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and $\{u_{n},n=1,2,\ldots\}$ is a relatively compact set in $(F_{USCG}(X),H_{\rm end})$. (c) $\lim_{n\to\infty}^{(\Gamma)}u_{n}=u$, and $\{u_{n},n=1,2,\ldots\}\cup\{u\}$ is a compact set in $(F_{USCG}(X),H_{\rm end})$. (a) is (i). By Theorem 4.11, (b)$\Leftrightarrow$(ii). By Theorem 4.13, (c)$\Leftrightarrow$(iv). We can see that (iv)$\Rightarrow$(iii) $\Rightarrow$(ii). So from (a)$\Leftrightarrow$(b)$\Leftrightarrow$(c), we have that (i)$\Leftrightarrow$(ii)$\Leftrightarrow$ (iii)$\Leftrightarrow$(iv). ∎ Proposition 6.3. Let $\mathcal{D}$ be a nonempty subset of $C(X)$. Let $A$ be a set in $\mathcal{D}$, and let $\{A_{n}\}$ be a sequence of sets in $\mathcal{D}$. Then the following properties are equivalent. (i)  $H(A_{n},A)\to 0$. (ii)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\{A_{n},n=1,2,\ldots\}$ is a relatively compact set in $(C(X),H)$. (iii)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\{A_{n},n=1,2,\ldots\}$ is a relatively compact set in $(\mathcal{D},H)$. (iv)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\{A_{n},n=1,2,\ldots\}\cup\{A\}$ is a compact set in $(\mathcal{D},H)$. (v)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\{A_{n},n=1,2,\ldots\}\cup\{A\}$ is a compact set in $(C(X),H)$. Proof. The proof is similar to that of Proposition 6.1. ∎ Proposition 6.4. Let $A$ be a set in $K(X)$, and let $\{A_{n}\}$ be a sequence of sets in $K(X)$. Then the following properties are equivalent. (i)  $H(A_{n},A)\to 0$. 
(ii)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\bigcup_{n=1}^{+\infty}A_{n}$ is a relatively compact set in $(X,d)$. (iii)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\bigcup_{n=1}^{+\infty}A_{n}\cup A$ is a compact set in $(X,d)$. (iv)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, $\bigcup_{n=1}^{+\infty}A_{n}\cup A$ is a compact set in $(X,d)$, and $\{A_{n},n=1,2,\ldots\}\cup\{A\}$ is a closed set in $(K(X),H)$. Proof. The desired result follows from Proposition 6.3 and Theorems 4.2 and 4.3. The proof is routine and similar to that of Proposition 6.2. Put $\mathcal{D}=K(X)$ in Proposition 6.3. Then we obtain that the following conditions (a), (b) and (c) are equivalent. (a)  $H(A_{n},A)\to 0$. (b)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\{A_{n},n=1,2,\ldots\}$ is a relatively compact set in $(K(X),H)$. (c)  $\lim_{n\to\infty}^{(K)}A_{n}=A$, and $\{A_{n},n=1,2,\ldots\}\cup\{A\}$ is a compact set in $(K(X),H)$. (a) is (i). By Theorem 4.2, (b)$\Leftrightarrow$(ii). By Theorem 4.3, (c)$\Leftrightarrow$(iv). We can see that (iv)$\Rightarrow$(iii) $\Rightarrow$(ii). So from (a)$\Leftrightarrow$(b)$\Leftrightarrow$(c), we have that (i)$\Leftrightarrow$(ii)$\Leftrightarrow$ (iii)$\Leftrightarrow$(iv). ∎ Remark 6.5. Let $u\in F_{USCG}(X)$ and $\{u_{n}\}$ be a fuzzy set sequence in $F_{USC}(X)$. Let $\alpha\in(0,1]$. Since $[u]_{\alpha}$ is compact in $X$, we have that the conditions (a) $\bigcup_{n=1}^{+\infty}[u_{n}]_{\alpha}$ is relatively compact in $(X,d)$, and (b) $\bigcup_{n=1}^{+\infty}[u_{n}]_{\alpha}\cup[u]_{\alpha}$ is relatively compact in $(X,d)$, are equivalent. So “$\bigcup_{n=1}^{+\infty}[u_{n}]_{\alpha}$ is relatively compact in $(X,d)$” can be replaced by “$\bigcup_{n=1}^{+\infty}[u_{n}]_{\alpha}\cup[u]_{\alpha}$ is relatively compact in $(X,d)$” in clause (ii) of Proposition 6.2. Similar replacement can be made in Propositions 6.1, 6.3 and 6.4. Propositions 6.1, 6.2, 6.3 and 6.4 can be shown in different ways. Below we give some other proofs. 
Proposition 6.3 implies Proposition 6.1. Let $S$ be a nonempty subset of $F_{USC}(X)$. Let $u$ be a fuzzy set in $S$, and let $\{u_{n}\}$ be a fuzzy set sequence in $S$. Put $A={\rm end}\,u$, and for $n=1,2,\ldots$, put $A_{n}={\rm end}\,u_{n}$ in Proposition 6.3. Put $\mathcal{D}=\{{\rm end}\,u:u\in F_{USC}(X)\}$ in Proposition 6.3. Then from (i)$\Leftrightarrow$(iii) in Proposition 6.3, we obtain that (i)$\Leftrightarrow$(ii) in Proposition 6.1. Put $\mathcal{D}=\{{\rm end}\,u:u\in S\}$ in Proposition 6.3. Then from (i)$\Leftrightarrow$(iii) in Proposition 6.3, we obtain that (i)$\Leftrightarrow$(iii) in Proposition 6.1. Similarly, we can show that (i)$\Leftrightarrow$(iv) and (i)$\Leftrightarrow$(v) in Proposition 6.1. So Proposition 6.3 implies Proposition 6.1. Proposition 6.3, Theorem 4.11 and Theorem 4.13 imply Proposition 6.2. Let $u$ be a fuzzy set in $F_{USCG}(X)$, and let $\{u_{n}\}$ be a fuzzy set sequence in $F_{USCG}(X)$. Put $A={\rm end}\,u$, and for $n=1,2,\ldots$, put $A_{n}={\rm end}\,u_{n}$ in Proposition 6.3. Put $\mathcal{D}=\{{\rm end}\,u:u\in F_{USCG}(X)\}$ in Proposition 6.3. Then from (i)$\Leftrightarrow$(iii) in Proposition 6.3 and Theorem 4.11, we obtain that (i)$\Leftrightarrow$(ii) in Proposition 6.2. Similarly from (i)$\Leftrightarrow$(iv) in Proposition 6.3 and Theorem 4.13, we obtain that (i)$\Leftrightarrow$(iv) in Proposition 6.2. Since (iv)$\Rightarrow$(iii) $\Rightarrow$(ii) in Proposition 6.2, it follows that (i)$\Leftrightarrow$(ii)$\Leftrightarrow$(iii)$\Leftrightarrow$(iv) in Proposition 6.2. So Proposition 6.3, Theorem 4.11 and Theorem 4.13 imply Proposition 6.2. Proposition 6.1 implies Propositions 6.3 and 6.4. Proposition 6.2 implies Proposition 6.4. Proposition 6.6. (i) Let $A\in C(X)$ and $\{A_{n}\}$ a sequence in $C(X)$. Then $H_{\rm end}(\chi_{A_{n}},\chi_{A})\to 0$ if and only if $H(A_{n},A)\to 0$. (ii) Let $A\in P(X)$ and $\{A_{n}\}$ a sequence in $P(X)$. 
Then $\lim_{n\to\infty}^{(\Gamma)}\chi_{A_{n}}=\chi_{A}$ if and only if $\lim_{n\to\infty}^{(K)}A_{n}=A$. (iii) Let $\mathcal{D}$ be a subset of $C(X)$ and $\mathcal{B}$ a subset of $\mathcal{D}$. Then $\mathcal{B}$ is totally bounded (respectively, relatively compact, compact, closed) in $(\mathcal{D},H)$ if and only if $\mathcal{B}_{F(X)}$ is totally bounded (respectively, relatively compact, compact, closed) in $(\mathcal{D}_{F(X)},H_{\rm end})$. Proof. (i) and (iii) follow immediately from (12). (ii) follows from the definition of Kuratowski convergence and $\Gamma$-convergence. ∎ Let $\mathcal{D}$ be a nonempty subset of $C(X)$. Let $A$ be a set in $C(X)$, and let $\{A_{n}\}$ be a sequence of sets in $\mathcal{D}$. Let $S=C(X)_{F(X)}$ in Proposition 6.1. Then from (i)$\Leftrightarrow$(iii) in Proposition 6.1, we have that: (c-1) $H_{\rm end}(\chi_{A_{n}},\chi_{A})\to 0$ if and only if $\lim_{n\to\infty}^{(\Gamma)}\chi_{A_{n}}=\chi_{A}$, and $\{\chi_{A_{n}},n=1,2,\ldots\}$ is a relatively compact set in $(C(X)_{F(X)},H_{\rm end})$. By Proposition 6.6, (c-1) means that (i)$\Leftrightarrow$(ii) in Proposition 6.3. Let $S=\mathcal{D}_{F(X)}$ in Proposition 6.1. Then from (i)$\Leftrightarrow$(iii) in Proposition 6.1, we have that: (c-2) $H_{\rm end}(\chi_{A_{n}},\chi_{A})\to 0$ if and only if $\lim_{n\to\infty}^{(\Gamma)}\chi_{A_{n}}=\chi_{A}$, and $\{\chi_{A_{n}},n=1,2,\ldots\}$ is a relatively compact set in $(\mathcal{D}_{F(X)},H_{\rm end})$. By Proposition 6.6, (c-2) means that (i)$\Leftrightarrow$(iii) in Proposition 6.3. Similarly, by using Proposition 6.6, we can show that (i)$\Leftrightarrow$(iv) in Proposition 6.1 implies that (i)$\Leftrightarrow$(iv) and (i)$\Leftrightarrow$(v) in Proposition 6.3. By clauses (i) and (ii) of Proposition 6.6 and clause (iii) of Proposition 4.22, we can see that Proposition 6.2 implies Proposition 6.4.
By using level characterizations of $H_{\rm end}$ and $\Gamma$-convergence on fuzzy sets, it is easy to show that Proposition 6.4 also implies Proposition 6.2. 7 Conclusion In this paper, we present the characterizations of total boundedness, relative compactness and compactness in $(F_{USCG}(X),H_{\rm end})$. Here $X$ is a general metric space. Based on this, we also give the characterizations of total boundedness, relative compactness and compactness in $(F^{r}_{USCG}(X),H_{\rm end})$, $r\in[0,1]$. $(F^{r}_{USCG}(X),H_{\rm end})$, $r\in[0,1]$, are metric subspaces of $(F_{USCG}(X),H_{\rm end})$. The conclusions in this paper significantly improve the corresponding conclusions given in our previous paper [18]. Therein we give the characterizations of total boundedness, relative compactness and compactness in $(F_{USCG}(\mathbb{R}^{m}),H_{\rm end})$, where $\mathbb{R}^{m}$ is a special type of metric space. We discuss the relationship between the $H_{\rm end}$ metric and $\Gamma$-convergence as an application of the characterizations of relative compactness, total boundedness and compactness given in this paper. The results in this paper have potential applications in research on fuzzy sets involving the endograph metric and $\Gamma$-convergence. References (1) Dubois, D.; Prade, H. (Eds.), Fundamentals of Fuzzy Sets, Vol. 1 of the Handbooks of Fuzzy Sets; Kluwer: Boston, Mass, 2000. (2) Diamond, P.; Kloeden, P. Metric Spaces of Fuzzy Sets; World Scientific: Singapore, 1994. (3) Wu, C.; Ma, M. The Basic of Fuzzy Analysis (in Chinese); National Defence Industry Press: Beijing, 1991. (4) Gutiérrez García, J.; de Prada Vicente, M.A. Hutton $[0,1]$-quasi-uniformities induced by fuzzy (quasi-)metric spaces, Fuzzy Sets Syst. 2006, 157, 755–766. (5) Gong, Z.; Hao, Y. Fuzzy Laplace transform based on the Henstock integral and its applications in discontinuous fuzzy systems, Fuzzy Sets Syst. 2019, 358, 1–28. (6) Wang, L.; Mendel, J.
Fuzzy basis functions, universal approximation, and orthogonal least-squares learning, IEEE Trans. Neural Netw. 1992, 3(5), 807–814. (7) Wang, G.; Shi, P.; Wang, B.; Zhang, J. Fuzzy $n$-ellipsoid numbers and representations of uncertain multichannel digital information. IEEE Trans. Fuzzy Syst. 2014, 22(5), 1113–1126. (8) Kloeden, P.E. Compact supported endographs and fuzzy sets, Fuzzy Sets Syst. 1980, 4(2), 193–201. (9) Kloeden, P.E.; Lorenz, T. A Peano theorem for fuzzy differential equations with evolving membership grade, Fuzzy Sets Syst. 2015, 280, 1–26. (10) Kupka, J. On approximations of Zadeh’s extension principle, Fuzzy Sets Syst. 2016, 283, 26–39. (11) Cánovas, J.S.; Kupka, J. On fuzzy entropy and topological entropy of fuzzy extensions of dynamical systems, Fuzzy Sets Syst. 2017, 309 (15), 115–130. (12) Kelley, J. L. General Topology; Springer: New York, USA, 1975. (13) Román-Flores, H. The compactness of $E(X)$, Appl. Math. Lett. 1998, 11, 13–17. (14) Greco, G. H. Sendograph metric and relatively compact sets of fuzzy sets, Fuzzy Sets Syst. 2006, 157, 286–291. (15) Greco, G.H.; Moschen, M.P. Supremum metric and relatively compact sets of fuzzy sets, Nonlinear Anal. 2006, 64, 1325–1335. (16) Trutschnig, W. Characterization of the sendograph-convergence of fuzzy sets by means of their $L_{p}$- and levelwise convergence, Fuzzy Sets Syst. 2010, 161, 1064–1077. (17) Huang, H.; Wu, C. Characterizations of compact sets in fuzzy set spaces with $L_{p}$ metric, Fuzzy Sets Syst. 2018, 330, 16–40. (18) Huang, H. Characterizations of endograph metric and $\Gamma$-convergence on fuzzy sets, Fuzzy Sets Syst. 2018, 350, 55–84. (19) Huang, H. Properties of several fuzzy set spaces, arXiv:submit/4419682 [v2] [math.GM] 25 Jul 2022, submitted to Fuzzy Sets Syst. The early versions of this paper were submitted to ChinaXiv. See H. Huang, Properties of several fuzzy set spaces, chinaXiv:202107.00011. v2-v13. (20) Rojas-Medar, M.; Román-Flores, H.
On the equivalence of convergences of fuzzy sets, Fuzzy Sets Syst. 1996, 80, 217–224. (21) Huang, H. Some properties of Skorokhod metric on fuzzy sets, Fuzzy Sets Syst. 2022, 437, 35–52. (22) Beer, G. Topologies on Closed and Closed Convex Sets; Kluwer, 1993. (23) Klein, E.; Thompson, A.C. Theory of Correspondences; Wiley: New York, 1984.
V. Gerdt Laboratory of Information Technologies, Joint Institute for Nuclear Research 141980 Dubna, Russia University ‘‘Dubna’’ 141982 Dubna, Russia [email protected]    A. Khvedelidze Institute of Quantum Physics and Engineering Technologies, Georgian Technical University, Tbilisi, Georgia A. Razmadze Mathematical Institute, Iv. Javakhishvili Tbilisi State University, Tbilisi, Georgia National Research Nuclear University, MEPhI (Moscow Engineering Physics Institute), 115409 Moscow, Russia [email protected]    Yu. Palii Institute of Applied Physics, Chisinau, Republic of Moldova [email protected] Abstract Entangling properties of a mixed 2-qubit system can be described by the local homogeneous unitary invariant polynomials in elements of the density matrix. The structure of the corresponding invariant polynomial ring for the special subclass of states, the so-called mixed $X-$states, is established. It is shown that for the $X-$states there is an injective ring homomorphism between the quotient ring of $SU(2)\times SU(2)$ invariant polynomials modulo its syzygy ideal and the $SO(2)\times SO(2)-$invariant ring freely generated by five homogeneous polynomials of degrees 1,1,1,2,2.
On the ring of local unitary invariants for mixed $X-$states of two qubits Contents 1 Introduction 2 Framework and settings 2.1 General algebraic settings and conventions 2.2 Settings for quantum systems 3 Applying invariant theory 3.1 Basis of the ${SU(2)\times SU(2)}\--$invariant ring 4 Constructing invariant polynomial ring of ${X-}$states 4.1 $X-$states 4.2 Restriction of Quesne’s invariants to the $X-$states subspace 4.3 Syzygy ideal in ${\mathbb{R}[\mathcal{P}]}$ 4.4 Mapping ${\mathbb{R}[\mathcal{P}]}$ to a freely generating ring 5 Concluding remarks 1 Introduction $\bullet$  Motivation $\bullet$ In this paper, we consider a bipartite quantum system composed of two qubits, whose state space, $\mathfrak{P}_{X},$ is a special 7-dimensional family of the so-called $X-$states [1]. Our interest in this subspace of the generic two-qubit state space $\mathfrak{P}$ is due to the fact that many well-known states, e.g. the Bell states [2], Werner states [3], isotropic states [4] and maximally entangled mixed states [5, 6], are particular subsets of the $X-$states. Since their introduction in [1], many interesting properties of $X-$states have been established. In particular, it was shown that for a fixed set of eigenvalues the states of maximal concurrence, negativity or relative entropy of entanglement are $X-$states. (For a detailed review of the $X-$states and their applications we refer to the recent article [7].) $\bullet$  Content $\bullet$ Here we pose the question of the algebraic structure of the algebra of local unitary polynomial invariants corresponding to the $X-$states. More precisely, the fate of the generic $SU(2)\times SU(2)$-invariant polynomial ring of two qubits [8]–[11] under the restriction of the total two-qubit state space $\mathfrak{P}$ to its subspace $\mathfrak{P}_{X}$ will be discussed. The quotient structure of the ring obtained as a result of this restriction will be determined.
Furthermore, we establish an injective homomorphism between this ring and the invariant ring ${\mathbb{R}}[\mathfrak{P}_{X}]^{SO(2)\times SO(2)}$ of local unitary invariant polynomials for 2-qubit $X-$states. In doing so, we show that the latter ring is freely generated by five homogeneous invariants of degrees 1,1,1,2,2. 2 Framework and settings In this section we collect the main algebraic structures associated with a finite-dimensional quantum system. 2.1 General algebraic settings and conventions Hereafter, we use the standard notation $\mathbb{R}[x_{1},\ldots,x_{n}]$ for the ring of polynomials in $n$ variables $x_{1},\ldots,x_{n}$ with coefficients in $\mathbb{R}\,.$ Given a polynomial set $$F:=\{\,f_{1},\dots,f_{m}\,\}\subset\mathbb{R}[x_{1},\dots,x_{n}]\,,$$ (1) generating the subring $$\mathbb{R}[F]:=\mathbb{R}[f_{1},\dots,f_{m}]\subset\mathbb{R}[x_{1},\ldots,x_{n}],$$ (2) we shall consider the polynomial ring $\mathbb{R}[y_{1},\ldots,y_{m}]$ associated with $\mathbb{R}[F]$, where $y_{1},\ldots,y_{m}$ are variables (indeterminates). Note that $\mathbb{R}[F]$ differs from the ideal $I_{F}=\langle F\rangle\subseteq\mathbb{R}[x_{1},\ldots,x_{n}]\,,$ generated by $F$: $$I_{F}=\left\{\sum_{i=1}^{m}h_{i}f_{i}\,\mid\,h_{1},\dots,h_{m}\in\mathbb{R}[x_{1},\ldots,x_{n}]\right\}\,.$$ (3) The polynomial set $F$ defines the real affine variety $V\subset\mathbb{R}^{n}$. The radical ideal $I(V):=\sqrt{I_{F}}$ of $I_{F}$, i.e. the ideal such that $f\in\sqrt{I_{F}}$ iff $f^{k}\in I_{F}$ for some positive integer $k$, yields the coordinate ring of $V$ as the quotient ring $$\mathbb{R}[x_{1},\ldots,x_{n}]/I(V)\,.$$ (4) A nonzero polynomial $g(y_{1},\ldots,y_{m})\in\mathbb{R}[y_{1},\ldots,y_{m}]$ such that $g(f_{1},\ldots,f_{m})=0$ in $\mathbb{R}[x_{1},\ldots,x_{n}]$ is called a syzygy or a nontrivial algebraic relation among $f_{1},\ldots,f_{m}$.
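As a toy illustration of a syzygy (our own example, not one of the invariants considered below): for $f_{1}=x^{2}$ and $f_{2}=x^{3}$ in $\mathbb{R}[x]$, the polynomial $g(y_{1},y_{2})=y_{1}^{3}-y_{2}^{2}$ satisfies $g(f_{1},f_{2})=x^{6}-x^{6}=0$. A short Python check:

```python
# Toy illustration (ours, not the paper's): with f1 = x^2 and
# f2 = x^3 in R[x], the polynomial g(y1, y2) = y1^3 - y2^2 is a
# syzygy, since g(f1, f2) = x^6 - x^6 = 0 identically.

f1 = lambda x: x ** 2
f2 = lambda x: x ** 3
g = lambda y1, y2: y1 ** 3 - y2 ** 2

# g(f1, f2) is a polynomial of degree at most 6; vanishing at 7
# distinct points certifies that it is identically zero.
samples = [-3, -2, -1, 0, 1, 2, 3]
print(all(g(f1(x), f2(x)) == 0 for x in samples))  # True
```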
The set of all syzygies forms the syzygy ideal $$I_{F}:=\{\,s\in\mathbb{R}[y_{1},\ldots,y_{m}]\ \mid\ s(f_{1},\ldots,f_{m})=0\ \text{in}\ \mathbb{R}[x_{1},\ldots,x_{n}]\,\}\,.$$ In doing so, the following ring isomorphism holds (cf. [12], Ch.7, Prop.2) $$\mathbb{R}[F]\cong\mathbb{R}[y_{1},\ldots,y_{m}]/I_{F}\,.$$ (5) Given an ideal $I_{F}$ in (3), a subset $\mathfrak{X}\subseteq\{\,x_{1},\ldots,x_{n}\,\}$ of indeterminates is called independent modulo $I_{F}$ if $I_{F}\cap\mathbb{R}[\mathfrak{X}]=\{0\}$. Otherwise, $\mathfrak{X}$ is called dependent modulo $I_{F}$. The affine dimension of $I_{F}$, denoted by $\dim(I_{F})$, is defined to be the cardinality of a largest subset independent modulo $I_{F}$. If $I_{F}=\mathbb{R}[x_{1},\ldots,x_{n}]=\langle 1\rangle$, then the affine dimension of $I_{F}$ is defined to be $-1$. The ring of elements in $\mathbb{R}[x_{1},\ldots,x_{n}]$ invariant under the action of a group $G$ on $\{\,x_{1},\ldots,x_{n}\,\}$ will be denoted by $\mathbb{R}[x_{1},\ldots,x_{n}]^{G}$. 2.2 Settings for quantum systems The mathematical structures associated with finite dimensional quantum systems, in particular with multi-qubit systems, can be described as follows. $\bullet$  Quantum state space $\bullet$ Introducing the space of $n\times n$ Hermitian matrices, $H_{n}\,,$ one can identify the density operators of an individual qubit and of a pair of qubits with certain varieties in $H_{2}\,$ and $H_{4}\,$ respectively. In general, for an $n\--$dimensional quantum system this variety, the state space $\mathfrak{P}(H_{n}),$ is given as the subset of elements from $H_{n}\,,$ which satisfy the semipositivity and unit trace conditions: $$\mathfrak{P}(H_{n}):=\{\varrho\in H_{n}\mid\varrho\geq 0\,,\mbox{tr}\varrho=1\}\,,$$ $\bullet$  Unitary symmetry of state space $\bullet$ The traditional guiding philosophy of studying physical models is based on the symmetry principle.
In the case of quantum theory the basic symmetry is realized in the form of the adjoint action of the unitary group $U(n)$ on $H_{n}$: $$(g,\varrho)\to g\varrho g^{\dagger}\,,\qquad g\in U(n)\,,\quad\varrho\in H_{n}\,.$$ (6) Owing to this global unitary symmetry, the correspondence between states and physically relevant configurations is not one to one. All density matrices along the unitary orbit, $$\mathcal{O}_{\varrho}=\{g\varrho g^{\dagger}\mid g\in SU(n)\}\,,$$ represent one and the same physical state. The symmetry transformations (6) set the equivalence relation $\varrho\ {\sim}\ g\varrho g^{\dagger}$ on the state space $\mathfrak{P}(H_{n})\,.$ This equivalence defines the factor space $\mathfrak{P}(H_{n})/\sim$ and allows one to ‘‘reduce’’ the above outlined ‘‘redundant’’ description of a quantum system by passing to the global unitary orbit space, $\mathfrak{P}(H_{n})/U(n)$. The global unitary orbit space accumulates all physically relevant information about the system as a whole. Characteristics of $\mathfrak{P}(H_{n})/U(n)$ as an algebraic variety are encoded in the center of the universal enveloping algebra $\mathfrak{U}(su(n))\,,$ and can be described in terms of the algebra of real $SU(n)-$invariant polynomials on $\mathfrak{P}(H_{n}).$ $\bullet$  Composite quantum systems $\bullet$ If the space $H_{n}$ is associated with a composite quantum system, then another, so-called local unitary group symmetry comes into play. Restricting ourselves to the case of a two-qubit system, the local unitary group is identified with the subgroup $G=SU(2)\times SU(2)\subset SU(4)\,$ of the global unitary group $SU(4)\,.$ In contrast to the global unitary symmetry, the local unitary group sets an equivalence between states of a composite system which have one and the same entangling properties. The algebra of the corresponding local unitary $G-$invariant polynomials can be used for a quantitative characterization of entanglement.
With applications to the 2-qubit system in mind, it is convenient to introduce in this algebra of local unitary $G\--$invariant polynomials a $\mathbb{Z}^{3}-$grading. This can be achieved by splitting off the algebra $\imath\mathfrak{su}(4)$ inside $H_{4}$: $$\varrho=\frac{1}{4}\left[I_{4}+\imath\mathfrak{su}(4)\right]\,,\qquad I_{4}\ \mbox{is the identity $4\times 4$ matrix}$$ and using the decomposition of the latter into the direct sum of three real spaces $$V_{1}=\imath\mathfrak{su}(2)\otimes I_{2}\,,\qquad V_{2}=I_{2}\otimes\imath\mathfrak{su}(2)\,,\qquad V_{3}=\imath\mathfrak{su}(2)\otimes\imath\mathfrak{su}(2)\,,$$ (7) each representing a $G-$invariant subspace. Note that, if the basis for the $\mathfrak{su}(2)$ algebra in each subspace $V_{i}$ is chosen using the Pauli matrices $\boldsymbol{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})\,,$ the above $G-$invariant $\mathbb{Z}^{3}-$grading gives $$\varrho=\frac{1}{4}\left[I_{2}\otimes I_{2}+\sum_{i=1}^{3}a_{i}\sigma_{i}\otimes{I}_{2}+\sum_{i=1}^{3}b_{i}I_{2}\otimes\sigma_{i}+\sum_{i,j=1}^{3}c_{ij}\sigma_{i}\otimes\sigma_{j}\right]\,.$$ (8) This representation of a 2-qubit state is known as the Fano decomposition [13]. The real parameters $a_{i},b_{i}$ and $c_{ij}\,,\,i,j=1,2,3\,,$ are subject to constraints coming from the semipositivity condition imposed on the density matrix: $$\varrho\geq 0\,.$$ (9) Explicitly, the semipositivity condition (9) reads as a set of polynomial inequalities in the fifteen variables $a_{i},b_{i}$ and $c_{ij}\,$ (see, e.g., [11] and references therein). 3 Applying invariant theory The entangling properties of composite quantum systems admit a description within the general framework of the classical theory of invariants (see books [14, 15] and references therein). As mentioned above, for the case of two qubits the local unitary group is $G=SU(2)\times SU(2)\,$.
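As a quick numerical sanity check of the Fano decomposition (8), the following minimal sketch (ours, using NumPy, and ignoring the semipositivity constraint (9)) assembles a matrix $\varrho$ from random Fano parameters and recovers the parameters back as traces with the corresponding basis elements:

```python
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),       # sigma_1
       np.array([[0, -1j], [1j, 0]], dtype=complex),    # sigma_2
       np.array([[1, 0], [0, -1]], dtype=complex)]      # sigma_3

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)
c = rng.normal(size=(3, 3))

# assemble rho according to the Fano decomposition (8)
rho = np.kron(I2, I2).astype(complex)
for i in range(3):
    rho += a[i] * np.kron(sig[i], I2) + b[i] * np.kron(I2, sig[i])
    for j in range(3):
        rho += c[i, j] * np.kron(sig[i], sig[j])
rho /= 4

assert np.allclose(rho, rho.conj().T)        # Hermitian by construction
assert np.isclose(np.trace(rho).real, 1.0)   # unit trace
# the parameters are recovered as traces with the basis elements
for i in range(3):
    assert np.isclose(np.trace(rho @ np.kron(sig[i], I2)).real, a[i])
    assert np.isclose(np.trace(rho @ np.kron(I2, sig[i])).real, b[i])
    for j in range(3):
        assert np.isclose(np.trace(rho @ np.kron(sig[i], sig[j])).real, c[i, j])
```

The recovery formulas $a_{i}=\mbox{tr}\,[\varrho\,(\sigma_{i}\otimes I_{2})]$, etc., follow from the orthogonality $\mbox{tr}\,(\sigma_{i}\sigma_{j})=2\delta_{ij}$.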
The adjoint action (6) of this group on the 2-qubit density matrix $\rho$ induces transformations on the space $W\,,$ defined by the 15 real Fano variables (8), $$W:=\{\,(a_{i},b_{j},c_{kl})\in\mathbb{R}^{15}\mid i,j,k,l=1,2,3\,\}\,,$$ (10) and the corresponding $G\--$invariant polynomials accumulate all relevant information on the two-qubit entanglement. (More precisely, in correspondence with the above-mentioned $\mathbb{Z}^{3}-$grading, the space $W$ is the representation space of the irreducible representations $D_{1}\times D_{0}\,,$ $D_{0}\times D_{1}\,$ and $D_{1}\times D_{1}\,$ of $SU(2)\times SU(2)$, respectively.) Now we will give some known results on the structure of the ring of $G\--$invariant polynomials. It is worth noting that most of these results are applicable to linear actions of compact groups on linear spaces and thus cannot be directly used for a description of quantum systems due to the semipositivity of the density matrix (9). However, for a moment we relax the semipositivity constraints on the Fano parameters and identify the space $W$ with $\mathbb{R}^{15}\,.$ The positivity of density matrices can be written in a $G\--$invariant form and therefore can be taken into account later. $\bullet$  The ring of $G-$invariant polynomials $\bullet$ Let $\mathbb{R}[W]:=\mathbb{R}[x_{1},x_{2},\dots,x_{15}]$ be the coordinate ring of $W$ (with the ideal $I(W)=\{0\}$ in (4)) and its subring $R:={\mathbb{R}}[W]^{G}\subset\mathbb{R}[W]$ be the ring of polynomials invariant under the above-mentioned transformations on $W\,.$ The invariant polynomial ring $R$ has the following important properties [10, 15]. • $R$ is a graded algebra over $\mathbb{R}$, and according to the classical Hilbert theorem there is a finite set of homogeneous fundamental invariants generating ${R}$ as an $\mathbb{R}-$algebra.
• The invariant ring $R$ is Cohen-Macaulay, that is, $R$ is a finitely generated free module over $\mathbb{R}[F_{p}]$ (Hironaka decomposition) $$R=\bigoplus_{f_{k}\in F_{s}}f_{k}\,{\mathbb{R}}[F_{p}]\,,$$ where $F_{p}$ is a set of algebraically independent primary invariants, or homogeneous system of parameters [15], sometimes called an integrity basis, and $F_{s}$ is a set of linearly independent secondary invariants. In doing so, $1\in F_{s}$ and the set $F_{p}\cup F_{s}$ generates $R$. • Let $R_{k}$ be a subspace spanned by all homogeneous invariants in $R$ of degree $k$. If this subspace has dimension $d_{k}$, then the corresponding Molien series $$M(q)=\sum_{k=0}^{\infty}d_{k}q^{k}$$ (11) generated by the Molien function $M(q)$ contains information on the number of primary and secondary invariants and their degrees (see formula (12) in the next Section). • Orbit separation: $$\forall u,v\in W\ \text{such that}\ G\cdot u\neq G\cdot v\ :\ \exists p\in R\ \text{such that}\ p(u)\neq p(v)\,.$$ Because of the $G-$invariance of polynomials in $R$, their orbit separation property and the Noetherianity of $R$, the use of fundamental invariants is natural in the description of the orbit space of a linear action of a compact Lie group, and in particular of the $G-$invariant entanglement space of 2-qubit states. $\bullet$  Computational aspects $\bullet$ Constructive methods and algorithms for computing homogeneous generators of invariant rings are the main research objects of computational invariant theory [15, 16]. There are various algorithms known in the literature together with their implementations in computer algebra software, e.g. Maple, Singular, Magma (see book [15], Ch.3 and the more recent paper [17]).
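As a toy illustration of such invariant-ring computations (our own minimal example in SymPy, not one of the packages mentioned above): for the finite group $\mathbb{Z}_{2}$ acting by $(x,y)\to(-x,-y)$, averaging low-degree monomials over the group (the Reynolds operator) already yields the fundamental invariants of the corresponding invariant ring.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Reynolds operator of Z_2 acting by (x, y) -> (-x, -y):
# the average of a polynomial over the two group elements
def reynolds(f):
    return sp.expand((f + f.subs([(x, -x), (y, -y)], simultaneous=True)) / 2)

# averaging the monomials of degree <= 2 yields the fundamental invariants
monomials = [x, y, x**2, x*y, y**2]
generators = {reynolds(m) for m in monomials} - {0}
assert generators == {x**2, x*y, y**2}
```

For compact but infinite groups such as $SU(2)\times SU(2)$ the averaging is an integral over the group, which is what makes the computation drastically harder.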
But, unfortunately, the construction of a basis of invariants for $SU(2)\times SU(2)$ is computationally too hard for all those algorithms, which are oriented to rather wide classes of algebraic groups; the integrity basis together with the secondary invariants for this group has been constructed (see [10] and references therein) by methods exploiting its particular properties. We shall use this basis in the next sections. Moreover, even our attempts to verify the algebraic independence of the primary invariants, that is, to check that the variety in $\mathbb{C}^{15}$ defined by the polynomial set $F_{p}$ is $\{0\}$, by using the standard Gröbner basis technique for algebraic elimination, failed because of the excessive computer resources required. 3.1 Basis of the ${SU(2)\times SU(2)}\--$invariant ring For two qubits the basis of the polynomial ring ${\mathbb{R}}[W]^{SU(2)\times SU(2)}$ was constructed in [10]. The explicit form of its elements will be presented below. As mentioned above, the space of polynomials in the fifteen variables (10) is decomposed into irreducible representations of $SO(3)\times SO(3)$. Furthermore, it inherits the $\mathbb{Z}^{3}-$grading of $H_{4}$: the space of homogeneous polynomials of degrees $s,t,q$ in $a_{i},b_{i},c_{ij}\ (i,j=1,2,3)$, respectively, is invariant under the action of ${SU(2)\times SU(2)}$. All such invariants $C$ can therefore be classified according to their degrees $s,t,q$ of homogeneity in $a_{i},b_{i},c_{ij}$. Following Quesne’s construction [8], we shall denote them by ${C}^{(s\,t\,q)}$. The degrees of the homogeneous polynomials can be controlled from the knowledge of the Molien function.
The Molien function for mixed states of two qubits [8, 9, 10], $$M(q)=\frac{1+q^{4}+q^{5}+{3}q^{6}+{2}q^{7}+{2}q^{8}+{3}q^{9}+q^{10}+q^{11}+q^{15}}{(1-q)(1-q^{2})^{3}(1-q^{3})^{2}(1-q^{4})^{3}(1-q^{6})}\,,$$ (12) shows that the integrity basis of the invariant ring consists of 10 primary invariants of degrees $1,2,2,2,3,3,4,4,4,6\,$, and there are 15 secondary invariants whose degrees are given by $4,5,6,6,6,7,7,8,8,9,9,9,10,11,15\,$. The Quesne invariants serve as the source of such primary and secondary invariants. Explicitly, the Quesne invariants read: 3 invariants of the second degree $$C^{(002)}=c_{ij}c_{ij}\,,\quad C^{(200)}=a_{i}a_{i}\,,\quad C^{(020)}=b_{i}b_{i}\,,$$ 2 invariants of the third degree $$C^{(003)}=\frac{1}{3!}\epsilon_{ijk}\epsilon_{\alpha\beta\gamma}c_{i\alpha}c_{j\beta}c_{k\gamma}\,,\qquad C^{(111)}=a_{i}c_{ij}b_{j}\,,$$ 4 invariants of the fourth degree $$C^{(004)}=c_{i\alpha}c_{i\beta}c_{j\alpha}c_{j\beta}\,,\qquad C^{(202)}=a_{i}a_{j}c_{i\alpha}c_{j\alpha}\,,$$ $$C^{(022)}=b_{\alpha}b_{\beta}c_{i\alpha}c_{i\beta}\,,\qquad C^{(112)}=\frac{1}{2}\,\epsilon_{ijk}\epsilon_{\alpha\beta\gamma}a_{i}b_{\alpha}c_{j\beta}c_{k\gamma}\,,$$ 1 invariant of the fifth degree $$C^{(113)}=a_{i}c_{i\alpha}c_{\beta\alpha}c_{\beta j}b_{j}\,,$$ 4 invariants of the sixth degree $$C^{(123)}=\epsilon_{ijk}b_{i}c_{\alpha j}a_{\alpha}c_{\beta k}c_{\beta l}b_{l}\,,\qquad C^{(204)}=a_{i}c_{i\alpha}c_{j\alpha}c_{j\beta}c_{k\beta}a_{k}\,,$$ $$C^{(024)}=b_{i}c_{\alpha i}c_{\alpha j}c_{\beta j}c_{\beta k}b_{k}\,,\qquad C^{(213)}=\epsilon_{\alpha\beta\gamma}a_{\alpha}c_{\beta i}b_{i}c_{\gamma j}c_{\delta j}a_{\delta}\,,$$ 2 invariants of the seventh degree $$C^{(214)}=\epsilon_{ijk}b_{i}c_{\alpha j}a_{\alpha}c_{\beta k}c_{\beta l}c_{\gamma l}a_{\gamma}\,,\qquad C^{(124)}=\epsilon_{\alpha\beta\gamma}a_{\alpha}c_{\beta j}b_{j}c_{\gamma k}c_{\delta k}c_{\delta l}b_{l}\,,$$ 2 invariants of the eighth degree $$C^{(125)}=\epsilon_{ijk}b_{i}c_{\alpha j}c_{\alpha l}b_{l}c_{\beta k}c_{\beta m}c_{\gamma m}a_{\gamma}\,,\qquad C^{(215)}=\epsilon_{\alpha\beta\gamma}a_{\alpha}c_{\beta i}c_{\delta i}a_{\delta}c_{\gamma k}c_{\varrho k}c_{\varrho l}b_{l}\,,$$ 2 invariants of the ninth degree $$C^{(306)}=\epsilon_{\alpha\beta\gamma}a_{\alpha}c_{\beta i}c_{\delta i}a_{\delta}c_{\gamma j}c_{\varrho j}c_{\varrho k}c_{\sigma k}a_{\sigma}\,,\qquad C^{(036)}=\epsilon_{ijk}b_{i}c_{\alpha j}c_{\alpha l}b_{l}c_{\beta k}c_{\beta m}c_{\gamma m}c_{\gamma s}b_{s}\,.$$ In the above formulas the summation over all repeated indices from one to three is assumed. 4 Constructing invariant polynomial ring of ${X-}$states Now we shall discuss the fate of the ${SU(2)\times SU(2)}\--$invariant polynomial ring when the state space of two qubits is restricted to the subspace of the $X\--$states. We start with a very brief summary of the $X-$states’ characteristics. 4.1 $X-$states Consider the subspace $\mathfrak{P}_{X}\subset\mathfrak{P}(\mathbb{R}^{15})$ of the $X-$states.
These states owe their name to the visual similarity of the density matrix, whose non-zero entries lie only on the main and minor (secondary) diagonals, with the Latin letter ‘‘X’’: $$\varrho_{X}:=\left(\begin{array}[]{cccc}\varrho_{11}&0&0&\varrho_{14}\\ 0&\varrho_{22}&\varrho_{23}&0\\ 0&\varrho_{32}&\varrho_{33}&0\\ \varrho_{41}&0&0&\varrho_{44}\end{array}\right)\,.$$ (13) In (13) the diagonal entries are real numbers, while the elements of the minor diagonal are pairwise complex conjugated, $\varrho_{14}=\overline{\varrho}_{41}$ and $\varrho_{23}=\overline{\varrho}_{32}\,.$ Comparing with the Fano decomposition (8) one can see that the $X\--$states belong to the 7-dimensional subspace $W_{X}$ of the vector space $W$ (10) defined as: $$W_{X}:=\{\,w\in W\ |\ c_{13}=c_{23}=c_{31}=c_{32}=0\,,a_{i}=b_{i}=0\,,\ i=1,2\,\}$$ The $X-$matrices represent density operators that do not mix the two subspaces of the Hilbert space $\mathcal{H}_{4}$ spanned by the basis vectors with indices 1,4 and 2,3, respectively.
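The membership of an $X-$matrix in $W_{X}$ can be confirmed directly (our sketch, using NumPy): the Fano coefficients of a sample matrix of the form (13) are extracted as traces with the basis $\sigma_{i}\otimes\sigma_{j}$, and exactly the entries excluded from $W_{X}$ vanish.

```python
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

# an X-matrix (13): real diagonal, anti-diagonal pairs complex conjugated
r14, r23 = 0.1 + 0.2j, -0.05 + 0.15j
rho = np.array([[0.4, 0, 0, r14],
                [0, 0.3, r23, 0],
                [0, np.conj(r23), 0.2, 0],
                [np.conj(r14), 0, 0, 0.1]])

# its Fano parameters
a = [np.trace(rho @ np.kron(sig[i], I2)).real for i in range(3)]
b = [np.trace(rho @ np.kron(I2, sig[i])).real for i in range(3)]
c = [[np.trace(rho @ np.kron(sig[i], sig[j])).real for j in range(3)]
     for i in range(3)]

# constraints defining W_X: a_1 = a_2 = b_1 = b_2 = 0 and
# c_13 = c_23 = c_31 = c_32 = 0
assert np.allclose(a[:2], 0) and np.allclose(b[:2], 0)
assert np.allclose([c[0][2], c[1][2], c[2][0], c[2][1]], 0)
```

The remaining parameters $a_{3},b_{3},c_{11},c_{12},c_{21},c_{22},c_{33}$ are exactly the seven coordinates on $W_{X}$ used below.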
This can be easily verified by using the permutation matrix $$P_{\pi}=\left[\begin{matrix}1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\\ 0&1&0&0\end{matrix}\right]$$ that corresponds to the permutation $$\pi=\left(\begin{matrix}1&2&3&4\\ 1&4&3&2\end{matrix}\right)\,:$$ the $X-$states can be transformed into the $2\times 2$ block-diagonal form $$\varrho_{X}=P_{\pi}\left(\begin{array}[]{cccc}\varrho_{11}&\varrho_{14}&0&0\\ \varrho_{41}&\varrho_{44}&0&0\\ 0&0&\varrho_{33}&\varrho_{32}\\ 0&0&\varrho_{23}&\varrho_{22}\end{array}\right)P_{\pi}\,.$$ (14) 4.2 Restriction of Quesne’s invariants to the $X-$states subspace Now we consider the restriction of Quesne’s fundamental invariants ${C}^{(s\,t\,q)}$ written above to the subspace $W_{X}.$ The straightforward evaluation shows that the set of fundamental invariants restricted to the $X-$state subspace $W_{X}$ reduces to 12 nonzero invariants: $${\small\mathcal{P}=\{C^{200},C^{020},C^{002},C^{111},C^{003},C^{202},C^{022},C^{004},C^{112},C^{113},C^{204},C^{024}\}.}$$ (15) The explicit form of these invariants as polynomials in seven real variables, the coordinates on $W_{X}\,,$ $$W_{X}:=\{\,(\alpha,\beta,\gamma,c_{11},c_{12},c_{21},c_{22})\in\mathbb{R}^{7}\,\}\,,$$ (16) where $\alpha:=a_{3}\,,\ \beta:=b_{3}\,,\ \gamma:=c_{33}\,,$ is given by $$\displaystyle\mbox{deg}=2$$ $$\displaystyle C^{200}=\alpha^{2},\quad C^{020}=\beta^{2},\quad C^{002}=c_{11}^{2}+c_{12}^{2}+c_{21}^{2}+c_{22}^{2}+\gamma^{2},$$ $$\displaystyle\mbox{deg}=3$$ $$\displaystyle C^{111}=\alpha\beta\gamma,\quad C^{003}=\gamma(c_{11}c_{22}-c_{12}c_{21}),$$ $$\displaystyle\mbox{deg}=4$$ $$\displaystyle C^{202}=\alpha^{2}\gamma^{2},\quad C^{022}=\beta^{2}\gamma^{2},\quad C^{112}=\alpha\beta(c_{11}c_{22}-c_{12}c_{21}),$$ $$\displaystyle C^{004}=\left(c_{11}^{2}+c_{12}^{2}+c_{21}^{2}+c_{22}^{2}\right)^{2}-2(c_{11}c_{22}-c_{12}c_{21})^{2}+\gamma^{4},$$ $$\displaystyle\mbox{deg}=5$$ $$\displaystyle C^{113}=\alpha\beta\gamma^{3},$$ $$\displaystyle\mbox{deg}=6$$ $$\displaystyle C^{204}=\alpha^{2}\gamma^{4},\quad
C^{024}=\beta^{2}\gamma^{4}.$$ Now, having the set of polynomials $\mathcal{P}$ in (15), one can consider the polynomial ring $\mathbb{R}[\mathcal{P}]\subset\mathbb{R}[W_{X}]$ generated by $\mathcal{P}$. (Hereafter, slightly abusing notation, we shall write $\mathbb{R}[W]$ and $\mathbb{R}[W_{X}]$ for the coordinate rings of the variety $W$ in (10) and of its subvariety $W_{X}$, respectively. To save space, the coordinate ring of $W_{X}$ will be denoted as $\mathbb{R}[w_{1},\ldots,w_{7}]\equiv\mathbb{R}[\alpha,\beta,\gamma,c_{11},c_{12},c_{21},c_{22}]\,.$) 4.3 Syzygy ideal in ${\mathbb{R}[\mathcal{P}]}$ According to the isomorphism (5), mentioned in Section 2.1, the subring $\mathbb{R}[\mathcal{P}_{1},\dots,\mathcal{P}_{12}]$ can be written in the quotient form $$\mathbb{R}[\mathcal{P}_{1},\dots,\mathcal{P}_{12}]\cong{\mathbb{R}[y_{1},y_{2}\dots y_{12}]}/{I_{\mathcal{P}}}\,,$$ (17) with the syzygy ideal $I_{\mathcal{P}}$ for $\mathcal{P}$ $$I_{\mathcal{P}}:=\left\{h\in\mathbb{R}[y_{1},\dots,y_{12}]\,\mid\,h(\mathcal{P}_{1},\dots,\mathcal{P}_{12})=0\ \text{in}\ \mathbb{R}[w_{1},\ldots,w_{7}]\right\}\,.$$ The syzygy ideal can be determined by applying the well-known elimination technique [16]. Following this method we compute a Gröbner basis of the ideal $$J_{\mathcal{P}}=\langle\mathcal{P}_{1}-y_{1},\dots,\mathcal{P}_{12}-y_{12}\rangle\subset\mathbb{R}[w_{1},\dots,w_{7},y_{1},\dots,y_{12}]$$ for the lexicographic ordering $$\displaystyle c_{11}\succ c_{12}\succ c_{21}\succ c_{22}\succ\alpha\succ\beta\succ\gamma\succ$$ $$\displaystyle\succ y_{12}\succ y_{11}\succ y_{10}\succ y_{8}\succ y_{9}\succ y_{7}\succ y_{6}\succ y_{5}\succ y_{4}\succ y_{3}\succ y_{2}\succ y_{1}\,.$$ The intersection of the obtained Gröbner basis with $\mathbb{R}[y_{1},\ldots,y_{12}]$ forms a lexicographic Gröbner basis of the syzygy ideal $I_{\mathcal{P}}$.
This basis consists of the following 37 polynomials $$\displaystyle I_{\mathcal{P}}=$$ $$\displaystyle\langle\,y_{2}y_{6}-y_{4}^{2},\ y_{1}y_{7}-y_{4}^{2}\,,\ -y_{1}y_% {2}y_{5}+y_{4}y_{9}\,,\ -y_{1}y_{4}y_{5}+y_{6}y_{9}\,,$$ $$\displaystyle-y_{2}y_{4}y_{5}+y_{7}y_{9}\,,\ -y_{1}y_{2}y_{3}^{2}+y_{1}y_{2}y_% {8}+2y_{3}y_{4}^{2}-2y_{6}y_{7}+2y_{9}^{2}\,,$$ $$\displaystyle-y_{1}^{2}y_{3}^{2}y_{4}+y_{1}^{2}y_{4}y_{8}+2y_{1}^{2}y_{5}y_{9}% +2y_{1}y_{3}y_{4}y_{6}-2y_{4}y_{6}^{2}\,,$$ $$\displaystyle-y_{2}^{2}y_{3}^{2}y_{4}+y_{2}^{2}y_{4}y_{8}+2y_{2}^{2}y_{5}y_{9}% +2y_{2}y_{3}y_{4}y_{7}-2y_{4}y_{7}^{2}\,,$$ $$\displaystyle 2y_{1}^{2}y_{2}y_{5}^{2}-y_{1}y_{3}^{2}y_{4}^{2}+y_{1}y_{4}^{2}y% _{8}+2y_{3}y_{4}^{2}y_{6}-2y_{6}^{2}y_{7}\,,$$ $$\displaystyle 2y_{1}y_{2}^{2}y_{5}^{2}-y_{2}y_{3}^{2}y_{4}^{2}+y_{2}y_{4}^{2}y% _{8}+2y_{3}y_{4}^{2}y_{7}-2y_{6}y_{7}^{2}\,,$$ $$\displaystyle 2y_{1}y_{2}y_{4}^{2}y_{5}^{2}-y_{3}^{2}y_{4}^{4}+2y_{3}y_{4}^{2}% y_{6}y_{7}+y_{4}^{4}y_{8}-2y_{6}^{2}y_{7}^{2}\,,$$ $$\displaystyle 2y_{1}^{3}y_{5}^{2}-y_{1}^{2}y_{3}^{2}y_{6}+y_{1}^{2}y_{6}y_{8}+% 2y_{1}y_{3}y_{6}^{2}-2y_{6}^{3}\,,$$ $$\displaystyle 2y_{2}^{3}y_{5}^{2}-y_{2}^{2}y_{3}^{2}y_{7}+y_{2}^{2}y_{7}y_{8}+% 2y_{2}y_{3}y_{7}^{2}-2y_{7}^{3}\,,$$ $$\displaystyle y_{1}y_{10}-y_{4}y_{6}\,,y_{10}y_{2}-y_{4}y_{7}\,,y_{10}y_{4}-y_% {6}y_{7}\,,$$ $$\displaystyle y_{1}y_{3}^{2}y_{4}-y_{1}y_{4}y_{8}-2y_{1}y_{5}y_{9}-2y_{3}y_{4}% y_{6}+2y_{10}y_{6}\,,$$ $$\displaystyle y_{2}y_{3}^{2}y_{4}-y_{2}y_{4}y_{8}-2y_{2}y_{5}y_{9}-2y_{3}y_{4}% y_{7}+2y_{10}y_{7}\,,-y_{4}^{2}y_{5}+y_{10}y_{9}\,,$$ $$\displaystyle-2y_{1}y_{2}y_{5}^{2}+y_{3}^{2}y_{4}^{2}-2y_{3}y_{6}y_{7}-y_{4}^{% 2}y_{8}+2y_{10}^{2}\,,y_{1}y_{11}-y_{6}^{2}\,,y_{11}y_{2}-y_{6}y_{7}\,,$$ $$\displaystyle y_{1}y_{3}^{2}y_{4}-y_{1}y_{4}y_{8}-2y_{1}y_{5}y_{9}-2y_{3}y_{4}% y_{6}+2y_{11}y_{4}\,,$$ $$\displaystyle-2y_{1}^{2}y_{5}^{2}+y_{1}y_{3}^{2}y_{6}-y_{1}y_{6}y_{8}-2y_{3}y_% {6}^{2}+2y_{11}y_{6}\,,$$ 
$$\displaystyle-2y_{1}y_{2}y_{5}^{2}+y_{3}^{2}y_{4}^{2}-2y_{3}y_{6}y_{7}-y_{4}^{% 2}y_{8}+2y_{11}y_{7}\,,\ -y_{4}y_{5}y_{6}+y_{11}y_{9}\,,$$ $$\displaystyle y_{1}y_{3}^{3}y_{4}-y_{1}y_{3}y_{4}y_{8}-2y_{1}y_{3}y_{5}y_{9}-2% y_{1}y_{4}y_{5}^{2}-y_{3}^{2}y_{4}y_{6}-y_{4}y_{6}y_{8}+2y_{10}y_{11}\,,$$ $$\displaystyle-2y_{1}^{2}y_{3}y_{5}^{2}+y_{1}y_{3}^{3}y_{6}-y_{1}y_{3}y_{6}y_{8% }-2y_{1}y_{5}^{2}y_{6}-y_{3}^{2}y_{6}^{2}-y_{6}^{2}y_{8}+2y_{11}^{2}\,,$$ $$\displaystyle y_{1}y_{12}-y_{6}y_{7},\,y_{12}y_{2}-y_{7}^{2},\,y_{2}y_{3}^{2}y% _{4}-y_{2}y_{4}y_{8}-2y_{2}y_{5}y_{9}-2y_{3}y_{4}y_{7}+2y_{12}y_{4},$$ $$\displaystyle-2y_{1}y_{2}y_{5}^{2}+y_{3}^{2}y_{4}^{2}-2y_{3}y_{6}y_{7}-y_{4}^{% 2}y_{8}+2y_{12}y_{6}\,,$$ $$\displaystyle-2y_{2}^{2}y_{5}^{2}+y_{2}y_{3}^{2}y_{7}-y_{2}y_{7}y_{8}-2y_{3}y_% {7}^{2}+2y_{12}y_{7}\,,-y_{4}y_{5}y_{7}+y_{12}y_{9}\,,$$ $$\displaystyle y_{2}y_{3}^{3}y_{4}-y_{2}y_{3}y_{4}y_{8}-2y_{2}y_{3}y_{5}y_{9}-2% y_{2}y_{4}y_{5}^{2}-y_{3}^{2}y_{4}y_{7}-y_{4}y_{7}y_{8}+2y_{10}y_{12}\,,$$ $$\displaystyle-2y_{1}y_{2}y_{3}y_{5}^{2}+y_{3}^{3}y_{4}^{2}-y_{3}^{2}y_{6}y_{7}% -y_{3}y_{4}^{2}y_{8}-2y_{4}^{2}y_{5}^{2}-y_{6}y_{7}y_{8}+2y_{11}y_{12}\,,$$ $$\displaystyle-2y_{2}^{2}y_{3}y_{5}^{2}+y_{2}y_{3}^{3}y_{7}-y_{2}y_{3}y_{7}y_{8% }-2y_{2}y_{5}^{2}y_{7}-y_{3}^{2}y_{7}^{2}-y_{7}^{2}y_{8}+2y_{12}^{2}\,\rangle.$$ Both Maple and Mathematica compute this basis in a few seconds on a PC. The ideal $I_{\mathcal{P}}$ has dimension 5. It is computed by the command HilbertDimension in Maple or using the code available on the Mathematica Stack Exchange Web page http://mathematica.stackexchange.com/questions/37015/. 
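The elimination computation can be reproduced in miniature (our sketch, using SymPy instead of the Maple/Mathematica computation described above): for the four-element subset $\{C^{200},C^{020},C^{111},C^{202}\}$ of the restricted invariants, with a local numbering $y_{1},\dots,y_{4}$, the same procedure recovers the expected syzygy $y_{2}y_{4}-y_{3}^{2}$.

```python
import sympy as sp

a, b, g = sp.symbols('alpha beta gamma')
y1, y2, y3, y4 = sp.symbols('y1:5')

# restrictions of C200, C020, C111, C202 to W_X (cf. Sec. 4.2)
P = [a**2, b**2, a*b*g, a**2 * g**2]

# Groebner basis of <P_i - y_i> in lex order with the W_X-variables
# preceding the y-variables; intersecting with R[y] gives the syzygy ideal
G = sp.groebner([p - y for p, y in zip(P, [y1, y2, y3, y4])],
                a, b, g, y1, y2, y3, y4, order='lex')
syzygies = [q for q in G.exprs if not q.has(a, b, g)]

assert syzygies                           # the elimination is nontrivial
assert G.contains(y2 * y4 - y3**2)        # the expected relation is in J
# every syzygy indeed vanishes after substituting the invariants back
subs = dict(zip([y1, y2, y3, y4], P))
assert all(s.subs(subs).expand() == 0 for s in syzygies)
```

The full 12-invariant computation proceeds in exactly the same way, only over the larger ring $\mathbb{R}[w_{1},\dots,w_{7},y_{1},\dots,y_{12}]$.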
If one uses the maximal independent set of variables $\{\,y_{1},y_{2},y_{3},y_{4},y_{5}\,\}$ as parameters, then, due to the above list of algebraic relations, the other variables are easily expressible in terms of the five parametric variables by applying the command Solve in Maple or Mathematica: $$\displaystyle y_{6}=\frac{y_{4}^{2}}{y_{2}}\,,\ \ y_{7}=\frac{y_{4}^{2}}{y_{1}}\,,$$ $$\displaystyle y_{8}=\frac{-2y_{1}^{3}y_{2}^{3}y_{5}^{2}+y_{1}^{2}y_{2}^{2}y_{3}^{2}y_{4}^{2}-2y_{1}y_{2}y_{3}y_{4}^{4}+2y_{4}^{6}}{y_{1}^{2}y_{2}^{2}y_{4}^{2}}\,,$$ (18) $$\displaystyle y_{9}=\frac{y_{1}y_{2}y_{5}}{y_{4}}\,,\ \ y_{10}=\frac{y_{4}^{3}}{y_{1}y_{2}}\,,\ \ y_{11}=\frac{y_{4}^{4}}{y_{1}y_{2}^{2}}\,,\ \ y_{12}=\frac{y_{4}^{4}}{y_{1}^{2}y_{2}}\,.$$ This structure of the ring ${\mathbb{R}[y_{1},y_{2}\dots y_{12}]}/{I_{\mathcal{P}}}$ indicates that the polynomial invariants obtained from the rational relations (18), by their conversion into polynomials, form a Gröbner basis of the syzygy ideal $I_{\mathcal{P}}$ in the ring of polynomials in $y_{6},\ldots,y_{12}$ over the parametric coefficient field $\mathbb{R}(y_{1},\ldots,y_{5})$ of rational functions. The determined properties of the ring $\mathbb{R}[\mathcal{P}_{1},\dots,\mathcal{P}_{12}]$ are in partial agreement with the initial structure of $\mathbb{R}[W]^{SU(2)\times SU(2)}$. Indeed, the five Quesne polynomials $C^{(200)},C^{(020)},C^{(002)},C^{(111)}$ and $C^{(003)}$, which represent a subset of the algebraically independent invariants, survive the restriction to the subspace $W_{X}$ and correspond to the variables $y_{1},y_{2},y_{3},y_{4},y_{5}$, which are independent modulo $I_{\mathcal{P}}$. The restrictions of the other Quesne invariants correspond to variables which are dependent modulo $I_{\mathcal{P}}$.
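Both statements — the rational expressions for the dependent variables $y_{6},y_{7},y_{9},\ldots,y_{12}$ and the algebraic independence of the five parametric ones — are easy to confirm symbolically. The sketch below (ours, using SymPy) substitutes the explicit restricted invariants of Sec. 4.2 and certifies independence by the Jacobian criterion, which in characteristic zero replaces the expensive Gröbner-based check of Sec. 3.

```python
import sympy as sp

al, be, ga, c11, c12, c21, c22 = sp.symbols('alpha beta gamma c11 c12 c21 c22')
d = c11 * c22 - c12 * c21                        # the 2x2 block determinant

# restricted invariants of Sec. 4.2 in the numbering y1..y12 of (17)
y1, y2 = al**2, be**2                            # C200, C020
y3 = c11**2 + c12**2 + c21**2 + c22**2 + ga**2   # C002
y4, y5 = al * be * ga, ga * d                    # C111, C003
y6, y7 = al**2 * ga**2, be**2 * ga**2            # C202, C022
y9 = al * be * d                                 # C112
y10, y11, y12 = al * be * ga**3, al**2 * ga**4, be**2 * ga**4  # C113, C204, C024

# the dependent variables are rational in y1..y5
for lhs, rhs in [(y6, y4**2 / y2), (y7, y4**2 / y1),
                 (y9, y1 * y2 * y5 / y4), (y10, y4**3 / (y1 * y2)),
                 (y11, y4**4 / (y1 * y2**2)), (y12, y4**4 / (y1**2 * y2))]:
    assert sp.cancel(lhs - rhs) == 0

# the five parametric invariants are algebraically independent:
# their Jacobian has full rank already at a single sample point
J = sp.Matrix([y1, y2, y3, y4, y5]).jacobian([al, be, ga, c11, c12, c21, c22])
point = {al: 1, be: 1, ga: 1, c11: 1, c12: 0, c21: 0, c22: 2}
assert J.subs(point).rank() == 5
```

A full-rank Jacobian at one point suffices because rank deficiency would have to hold on all of $W_{X}$ for algebraically dependent polynomials.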
4.4 Mapping ${\mathbb{R}[\mathcal{P}]}$ to a freely generating ring Now we establish an injective homomorphism between the ring $\mathbb{R}[\mathcal{P}]$ and a certain subring of the coordinate ring $\mathbb{R}[W_{X}]$ which is freely generated by polynomials of degrees 1,1,1,2,2. The latter subring is defined as follows. Consider the set of polynomials on $W_{X}$ $$f_{1}:=\gamma\,,\quad g_{1}:=x_{3}+y_{3},\quad g_{2}:=x_{3}-y_{3},\ \quad g_{3}:=x_{1}^{2}+x_{2}^{2}\,,\quad g_{4}:=y_{1}^{2}+y_{2}^{2}\,,$$ (19) where the following variables are introduced $$\displaystyle x_{1}=c_{11}-c_{22}\,,\qquad y_{1}=c_{11}+c_{22}\,,$$ $$\displaystyle x_{2}=c_{12}+c_{21}\,,\qquad y_{2}=c_{12}-c_{21}\,,$$ (20) $$\displaystyle x_{3}=\alpha+\beta\,,\qquad\quad\ y_{3}=\beta-\alpha$$ It turns out that all twelve Quesne polynomials $\mathcal{P}$ in (15) can be expanded over these 5 algebraically independent polynomials. The explicit form of these expansions for all non-vanishing Quesne polynomials up to order six is given in Table 1. Let us now introduce the ring $\mathbb{R}[f_{1},g_{1},g_{2},g_{3},g_{4}]$, which is generated by the set (19). The relation between the polynomials of $\mathcal{P}$ in the variables (19) and Quesne’s invariants, as shown in Table 1, defines the mapping between the quotient ring of $SU(2)\times SU(2)\--$invariant polynomials and the ring $\mathbb{R}[f_{1},g_{1},g_{2},g_{3},g_{4}]$ $$\phi:\ \mathbb{R}[y_{1},y_{2},\dots,y_{12}]/I_{\mathcal{P}}\longrightarrow\mathbb{R}[f_{1},g_{1},g_{2},g_{3},g_{4}]\,,$$ (21) which is an injective ring homomorphism. Indeed, the mapping (21) evidently satisfies $$\phi(p+q)=\phi(p)+\phi(q)\,,\quad\phi(pq)=\phi(p)\phi(q)\,,$$ and $$\phi(p)-\phi(q)=0\quad\text{if and only if}\quad p-q\in I_{\mathcal{P}}\,.$$ However, (21) is not an isomorphism.
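Although Table 1 is not reproduced here, representative entries are easy to reconstruct; the expansions below are our own and are verified symbolically against the definitions (19)–(20).

```python
import sympy as sp

al, be, ga, c11, c12, c21, c22 = sp.symbols('alpha beta gamma c11 c12 c21 c22')

# the variables (20) and the free generators (19)
x1, y1v = c11 - c22, c11 + c22
x2, y2v = c12 + c21, c12 - c21
x3, y3v = al + be, be - al
f1 = ga
g1, g2 = x3 + y3v, x3 - y3v               # = 2*beta and 2*alpha
g3, g4 = x1**2 + x2**2, y1v**2 + y2v**2

# sample expansions of restricted Quesne invariants over f1, g1..g4
checks = {
    al**2:                        g2**2 / 4,              # C200
    be**2:                        g1**2 / 4,              # C020
    al * be * ga:                 f1 * g1 * g2 / 4,       # C111
    c11**2 + c12**2 + c21**2 + c22**2 + ga**2:
                                  f1**2 + (g3 + g4) / 2,  # C002
    ga * (c11 * c22 - c12 * c21): f1 * (g4 - g3) / 4,     # C003
}
for inv, expansion in checks.items():
    assert sp.expand(inv - expansion) == 0
```

In particular, $C^{002}=f_{1}^{2}+(g_{3}+g_{4})/2$ and $C^{003}=f_{1}(g_{4}-g_{3})/4$, which already shows that the images of the degree-$\geq 2$ invariants never produce $f_{1},g_{1},g_{2}$ alone.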
The linear invariants $f_{1},g_{1},g_{2}$ have no preimages in $\mathbb{R}[\mathcal{P}]$ since the polynomial invariants (15) have degree $\geq 2$. 5 Concluding remarks We conclude with a group-theoretical explanation of the algebraic results obtained in the previous section. Note that the generic action of the $SU(4)$ group on the subspace $\mathfrak{P}_{X}\subset\mathfrak{P}$ moves its elements out of $\mathfrak{P}_{X}\,.$ But one can point out the 7-dimensional subgroup $G_{X}\subset SU(4)$ that preserves the form of the $X-$states. $\bullet$  Invariance of the $X-$states $\bullet$ The 7-parametric subgroup $G_{X}\subset SU(4)$ that preserves $\mathfrak{P}_{X}$, i.e., $$G_{X}\varrho_{X}G_{X}^{\dagger}\in\mathfrak{P}_{X}\,,$$ can be easily constructed. Let us fix the following elements of the $\mathfrak{su}(4)$ algebra (the choice of such a subgroup is not unique; there is a 15-fold degeneracy: one can consider 15 different sets of seven generators that carry $X-$states into each other [18]): $$\displaystyle e_{1}=\sigma_{3}\otimes\sigma_{3},\quad e_{2}=\sigma_{2}\otimes\sigma_{1},\quad e_{3}=I\otimes\sigma_{3},\quad e_{4}=-\sigma_{2}\otimes\sigma_{2},$$ $$\displaystyle e_{5}=\sigma_{1}\otimes\sigma_{2},\quad e_{6}=\sigma_{3}\otimes I,\quad e_{7}=\sigma_{1}\otimes\sigma_{1}.$$ The set $(e_{1},e_{2},\dots,e_{7})$ is closed under multiplication, i.e., these elements form a basis of the subalgebra $\mathfrak{g}_{X}:=\mathfrak{su(2)}\oplus\mathfrak{u(1)}\oplus\mathfrak{su(2)}\subset\mathfrak{su(4)}\,.$ Exponentiation of the algebra $\mathfrak{g}_{X}$ gives the subgroup $$G_{X}:=\exp(\imath\mathfrak{g}_{X})\subset SU(4)\,,$$ which is the invariance group of the $X-$states space $\mathfrak{P}_{X}$.
Writing the generic element of the algebra $\mathfrak{g}_{X}$ as $\imath\sum_{i=1}^{7}\omega_{i}e_{i}\,,$ one can verify that an arbitrary element of $G_{X}$ can be represented in the following block-diagonal form $$G_{X}=P_{\pi}\left(\begin{array}[]{c|c}{e^{-i{\omega_{1}}}SU(2)}&0\\ \hline 0&{e^{i{\omega_{1}}}SU(2)^{\prime}}\\ \end{array}\right)P_{\pi}\,,$$ (22) where the two copies of $SU(2)$ are parametrized as follows $$\displaystyle SU(2)=\exp{\left[i\left(\omega_{4}+\omega_{7}\right)\sigma_{1}+i\left(\omega_{2}+\omega_{5}\right)\sigma_{2}+i\left(\omega_{3}+\omega_{6}\right)\sigma_{3}\right]}\,,$$ $$\displaystyle SU(2)^{\prime}=\exp{\left[i\left(-\omega_{4}+\omega_{7}\right)\sigma_{1}+i\left(-\omega_{2}+\omega_{5}\right)\sigma_{2}+i\left(\omega_{3}-\omega_{6}\right)\sigma_{3}\right]}\,.$$ Having the representation (22), one can find the transformation laws for the elements of the $X-$matrices. $\bullet$  Action of $G_{X}$ on the $X-$states $\bullet$ First of all, the group $G_{X}$ leaves the parameter $c_{33}$ unchanged.
Secondly, according to (22), the adjoint action of the group $G_{X}$ induces transformations of the Fano parameters which are unitarily equivalent to the following block-diagonal action of two copies of $SO(3)$ on a pair of 3-dimensional vectors in $W$ with the coordinates (20): $$\left(\begin{array}[]{c}x_{1}^{\prime}\\ x_{2}^{\prime}\\ x_{3}^{\prime}\\ y_{1}^{\prime}\\ y_{2}^{\prime}\\ y_{3}^{\prime}\\ \end{array}\right)=\left(\begin{array}[]{c|c}&\\ \mbox{\Huge{SO(3)}}&\mbox{\Huge{O}}\\ &\\ \hline&\\ \mbox{\Huge{O}}&\mbox{\Huge{SO(3)}}^{\prime}\\ \end{array}\right)\left(\begin{array}[]{c}x_{1}\\ x_{2}\\ x_{3}\\ y_{1}\\ y_{2}\\ y_{3}\end{array}\right)\,.$$ Thus we conclude that there are 3 independent $G_{X}$-polynomial invariants $$f_{1}:=c_{33}\,,\quad f_{2}:=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}\,,\quad f_{3}:=y_{1}^{2}+y_{2}^{2}+y_{3}^{2}\,.$$ Similarly, the local transformations of the $X-$states can be identified and the corresponding local unitary polynomial invariants can be determined. $\bullet$  Local subgroup of $G_{X}$ $\bullet$ One can easily verify that the local subgroup of $G_{X}$ is $$P_{\pi}\exp(\imath\ \frac{\varphi_{1}}{2}\sigma_{3})\times\exp(\imath\frac{\varphi_{2}}{2}\sigma_{3})P_{\pi}\subset G_{X}\,.$$ Its action induces two independent $SO(2)$ rotations of the two planar vectors  $\boldsymbol{x}:=(x_{1},\ x_{2})$ and $\boldsymbol{y}:=\left(y_{1},\ y_{2}\right)$ by the angles $\varphi_{1}+\varphi_{2}$ and $\varphi_{1}-\varphi_{2}\,,$ respectively. Therefore, the five polynomials (19), used in the previous section for the expansion of the $SU(2)\times SU(2)\--$invariants, represent algebraically independent local invariants for the $X-$states.
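The invariance of the quadratic generators under this local action is immediate to confirm numerically (our sketch, using NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2)                  # the planar vector (x1, x2)
y = rng.normal(size=2)                  # the planar vector (y1, y2)
phi1, phi2 = rng.uniform(0, 2 * np.pi, size=2)

def rot(t):
    """SO(2) rotation by the angle t."""
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

# the local subgroup rotates the two vectors independently,
# by the angles phi1 + phi2 and phi1 - phi2
xp, yp = rot(phi1 + phi2) @ x, rot(phi1 - phi2) @ y

g3, g4 = x @ x, y @ y                   # the quadratic invariants of (19)
assert np.isclose(xp @ xp, g3) and np.isclose(yp @ yp, g4)
```

A mixed quantity such as $\boldsymbol{x}\cdot\boldsymbol{y}$ is not preserved for generic angles, consistent with $g_{3}$ and $g_{4}$ (together with the linear $f_{1},g_{1},g_{2}$) exhausting the generators.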
Concluding, our analysis of the 2-qubit $X-$states space shows the existence of two freely generated polynomial rings, one related to the global $G_{X}$-invariance $${\mathbb{R}}[c_{33},\boldsymbol{x},\boldsymbol{y}]^{G_{X}}={\mathbb{R}}[f_{1},f_{2},f_{3}]$$ and another one, corresponding to the local unitary symmetry of the $X\--$states, $${\mathbb{R}}[c_{33},\boldsymbol{x},\boldsymbol{y}]^{SO(2)\times SO(2)}={\mathbb{R}}[f_{1},g_{1},g_{2},g_{3},g_{4}]$$ generated by the linear invariants $f_{1},g_{1},g_{2}$ together with the quadratic invariants $g_{3},g_{4}$ of two planar vectors under the linear action of the $SO(2)\times SO(2)$ group. Moreover, the injective homomorphism of the ring of local unitary polynomial invariants ${\mathbb{R}}[W]^{SU(2)\times SU(2)}$, restricted to the subspace of 2-qubit $X-$states, to the above introduced freely generated invariant ring ${\mathbb{R}}[W_{X}]^{SO(2)\times SO(2)}$ has been established. Acknowledgments The authors thank S. Evlakhov for his help in some related computations. The contribution of the first author (V.G.) was partially supported by the Russian Foundation for Basic Research (grant 16-01-00080). References [1] Ting Yu and J. H. Eberly. Evolution from entanglement to decoherence of bipartite mixed ‘‘X’’ states. Quantum Information and Computation 7, 459–468 (2007). [2] M. A. Nielsen and I. L. Chuang. Quantum Computation and Quantum Information. Cambridge University Press (2011). [3] R. F. Werner. Quantum states with Einstein-Podolsky-Rosen correlations admitting a hidden-variable model. Phys. Rev. A 40(8), 4277–4281 (1989). [4] M. Horodecki and P. Horodecki. Reduction criterion of separability and limits for a class of distillation protocols. Phys. Rev. A 59, 4206–4216 (1999). [5] S. Ishizaka and T. Hiroshima. Maximally entangled mixed states under nonlocal unitary operations in two qubits. Phys. Rev. A 62, 22310 (4pp) (2000). [6] F. Verstraete, K. Audenaert, T. D. Bie and B. D. Moor.
Maximally entangled mixed states of two qubits. Phys. Rev. A 64, 012316 (6pp) (2001). [7] P. Mendonca, M. Marchiolli and D. Galetti. Entanglement universality of two-qubit $X$-states. Annals of Physics 351, 79–103 (2014). [8] C. Quesne. $SU(2)\times SU(2)$ scalars in the enveloping algebra of $SU(4)$. J. Math. Phys. 17, 1452–1467 (1976). [9] M. Grassl, M. Rotteler and T. Beth. Computing local invariants of qubit quantum systems. Phys. Rev. A 58, 1833–1839 (1998). [10] R. C. King, T. A. Welsh and P. D. Jarvis. The mixed two-qubit system and the structure of its ring of local invariants. J. Phys. A: Math. Theor. 40, 10083–10108 (2007). [11] V. Gerdt, A. Khvedelidze and Yu. Palii. On the ring of local polynomial invariants for a pair of entangled qubits. Zapiski POMI 373, 104–123 (2009). [12] D. Cox, J. Little and D. O'Shea. Ideals, Varieties and Algorithms. 3rd edition, Springer-Verlag, New York (2007). [13] U. Fano. Description of States in Quantum Mechanics by Density Matrix and Operator Techniques. Reviews of Modern Physics 29, 74–93 (1957). [14] V. L. Popov and E. B. Vinberg. Invariant Theory. In: A. N. Parshin and I. R. Shafarevich, eds., Algebraic Geometry IV, Encyclopedia of Mathematical Sciences 55, Springer-Verlag, Berlin, Heidelberg (1994). [15] H. Derksen and G. Kemper. Computational Invariant Theory. Springer-Verlag, Berlin, Heidelberg (2002). [16] B. Sturmfels. Algorithms in Invariant Theory. Texts and Monographs in Symbolic Computation, 2nd edition, Springer, Wien/New York (2008). [17] H. Derksen and G. Kemper. Computing invariants of algebraic groups in arbitrary characteristic. Adv. Math. 217, 2089–2129 (2008). [18] A. R. P. Rau. Algebraic characterization of $X$-states in quantum information. J. Phys. A: Math. Gen. 42, 412002 (7pp) (2009).
Superdiffusive dispersals impart the geometry of underlying random walks V. Zaburdaev Max Planck Institute for the Physics of Complex Systems, Nöthnitzer Strasse 38, D-01187 Dresden, Germany Institute of Supercomputing Technologies, Lobachevsky State University of Nizhny Novgorod, 603140 N. Novgorod, Russia    I. Fouxon Department of Physics, Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat-Gan, 52900, Israel    S. Denisov Department of Applied Mathematics, Lobachevsky State University of Nizhny Novgorod, 603140 N. Novgorod, Russia Sumy State University, Rimsky-Korsakov Street 2, 40007 Sumy, Ukraine Institute of Physics, University of Augsburg, Universitätsstrasse 1, D-86135 Augsburg, Germany    E. Barkai Department of Physics, Institute of Nanotechnology and Advanced Materials, Bar-Ilan University, Ramat-Gan, 52900, Israel Abstract It is now recognised that a variety of real-life phenomena, ranging from the diffusion of cold atoms to the motion of humans, exhibit dispersal faster than normal diffusion. The Lévy walk is a model that has excelled in describing such superdiffusive behavior, albeit in one dimension. Here we show that, in contrast to standard random walks, the microscopic geometry of planar superdiffusive Lévy walks is imprinted in the asymptotic distribution of the walkers. The geometry of the underlying walk can be inferred from trajectories of the walkers by calculating the analogue of the Pearson coefficient. pacs: 05.40.Fb,02.50.Ey Introduction. The Lévy walk (LW) model yosi1 ; yosi2 ; rmp was developed to describe spreading phenomena that do not fit the paradigm of Brownian diffusion beyond . While still looking like a random walk, see Fig. 1, but with a very broad distribution of excursion lengths, the corresponding processes exhibit dispersal faster than in the case of normal diffusion. 
Conventionally, this difference is quantified with the mean squared displacement (MSD), $\langle r^{2}(t)\rangle\propto t^{\alpha}$, and the regime with $\alpha>1$ is called super-diffusion. Examples of such systems range from cold atoms moving in dissipative optical lattices sagi2012 to T cells migrating in the brain tissue Harris2012 . Most of the existing theoretical results, however, were derived for one-dimensional LW processes rmp . In contrast, real-life phenomena – biological motility (from bacteria taktikos to humans gadza and autonomous robots robot1 ; swarms ), animal foraging mendes ; foraging and search search – happen in two dimensions. Somewhat surprisingly, generalizations of Lévy walks to two dimensions are still virtually unexplored. In this work we extend the concept of LWs to two dimensions. Our main finding is that the microscopic geometry of planar Lévy walks reveals itself in the shape of the asymptotic probability density function (PDF) $P(\textbf{r},t)$ of finding a particle at position r at time $t$ after it was launched from the origin. This is in sharp contrast to the standard 2D random walks, where, by virtue of the central limit theorem (CLT) gnedenko , the asymptotic PDFs do not depend on the geometry of the walks and have the universal form of the two-dimensional Gaussian distribution spitzer ; sokolov . Models. We begin with the core of the Lévy walk concept yosi1 ; yosi2 : A particle performs ballistic moves with constant speed, alternated with instantaneous re-orientation events, and the length of the moves is a random variable with a power-law distribution. Because of the constant speed $v_{0}$, the length $l_{i}$ and duration $\tau_{i}$ of the $i$-th move are linearly coupled, $l_{i}=v_{0}\tau_{i}$. 
As a result, the model can be equally well defined by the distribution of ballistic move (flight) times $$\psi(\tau)=\frac{1}{\tau_{0}}\frac{\gamma}{(1+\tau/\tau_{0})^{1+\gamma}},\quad\tau_{0},\gamma>0.$$ (1) Depending on the value of $\gamma$, it can lead to dispersal with $\alpha=1$, typical of normal diffusion ($\gamma>2$), or to very long excursions and fast dispersal with $1<\alpha\leq 2$ in the case of super-diffusion ($0<\gamma<2$). At each moment of time $t$ the finite speed sets a ballistic front beyond which there are no particles. Below we consider three intuitive models of two-dimensional superdiffusive dispersal. a) The simplest way to obtain a two-dimensional Lévy walk from the one-dimensional one is to assume that the motions along each axis, $x$ and $y$, are identical and independent one-dimensional LW processes, as shown in Fig. 1a. The two-dimensional PDF $P(\mathbf{r},t)$, $\mathbf{r}(t)=\{x(t),y(t)\}$, of this product model is given by the product of two one-dimensional LW PDFs, $P_{\text{prod}}(\mathbf{r},t)=P_{\text{LW}}(x,t)\cdot P_{\text{LW}}(y,t)$. On the microscopic scale, each ballistic event corresponds to motion along either the diagonal or the anti-diagonal. Every re-orientation only partially erases the memory of the last ballistic flight: while the direction of motion along one axis may change, the direction along the other axis almost surely remains the same. The ballistic front has the shape of a square with borders given by $|x|=|y|=v_{0}t$. b) In the XY-model, a particle is allowed to move only along one of the axes at a time. A particle chooses a random flight time $\tau$ from Eq. (1) and one out of four directions. Then it moves with a constant speed $\upsilon_{0}$ along the chosen direction. After the flight time has elapsed, a new random direction and a new flight time are chosen. This process is sketched in Fig. 1b. The ballistic front is a square defined by the equation $|x|+|y|=v_{0}t$. 
c) The uniform model follows the original definition by Pearson Pearson1905 . A particle chooses a random direction, parametrized by the angle $\phi$ uniformly distributed in the interval $[0,2\pi]$, and then moves ballistically for a chosen flight time. The direction of the next flight is chosen randomly and independently of the direction of the elapsed flight. The corresponding trajectory is sketched in Fig. 1c. The ballistic front of the model is a circle, $|\mathbf{r}|=v_{0}t$. Governing equations. We now derive equations describing the density of particles for the $XY$ and uniform models. The following two coupled equations govern the space-time evolution of the PDF rmp : $$\nu(\mathbf{r},t)=\int_{0}^{t}{\rm d}\tau\int{\rm d}\mathbf{v}\,\nu(\mathbf{r}-\mathbf{v}\tau,t-\tau)\psi(\tau)h(\mathbf{v})+\delta(\mathbf{r})\delta(t),$$ $$P(\mathbf{r},t)=\int_{0}^{t}{\rm d}\tau\int{\rm d}\mathbf{v}\,\nu(\mathbf{r}-\mathbf{v}\tau,t-\tau)\Psi(\tau)h(\mathbf{v}).$$ (2) The first equation describes the events of velocity reorientation [marked by $\circ$ in Fig. 1(b-c)], where the density $\nu(\mathbf{r},t)$ allows us to count the number of particles, $\nu(\mathbf{r},t)d\mathbf{r}dt$, whose flights ended in the interval $[\mathbf{r},\mathbf{r}+d\mathbf{r}]$ during the time interval $[t,t+dt]$. The velocity at each reorientation event is chosen from the model-specific velocity distribution $h(\mathbf{v})$ and is statistically independent of the flight time. The second equation relates the events of velocity reorientation to the density of the particles. Here $\Psi(\tau)$ is the probability to remain in flight for a time $\tau$, $\Psi(\tau)=1-\int_{0}^{\tau}\psi(t^{\prime})\rm{d}t^{\prime}$. 
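Both ingredients of the models above are easy to simulate: flight times from Eq. (1) can be sampled by inverting the cumulative distribution, $\tau=\tau_{0}\,(u^{-1/\gamma}-1)$ with $u$ uniform in $(0,1)$, and the uniform model then only needs an isotropic angle per flight. A minimal sketch (our illustration; the function names are not from the paper) propagates a single uniform-model walker, truncating the flight in progress at the observation time:

```python
import numpy as np

def flight_time(gamma, tau0, rng):
    """Inverse-CDF sample of psi(tau) = (gamma/tau0)(1 + tau/tau0)^(-1-gamma)."""
    return tau0 * (rng.uniform() ** (-1.0 / gamma) - 1.0)

def uniform_levy_walk(t_max, gamma=1.5, tau0=1.0, v0=1.0, rng=None):
    """Position of a single Pearson-type (uniform-model) Levy walker at time t_max."""
    if rng is None:
        rng = np.random.default_rng()
    t = x = y = 0.0
    while t < t_max:
        tau = min(flight_time(gamma, tau0, rng), t_max - t)  # truncate the last flight
        phi = rng.uniform(0.0, 2.0 * np.pi)                  # isotropic direction
        x += v0 * tau * np.cos(phi)
        y += v0 * tau * np.sin(phi)
        t += tau
    return x, y

rng = np.random.default_rng(1)
x, y = uniform_levy_walk(t_max=100.0, rng=rng)
print(np.hypot(x, y))   # always <= v0 * t_max: inside the circular ballistic front
```

The $XY$-model is obtained by replacing the angle draw with a choice among the four axis directions, and the product model by running two independent one-dimensional walks.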
The formal solution of the transport equations can be found via the combined Fourier-Laplace transform si : $$P(\mathbf{k},s)=\frac{\int{\rm d}\mathbf{v}\,\Psi(s+i\mathbf{k}\cdot\mathbf{v})h(\mathbf{v})}{1-\int{\rm d}\mathbf{v}\,\psi(s+i\mathbf{k}\cdot\mathbf{v})h(\mathbf{v})}$$ (3) This is a general answer for a random walk process in arbitrary dimensions with an arbitrary velocity distribution, where $\mathbf{k}$ and $s$ are the coordinates in Fourier and Laplace space corresponding to $\mathbf{r}$ and $t$, respectively (but not for the product model, which is described by two independent random walk processes). The microscopic geometry of the process is captured by $h(\mathbf{v})$. For the $XY$-model we have $h_{XY}(\mathbf{v})=[\delta(v_{y})\delta(|v_{x}|-v_{0})+\delta(v_{x})\delta(|v_{y}|-v_{0})]/4$, while for the uniform model it is $h_{\text{uniform}}(\mathbf{v})=\delta(|\mathbf{v}|-v_{0})/2\pi v_{0}$. The technical difficulty is to find the inverse transform of Eq. (3). We therefore employ the asymptotic analysis yosi1 ; yosi2 ; rmp to switch from the Fourier-Laplace representation to the space-time coordinates and analyze the model PDFs $P(\mathbf{r},t)$ in the limit of large $\mathbf{r}$ and $t$ si . In the diffusion limit, $\gamma>2$, the mean squared flight length is finite. In the large-time limit, the normalized covariance of the flight components in all three models is the identity matrix, and so the cores of their PDFs are governed by the vector-valued CLT van and have the universal Gaussian shape $P(\mathbf{r},t)\simeq\frac{1}{4\pi Dt}{\rm e}^{-\frac{\mathbf{r}^{2}}{4Dt}}$, where $D=v_{0}^{2}\tau_{0}/[2(\gamma-2)]$ (for the product model the velocity has to be rescaled to $v_{0}/\sqrt{2}$). For the outer parts of the PDFs some bounds can be obtained from the theory developed for sums of random variables with slowly decaying regular distributions borovkov . 
The difference between the three walks becomes sharp in the regime of sub-ballistic super-diffusion, $1<\gamma<2$. Figure 2 presents the PDFs of the three models obtained by sampling si over the corresponding stochastic processes for $t=10^{4}\gg\tau_{0}=1$. These results reveal a striking feature, namely, that the geometry of a random walk is imprinted in its PDF. This is most visible close to the ballistic fronts; however, as we show below, the non-universality is already present in the PDF cores. The PDF of the product model is the product of the PDFs of two identical one-dimensional LWs rmp . In the case of the $XY$-model, the central part of the propagator can be written in Fourier-Laplace space as $P_{XY}(k_{x},k_{y},s)\simeq(s+\frac{K_{\gamma}}{2}|k_{x}|^{\gamma}+\frac{K_{\gamma}}{2}|k_{y}|^{\gamma})^{-1}$, where $K_{\gamma}=\Gamma[2-\gamma]|\cos(\pi\gamma/2)|\tau_{0}^{\gamma-1}v_{0}^{\gamma}$ si . By inverting the Laplace transform, we also arrive at the product of two characteristic functions of one-dimensional Lévy distributions uchaikinzolotarev ; span1 : $P_{\text{XY}}(k_{x},k_{y},t)\simeq\text{e}^{-tK_{\gamma}|k_{x}|^{\gamma}/2}\text{e}^{-tK_{\gamma}|k_{y}|^{\gamma}/2}$. In this case the spreading of the particles along each axis happens twice as slowly (note the factor $1/2$ in the exponent) as in the one-dimensional case; each excursion along an axis acts as a trap for the motion along the adjacent axis, thus reducing the characteristic time of the dispersal process by a factor of $2$. As a result, the bulk PDF of the $XY$-model is similar to that of the product model after the velocity rescaling $\tilde{v}_{0}=v_{0}/2^{1/\gamma}$. This explains why on macroscopic scales the trajectory of the product model, see Fig. 1e, looks similar to that of the $XY$-model. The difference between the PDFs of these two models appears in the outer parts of the distributions, see Figs. 
2a,b; it cannot be resolved with the asymptotic analysis, which addresses only the central cores of the PDFs. The PDF of the $XY$-model has a cross-like structure with sharp peaks at the ends, see Fig. 3a. The appearance of these Gothic-like 'flying buttresses' stone , capped with 'pinnacles', can be understood by analyzing the process near the ballistic fronts si . For the uniform model we obtain $P_{\text{uniform}}(\mathbf{r},t)\simeq\frac{1}{2\pi}\int\limits_{0}^{\infty}J_{0}(kr)\text{e}^{-t\widetilde{K}_{\gamma}k^{\gamma}}k{\rm d}k$, where $\widetilde{K}_{\gamma}=\tau_{0}^{\gamma-1}v_{0}^{\gamma}\sqrt{\pi}\Gamma[2-\gamma]/\Gamma\left[1+\gamma/2\right]\left|\Gamma\left[(1-\gamma)/2\right]\right|$, and $J_{0}(x)$ is the Bessel function of the first kind (see si for more details). In contrast to the product and $XY$-models, this is a radially symmetric function, which naturally follows from the microscopic isotropy of the walk. Mathematically, the expression above is a generalization of the Lévy distribution to two dimensions uchaikinzolotarev ; klages . From the physics point of view, however, it provides a generalization of the Einstein relation and relates the generalized diffusion constant $\widetilde{K}_{\gamma}$ to the physical parameters of the 2d process, $v_{0}$, $\tau_{0}$ and $\gamma$. In Fig. 3b we compare the simulation results for the PDF of the uniform model with the analytical expression above. The regime of ballistic diffusion occurs when the mean flight time diverges, $0<\gamma<1$ daniela ; madrigas . Long flights dominate the distribution of particles, and this causes the concentration of probability at the ballistic fronts. Since the latter are model specific, see Fig. 1, the difference in the microscopic schematization reveals itself in the PDFs even more clearly si . Pearson coefficient. The difference in the model PDFs can be quantified by looking at moments of the corresponding processes. 
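Before turning to the moments, note that the radially symmetric propagator of the uniform model quoted above is a Hankel transform of $\text{e}^{-t\widetilde{K}_{\gamma}k^{\gamma}}$ and can be evaluated by direct quadrature. The sketch below (our illustration; grid parameters are arbitrary) builds $J_{0}$ from its integral representation and, as a sanity check, recovers the 2D Gaussian when the stretching exponent is set to $2$ (a check of the transform itself, since $\widetilde{K}_{\gamma}$ is defined for $1<\gamma<2$):

```python
import numpy as np

def trap(y, dx):
    """Trapezoidal rule on a uniform grid (last axis)."""
    return (y[..., 1:] + y[..., :-1]).sum(axis=-1) * 0.5 * dx

def bessel_j0(x):
    """J0(x) from its integral representation (1/pi) int_0^pi cos(x sin th) dth."""
    th = np.linspace(0.0, np.pi, 1001)
    vals = np.cos(np.outer(np.atleast_1d(x), np.sin(th)))
    return trap(vals, th[1] - th[0]) / np.pi

def p_uniform(r, t, K, gamma, k_max=10.0, n_k=4001):
    """P(r,t) = (1/2pi) int_0^inf J0(k r) exp(-t K k^gamma) k dk, truncated at k_max."""
    k = np.linspace(0.0, k_max, n_k)
    integrand = bessel_j0(k * r) * np.exp(-t * K * k**gamma) * k
    return trap(integrand, k[1] - k[0]) / (2.0 * np.pi)

# Sanity check: for a stretching exponent of 2 the transform must give the 2D Gaussian.
r, t, K = 1.0, 1.0, 1.0
gauss = np.exp(-r**2 / (4.0 * t * K)) / (4.0 * np.pi * t * K)
print(p_uniform(r, t, K, gamma=2.0), gauss)   # agree to ~1e-5
```

With $1<\gamma<2$ the same routine produces the heavy-tailed 2D Lévy profile compared with simulations in Fig. 3b.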
The most common is the MSD, $\langle\mathbf{r}^{2}(t)\rangle=\int{\rm d}\mathbf{r}\,\,\mathbf{r}^{2}P(\mathbf{r},t)$. Remarkably, for the $XY$ and uniform models the MSD is the same as for the 1D Lévy walk with the distribution of flight times given by Eq. (1) si . The MSD therefore does not differentiate between the $XY$ and uniform random walks (and, if the velocity $v_{0}$ is not known a priori, the product random walk as well). Next are the fourth-order moments, including the cross-moment $\langle x^{2}(t)y^{2}(t)\rangle$. They can be evaluated analytically for all three models si . The ratio between the cross-moment and the product of the second moments, $PC(t)=\langle x^{2}(t)y^{2}(t)\rangle/\langle x^{2}(t)\rangle\langle y^{2}(t)\rangle$, is a scalar characteristic similar to the Pearson coefficient pears2 ; ken . In the asymptotic limit and in the most interesting regime of sub-ballistic super-diffusion, $1<\gamma<2$, this generalized Pearson coefficient equals $$PC(t)\!=\!\!\left\{\!\!\begin{array}[]{cc}1,&\text{product model}\\ \frac{\gamma\Gamma[4-\gamma]^{2}}{\Gamma[7-2\gamma]},&XY-\text{model}\\ \frac{(2-\gamma)^{2}(3-\gamma)^{2}}{2(4-\gamma)(5-\gamma)(\gamma-1)}(\frac{t}{\tau_{0}})^{\gamma-1}&\text{uniform model}\end{array}\right.$$ (4) The $PC$ parameter is distinctly different for the three processes: the product model has $PC(t)\equiv 1$, for the $XY$-model it is a constant smaller than one for any $\gamma\in\,]1,2[$ and does not depend on $\upsilon_{0}$ and $\tau_{0}$, while for the uniform model it diverges in the asymptotic limit as $t^{\gamma-1}$. Figure 3 presents the $PC(t=5\cdot 10^{5})$ of the $XY$ (c) and uniform (d) models obtained by sampling over $10^{14}$ stochastic realizations. We attribute the deviation of the numerical results for the $XY$-model from the analytical result, Eq. (4), near $\gamma\lesssim 2$ to slow convergence to the asymptotic limit $PC(t\rightarrow\infty)$ si . 
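The constant $XY$ branch of Eq. (4) is straightforward to tabulate; the sketch below (our illustration) evaluates it and reproduces the value $\approx 0.355$ used in the main text for $\gamma=1.38$, as well as $PC=1$ at the diffusive boundary $\gamma=2$:

```python
from math import gamma as G   # the Euler Gamma function, not the walk exponent

def pc_xy(g):
    """Asymptotic Pearson coefficient of the XY model, Eq. (4), for 1 < g <= 2."""
    return g * G(4.0 - g) ** 2 / G(7.0 - 2.0 * g)

print(pc_xy(1.38))   # ~0.355, the value quoted for the egg-crate potential at E = 4
print(pc_xy(2.0))    # 1.0: at the boundary to normal diffusion PC -> 1
```

The uniform-model branch grows as $(t/\tau_{0})^{\gamma-1}$ and thus has no such finite asymptotic value.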
$PC$ can be used to find how close a particular two-dimensional super-diffusive process is to each of the models. The value of $\gamma$ can be estimated from the MSD exponent $\alpha$, $\gamma=3-\alpha$. To test this idea we investigate a classical two-dimensional chaotic Hamiltonian system geisel ; egg which exhibits a super-diffusive LW-like dynamics egg ; beyond . In this system, a particle moves in a dissipationless egg-crate potential and, depending on its total energy, exhibits normal or super-diffusive dispersal si . It is reported in Ref. egg that for the energy $E=4$ the dispersal is strongly anomalous, while in Ref. geisel it is stated that the diffusion is normal, with $\alpha=1$, within the energy range $E\in[5,6]$. We sampled the system dynamics for two energy values, $E=4$ and $5.5$. The obtained MSD exponents are $1.62\pm 0.04$ and $1\pm 0.02$, respectively. We estimated the $PC(t)$ for the time $t=10^{5}$ and obtained the values $0.35$ and $0.997$. The analytical value of the $PC$ (4) for the $XY$-process with $\gamma=3-1.62=1.38$ is $0.355$. This $PC$ value thus suggests that we are witnessing a super-diffusive $XY$ Lévy walk. The numerically sampled PDF of the process si , see inset in Fig. 3c, confirms this conjecture. In contrast to the uniform model, the $PC$ parameters for the $XY$ and product models are not invariant with respect to rotations of the reference frame, $\{x^{\prime},y^{\prime}\}=\{x\cos\phi-y\sin\phi,x\sin\phi+y\cos\phi\}$. While in theory the frame can be aligned with the directions of maximal spread exhibited by an anisotropic particle density at long times, see Fig. 2a,b, it might not be so evident in real-life settings. The angular dependence of the $PC$ can be explored by rotating the reference frame by an angle $\phi\in[0,\pi/2]$, starting from some initial orientation, and calculating the dependence $PC[\phi]$. 
The result can then be compared to analytical predictions for the asymptotic limit, where the three models show different angular dependencies si . In addition, the time evolution of $PC[\phi]$ is quantitatively different for the product and $XY$-models and thus can be used to discriminate between the two processes. In the product model, the dependence $PC[\phi]$ changes with time qualitatively. For short times it reflects the diagonal ballistic motion of particles, and for longer times it attains the shape characteristic of the $XY$-model si ; an effect which we could already anticipate from inspecting the trajectories in Fig. 1d. In the $XY$-model the positions of the minima and maxima of $PC[\phi]$ are time-independent. Conclusion. We have considered three intuitive models of planar Lévy walks. Our main finding is that the geometry of a walk appears to be imprinted in the asymptotic distributions of the walking particles, both in the core of the distribution and in its tails. We also proposed a scalar characteristic which can be used to differentiate between the types of walks. Further analytical results can be obtained for arbitrary velocity distributions and dimensionality of the problem itz . For example, it is worthwhile to explore the connections between the underlying symmetries of 2D Hamiltonian potentials and the symmetries of the emerging LWs zas1 . The existing body of results on two-dimensional super-diffusive phenomena demonstrates that the three models we considered have potential applications. The spreading of cold atoms in a two-dimensional dissipative optical potential atoms is a good candidate for a realization of the product model. Lorentz billiards sinai ; bg1990 ; cristadoro reproduce the $XY$ Lévy walk with exponent $\gamma=2$. The uniform model is relevant to the problems of foraging, motility of microorganisms, and mobility of humans rmp ; mendes ; foraging ; sims1 ; sims2 . 
On the physical side, the uniform model is relevant to the Lévy-like super-diffusive motion of a gold nanocluster on a plane of graphite nanocluster and of a graphene flake placed on a graphene sheet graphene2 . LWs were also shown, under certain conditions, to be the optimal strategy for searching for random sparse targets optimalsearch ; search . The performance of searchers using different types of 2D LWs (for example, under specific target arrangements) is a prospective topic benichou . Finally, it would be interesting to explore the non-universal behavior of systems driven by different types of multi-dimensional Lévy noise span2 ; span3 ; span4 . Acknowledgements. This work was supported by the Russian Science Foundation grant No. 16-12-10496 (VZ and SD). IF and EB acknowledge the support of the Israel Science Foundation. References (1) M. F. Shlesinger, J. Klafter, and Y. Wong, J. Stat. Phys. 27, 499 (1982). (2) M. F. Shlesinger, B. J. West, and J. Klafter, Phys. Rev. Lett. 58, 1100 (1987). (3) V. Zaburdaev, S. Denisov, and J. Klafter, Rev. Mod. Phys. 87, 483 (2015). (4) J. Klafter, M. F. Shlesinger, and G. Zumofen, Phys. Today 49, 33 (1996). (5) Y. Sagi, M. Brook, I. Almog, and N. Davidson, Phys. Rev. Lett. 108, 093002 (2012). (6) T. H. Harris, E. J. Banigan, D. A. Christian, C. Konradt, E. D. Tait Wojno, K. Norose, E. H. Wilson, B. John, W. Weninger, A. D. Luster, A. J. Liu, and C. A. Hunter, Nature 486, 545 (2012). (7) G. Ariel, A. Rabani, S. Benisty, J. D. Partridge, R. M. Harshey, and A. Be'er, Nature Comm. 6, 8396 (2015). (8) D. A. Raichlen, B. M. Wood, A. D. Gordon, A. Z. P. Mabulla, F. W. Marlowe, and H. Pontzer, Proc. Nat. Acad. Sci. USA 111, 728 (2014). (9) G. M. Fricke, F. Asperti-Boursin, J. Hecker, J. Cannon, and M. Moses, Adv. Artif. Life ECAL 12, 1009 (2013). (10) J. Beal, ACM Trans. Auton. Adapt. Syst. 10, 1 (2015). (11) V. Méndez, D. Campos, and F. Bartumeus, Stochastic Foundations in Movement Ecology (Springer, Berlin/Heidelberg, 2014). (12) G. 
Viswanathan, M. da Luz, E. Raposo, and H. Stanley, The Physics of Foraging: An Introduction to Random Searches and Biological Encounters (Cambridge University Press, 2011). (13) O. Bénichou, C. Loverdo, M. Moreau, and R. Voituriez, Rev. Mod. Phys. 83, 81 (2011). (14) B. Gnedenko and A. Kolmogorov, Limit Distributions for Sums of Independent Random Variables (Addison-Wesley, Reading, MA, 1954). (15) F. Spitzer, Principles of Random Walk (Springer, NY, 1976). (16) J. Klafter and I. M. Sokolov, First Steps in Random Walks (Oxford University Press, 2011). (17) K. Pearson, Nature 72, 294 (1905). (18) See Supplemental Material, which includes Refs. si1 ; madrigas ; daniela ; geisel ; egg ; si3 . (19) A. Erdélyi, Tables of Integral Transforms, Vol. 1 (McGraw-Hill, New York, 1954). (20) D. Froemberg, M. Schmiedeberg, E. Barkai, and V. Zaburdaev, Phys. Rev. E 91, 022131 (2015). (21) M. Magdziarz and T. Zorawik, Phys. Rev. E 94, 022130 (2016). (22) T. Geisel, A. Zacherl, and G. Radons, Phys. Rev. Lett. 59, 2503 (1987). (23) J. Klafter and G. Zumofen, Phys. Rev. E 49, 4873 (1994). (24) J. Laskar and P. Robutel, Celest. Mech. Dyn. Astron. 80, 39 (2001). (25) A. W. van der Vaart, Asymptotic Statistics (Cambridge University Press, 1998). (26) A. A. Borovkov and K. A. Borovkov, Asymptotic Analysis of Random Walks: Heavy-Tailed Distributions (Cambridge University Press, 2008). (27) V. Uchaikin and V. Zolotarev, Chance and Stability: Stable Distributions and their Applications, Modern Probability and Statistics (De Gruyter, Berlin/New York, 1999). (28) A. A. Dubkov, B. Spagnolo, and V. V. Uchaikin, Intern. J. Bifurcat. Chaos 18, 2649 (2008). (29) J. Heyman, The Stone Skeleton (Cambridge University Press, 1997). (30) J. P. Taylor-King, R. Klages, S. Fedotov, and R. A. Van Gorder, Phys. Rev. E 94, 012104 (2016). (31) K. Pearson, Proc. Roy. Soc. London 58, 240 (1895). (32) J. F. Kenney and E. S. Keeping, Mathematics of Statistics, Pt. 2 (Princeton, NJ: Van Nostrand, 1951). 
(33) I. Fouxon, S. Denisov, V. Zaburdaev, and E. Barkai, http://www.mpipks-dresden.mpg.de/~denysov/ubprfinal11.pdf. (34) G. M. Zaslavsky, M. Edelman, and B. A. Niyazov, Chaos 7, 159 (1997). (35) S. Marksteiner, K. Ellinger, and P. Zoller, Phys. Rev. A 53, 3409 (1996). (36) Ya. G. Sinai, Russ. Math. Surveys 25, 137 (1970). (37) J.-P. Bouchaud and A. Georges, Phys. Rep. 195, 127 (1990). (38) G. Cristadoro, T. Gilbert, M. Lenci, and D. P. Sanders, Phys. Rev. E 90, 050102 (2014). (39) N. E. Humphries, H. Weimerskirch, and D. W. Sims, Methods Ecol. Evol. 4, 930 (2013). (40) G. C. Hays, T. Bastian, T. K. Doyle, S. Fossette, A. C. Gleiss, M. B. Gravenor, V. J. Hobson, N. E. Humphries, M. K. S. Lilley, N. G. Pade, and D. W. Sims, Proc. R. Soc. B 279, 465 (2012). (41) W. D. Luedtke and U. Landman, Phys. Rev. Lett. 82, 3835 (1999). (42) T. V. Lebedeva et al., Phys. Rev. B 82, 155460 (2010). (43) G. M. Viswanathan, S. V. Buldyrev, S. Havlin, M. G. E. da Luz, E. P. Raposo, and H. E. Stanley, Nature 401, 911 (1999). (44) M. Chupeau, O. Bénichou, and R. Voituriez, Nature Phys. 11, 844 (2015). (45) A. La Cognata, D. Valenti, A. A. Dubkov, and B. Spagnolo, Phys. Rev. E 82, 011121 (2010). (46) C. Guarcello, D. Valenti, A. Carollo, and B. Spagnolo, J. Stat. Mech. Theor. Exp. 054012 (2016). (47) D. Valenti, C. Guarcello, and B. Spagnolo, Phys. Rev. B 89, 214510 (2014). I Supplemental Material I.1 Probability density functions of 2D Lévy walks: general solution The main mathematical tool we use to resolve the integral transport equations is the combined Fourier-Laplace transform with respect to space and time, defined as: $$\hat{f}(\mathbf{k},s)=\int\limits_{0}^{\infty}{\rm d}t\int{\rm d}\mathbf{r}\,\text{e}^{-i\mathbf{k}\mathbf{r}}\text{e}^{-st}f(\mathbf{r},t).$$ (S1) The coordinates in Fourier and Laplace space are $\mathbf{k}$ and $s$, respectively. The corresponding inverse transform is defined in the standard way [S1]. 
In the text and in the following we omit the hat sign and distinguish the transformed functions by their arguments. To find a solution to the system in Eq. (2) of the main text, we apply the Fourier-Laplace transform and use its shift property, $$\int{\rm d}\mathbf{r}\,\text{e}^{-i\mathbf{k}\mathbf{r}}f(\mathbf{r}-\mathbf{a})=\text{e}^{-i\mathbf{k}\mathbf{a}}\hat{f}(\mathbf{k});\quad\int\limits_{0}^{\infty}{\rm d}t\,\text{e}^{-st}\text{e}^{-at}f(t)=\hat{f}(s+a),$$ (S2) to obtain: $$\nu(\mathbf{k},s)=\nu(\mathbf{k},s)\int{\rm d}\mathbf{v}\,\psi(s+i\mathbf{k}\mathbf{v})h(\mathbf{v})+1,$$ (S3) $$P(\mathbf{k},s)=\nu(\mathbf{k},s)\int{\rm d}\mathbf{v}\,\Psi(s+i\mathbf{k}\mathbf{v})h(\mathbf{v}).$$ (S4) The system of integral equations in the normal coordinates and time is thus reduced to a system of two linear equations for the Fourier-Laplace-transformed functions; it is easily resolved to give Eq. (3): $$P(\mathbf{k},s)=\frac{\int{\rm d}\mathbf{v}\,\Psi(s+i\mathbf{k}\mathbf{v})h(\mathbf{v})}{1-\int{\rm d}\mathbf{v}\,\psi(s+i\mathbf{k}\mathbf{v})h(\mathbf{v})}$$ (S5) It is important to note that this answer is valid for both the $XY$ and uniform models, but not for the product model. The technical difficulty of finding the inverse Fourier-Laplace transform lies in the coupled nature of the problem, where space and time enter the same argument. One way to address the problem is to perform an asymptotic analysis. Instead of looking at the full transformed functions we may consider their expansions with respect to small $\mathbf{k},s$, corresponding to large space and time scales in the normal coordinates. There are two such functions in the general answer, Eq. (3) of the main text: $\psi(s+i\mathbf{k}\mathbf{v})$ and $\Psi(s+i\mathbf{k}\mathbf{v})$ (note that these are Laplace transforms only, where the Fourier coordinate $\mathbf{k}$ enters the same argument together with $s$). 
In fact, the two Laplace transforms are related by $\Psi(s)=[1-\psi(s)]/s$; therefore it is sufficient to give the asymptotic expansion of $\psi(s)$ for a small argument: $$\psi(s+i\mathbf{k}\mathbf{v})\simeq 1-\frac{\tau_{0}}{\gamma-1}(s+i\mathbf{k}\mathbf{v})-\tau_{0}^{\gamma}\Gamma[1-\gamma](s+i\mathbf{k}\mathbf{v})^{\gamma}+\frac{\tau_{0}^{2}}{(\gamma-2)(\gamma-1)}(s+i\mathbf{k}\mathbf{v})^{2}+\dots$$ (S6) Depending on the value of $\gamma$, different terms dominate this expansion. Thus, for $\gamma>2$ the zeroth-, first- and second-order terms are dominant, leading to classical diffusion. In the intermediate regime $1<\gamma<2$ the zeroth-order term, the linear term and the term with the fractional power of $s+i\mathbf{k}\mathbf{v}$ are dominant and lead to the Lévy distribution. Using these asymptotic expansions we can now express the propagators. For example, the propagator of the uniform model in the sub-ballistic regime, after integration with respect to $h(\mathbf{v})$, has the asymptotic form: $$P_{\text{uniform}}(\mathbf{k},s)\simeq\frac{1}{s+|\mathbf{k}|^{\gamma}\frac{\tau_{0}^{\gamma-1}v_{0}^{\gamma}\sqrt{\pi}\Gamma[2-\gamma]}{\Gamma[1+\gamma/2]|\Gamma[(1-\gamma)/2]|}}.$$ (S7) The inverse Laplace transform of this expression yields an exponential function, and an additional inverse Fourier transform in polar coordinates leads to the 2D Lévy distribution discussed in the main text. We can also write down a similar asymptotic result for the central part of the PDF in the sub-ballistic regime $1<\gamma<2$ for an arbitrary (but symmetric) velocity distribution $h(\mathbf{v})=h(-\mathbf{v})$ and arbitrary dimension $d$: $$P({\bf r},t)\simeq\int\exp\left[i{\bf k}\cdot{\bf r}-t\tilde{A}\left|\cos\left({\pi\gamma\over 2}\right)\right|\langle\left|{\bf k}\cdot{\bf v}\right|^{\gamma}\rangle\right]{{\rm d}{\bf k}\over(2\pi)^{d}},$$ (S8) where $\tilde{A}=(\tau_{0})^{\gamma-1}\Gamma[2-\gamma]$. 
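The leading terms of the expansion (S6) can be checked numerically before moving on: at $\mathbf{k}=0$ one compares a direct numerical Laplace transform of $\psi(\tau)$ with the first three terms. A brief sketch (our illustration, with arbitrarily chosen $\gamma=1.5$, $\tau_{0}=1$):

```python
import numpy as np
from math import gamma as G

gam, tau0, s = 1.5, 1.0, 0.01          # illustrative values, 1 < gam < 2, small s
tau = np.linspace(0.0, 5000.0, 2_000_001)
psi = (gam / tau0) * (1.0 + tau / tau0) ** (-1.0 - gam)
f = psi * np.exp(-s * tau)
numeric = (f[1:] + f[:-1]).sum() * 0.5 * (tau[1] - tau[0])   # trapezoidal Laplace transform

# Three leading terms of Eq. (S6) at k = 0; G(1 - gam) = Gamma(-0.5) < 0 here.
approx = 1.0 - tau0 * s / (gam - 1.0) - tau0**gam * G(1.0 - gam) * s**gam
print(numeric, approx)   # differ only at O(s^2)
```

The residual is set by the quadratic term of (S6), $\tau_{0}^{2}s^{2}/[(\gamma-2)(\gamma-1)]\approx-4\times 10^{-4}$ for these parameters.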
The spatial statistics of $P({\bf r},t)$ is controlled by the average over the velocity PDF via the function $\langle\left|{\bf k}\cdot{\bf v}\right|^{\gamma}\rangle$, which depends on the direction of ${\bf k}$. This is very different from the Gaussian case, where only the covariance matrix of the velocities enters in the asymptotic limit of $P({\bf r},t)$. In this sense the PDF of Lévy walks, Eq. (S8), is non-universal when compared with one-dimensional Lévy walks or normal $d$-dimensional diffusion. I.2 Difference between the XY and product models in the sub-ballistic regime The asymptotic analysis shows that in the bulk the $XY$ model is identical to the product model. Therefore the cross-section of the $XY$ PDF along the $x$ (or $y$) axis is well approximated by the standard 1D Lévy distribution (see Fig. 3a,b), similarly to the product model. There is, however, a significant deviation at the tail, close to the ballistic front. Let us look at the $XY$ model closer to the front. Consider the density on the $x$ axis at some point $x\lesssim vt$. If a particle is that far along the $x$ axis, it means that this particle has spent at most a time $t_{y}=t-x/v$ on its walks along the $y$ direction. Therefore, if we look at the PDF along the $y$ direction, it has been 'built' only by those particles which spent a time $t_{y}$ evolving along this direction (note that for the product model the time of the walks along both directions is the same $t$, as the two walks are independent). Therefore, for a given moment of time, the PDF along the $x$-axis of the $XY$ model is not the 1D Lévy distribution uniformly scaled with the pre-factor $1/t^{1/\gamma}$ (as in the product model) but a product of the Lévy distribution and the non-homogeneous factor $t^{1/\gamma}/(t-x/v)^{1/\gamma}$; this pre-factor diverges as the particle gets closer to the front (see Fig. 3a) but it is integrable, so that the total number of particles is still conserved. 
I.3 Probability density functions of 2D Lévy walks in the ballistic regime For the product model, the PDF is given by the product of two PDFs of the one-dimensional Lévy walk. In the ballistic regime, the PDF of the one-dimensional Lévy walk is the Lamperti distribution [S2]. For some particular values of $\gamma$ it has a compact form. For example, for $\gamma=1/2$ we have $P_{\text{LW}}^{(\gamma=1/2)}(x,t)=\pi^{-1}(v_{0}^{2}t^{2}-x^{2})^{-1/2}$, and therefore in 2D we get $$P_{\text{prod}}^{(\gamma=1/2)}(x,y,t)=\frac{1}{\pi^{2}(v_{0}^{2}t^{2}-x^{2})^{1/2}(v_{0}^{2}t^{2}-y^{2})^{1/2}}.$$ (S9) For the $XY$ model, the asymptotic expression for the propagator in the ballistic regime $0<\gamma<1$ is $$P_{XY}(k_{x},k_{y},s)=\frac{(s+ik_{x}v_{0})^{\gamma-1}+(s-ik_{x}v_{0})^{\gamma-1}+(s+ik_{y}v_{0})^{\gamma-1}+(s-ik_{y}v_{0})^{\gamma-1}}{(s+ik_{x}v_{0})^{\gamma}+(s-ik_{x}v_{0})^{\gamma}+(s+ik_{y}v_{0})^{\gamma}+(s-ik_{y}v_{0})^{\gamma}}.$$ (S10) For the uniform model, the answer is more compact: $$P_{\text{uniform}}(\mathbf{k},s)=\frac{\int_{0}^{2\pi}(s+ikv_{0}\cos\varphi)^{\gamma-1}\mathrm{d}\varphi}{\int_{0}^{2\pi}(s+ikv_{0}\cos\varphi)^{\gamma}\mathrm{d}\varphi},$$ (S11) where $k=|\mathbf{k}|$. The integrals over the angle $\varphi$ can be calculated, yielding hypergeometric functions; the remaining technical difficulty is to calculate the inverse transforms. Recently it was shown, in the one-dimensional case, that the propagators of the ballistic regime can be calculated without explicitly performing the inverse transforms [S2]. 
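Equation (S9) factorizes into two arcsine (Lamperti) marginals on $(-v_{0}t,v_{0}t)$, so it can be sampled by inverting the arcsine CDF, $F(x)=1/2+\arcsin(x/(v_{0}t))/\pi$. A quick check (the inverse-CDF draw and the parameter choice $v_{0}=t=1$ are illustrative assumptions): the exact second moment is $\langle x^{2}\rangle=(v_{0}t)^{2}/2$, and since $x$ and $y$ are independent the cross-moment ratio $\langle x^{2}y^{2}\rangle/(\langle x^{2}\rangle\langle y^{2}\rangle)$ equals one.

```python
import numpy as np

# Sampling sketch for Eq. (S9): in the ballistic regime with gamma = 1/2 the
# product model factorizes into two independent arcsine marginals.  Draw from
# the arcsine law by inverting F(x) = 1/2 + arcsin(x/(v0*t))/pi.
rng = np.random.default_rng(1)
v0, t, n = 1.0, 1.0, 200000
u = rng.random((n, 2))
xy = v0 * t * np.sin(np.pi * (u - 0.5))   # independent arcsine samples for x, y

m2 = np.mean(xy[:, 0] ** 2)               # exact value: (v0*t)^2 / 2 = 0.5
corr = np.mean(xy[:, 0] ** 2 * xy[:, 1] ** 2) / (
    np.mean(xy[:, 0] ** 2) * np.mean(xy[:, 1] ** 2))  # ~ 1 by independence
print(m2, corr)
```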
This approach has since been generalized to the 2D case for the uniform model [S3], where the density of particles was shown to be described by the following expression: $$P(\mathbf{r},t)_{\text{uniform}}=-\frac{1}{\pi^{3/2}t^{2}}D_{-}^{1/2}\Big{\{}\frac{1}{x^{1/2}}\operatorname{Im}\frac{{}_{2}F_{1}((1-\gamma)/2,1-\gamma/2;1;\frac{1}{x})}{{}_{2}F_{1}(-\gamma/2,(1-\gamma)/2;1;\frac{1}{x})}\Big{\}}\left(\frac{\mathbf{r}^{2}}{t^{2}}\right)$$ (S12) where $D_{-}^{1/2}$ is the right-side Riemann-Liouville fractional derivative of order $1/2$ (see [S3] for further details). It would be interesting to try to extend this formalism to random walks without rotational symmetry, such as the $XY$ model. The PDFs for the three models in the ballistic regime are shown in Fig. S1. I.4 MSD and higher moments In the case of an unbiased random walk that starts at zero, the MSD is defined as $$\left<\mathbf{r}^{2}(t)\right>=\int{\rm d}\mathbf{r}\,\,\mathbf{r}^{2}P(\mathbf{r},t).$$ (S13) The asymptotic behavior of the MSD can be calculated by differentiating the PDF in Fourier-Laplace space twice with respect to $\mathbf{k}$ and then setting $\mathbf{k}=0$: $$\left<\mathbf{r}^{2}(t)\right>=-\nabla_{\mathbf{k}}^{2}\left.P(\mathbf{k},t)\right|_{\mathbf{k}=0}.$$ (S14) In fact, we can see from Eq. (S5) that $\nabla_{\mathbf{k}}$ applied to $P(\mathbf{k},s)$ is equivalent to $-i\mathbf{v}\frac{{\rm d}}{{\rm d}s}$. Therefore all terms with first-order derivatives disappear upon integration with a symmetric velocity distribution. 
Only the terms with second derivatives contribute to the answer: $$\left.-\nabla_{\mathbf{k}}^{2}P(\mathbf{k},s)\right|_{\mathbf{k}=0}=\int{\rm d}\mathbf{v}\,\mathbf{v}^{2}h(\mathbf{v})\left\{\frac{\Psi(s)\cdot\psi^{\prime\prime}(s)}{[1-\psi(s)]^{2}}+\frac{\Psi^{\prime\prime}(s)}{1-\psi(s)}\right\}.$$ (S15) Now, to calculate the scaling of the MSD in real time for the different regimes of diffusion, we need to take the corresponding expansions of $\psi(s)$ and $\Psi(s)$ for small $s$, Eq. (S6), and perform the inverse Laplace transform. As a result we obtain $$\left<\mathbf{r}^{2}(t)\right>=\left\{\begin{array}[]{cc}\vspace{5pt}v_{0}^{2}(1-\gamma)t^{2}&0<\gamma<1\\ \vspace{5pt}\frac{2v_{0}^{2}\tau_{0}^{\gamma-1}(\gamma-1)}{(3-\gamma)(2-\gamma)}t^{3-\gamma}&1<\gamma<2\\ \frac{2v_{0}^{2}\tau_{0}}{\gamma-2}t&\gamma>2\end{array}\right.$$ (S16) It is remarkable that for $0<\gamma<1$ the scaling exponent of the MSD is independent of the tail exponent $\gamma$ of the flight time distribution. An important observation is that the models are indistinguishable on the basis of their MSD behavior, see Fig. S2 (left panel). Similarly, we compute the fourth moments and find that they scale identically for all three models (though the prefactors are model-specific, see the second column in Table S1). The difference between the models becomes tangible in the cross-moments $\langle x^{2}y^{2}\rangle$. We use these moments to define a generalized Pearson coefficient, denoted by $PC$: $$PC(t)=\frac{\langle x^{2}(t)y^{2}(t)\rangle}{\langle x^{2}(t)\rangle\langle y^{2}(t)\rangle}.$$ (S17) We concentrate on the sub-ballistic super-diffusive regime and summarize the corresponding results in Table S1. I.5 Angular dependence of the Pearson coefficient As mentioned in the main text, the value of $PC$ depends on the choice of the coordinate frame. 
To quantify the dependence of the $PC$ on the orientation of the frame of reference we consider random walks $\{x^{\prime}(t),y^{\prime}(t)\}$ obtained by rotating the three original random walks (uniform, product and $XY$) by an arbitrary angle $0\leq\phi\leq\pi/2$ with respect to the $x$-axis and calculate the corresponding $PC[\phi]$. The coordinates in the rotated and original frames are related via: $$\{x^{\prime}(t),y^{\prime}(t)\}=\{x(t)\cos\phi-y(t)\sin\phi,\,x(t)\sin\phi+y(t)\cos\phi\}.$$ (S18) After simple algebra, discarding terms with odd powers of $x$ and $y$ (which vanish after averaging due to symmetry), we obtain: $$PC[\phi]=\frac{\langle x^{2}y^{2}\rangle(\cos^{4}\phi+\sin^{4}\phi-\sin^{2}2\phi)+(\langle x^{4}\rangle+\langle y^{4}\rangle)\cos^{2}\phi\sin^{2}\phi}{\langle x^{2}\rangle\langle y^{2}\rangle(\cos^{4}\phi+\sin^{4}\phi)+(\langle x^{2}\rangle^{2}+\langle y^{2}\rangle^{2})\cos^{2}\phi\sin^{2}\phi}.$$ (S19) This expression can now be evaluated by using the fourth- and second-order cross moments of the original random walks given in Table S1. It can be shown that for the product and $XY$ models and for any $\phi\neq\{0,\pi/2\}$, $PC(t;\phi)\propto t^{\gamma-1}$, whereas for the uniform model $PC(t;\phi)$ is naturally independent of the angle $\phi$. The angle-dependent Pearson coefficients are different for all three models, see Fig. S3a. Importantly, the Pearson coefficient contains only one “unknown” parameter, $\tau_{0}$, the scaling parameter of the flight time PDF (recall that the exponent $\gamma$ of the power-law tail in the flight time distribution can be determined from the scaling of the MSD). Thus measuring $PC(t;\phi)$ at, for example, two different time points would allow for an unambiguous determination of $\tau_{0}$ and of the type of the random walk model. An interesting observation can be made by numerically calculating $PC[\phi]$ as a function of time. 
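The rotated-frame Pearson coefficient follows from expanding $\langle x^{\prime 2}y^{\prime 2}\rangle$ and $\langle x^{\prime 2}\rangle\langle y^{\prime 2}\rangle$ under the rotation (S18) and dropping odd moments. A sketch with two exact sanity checks (the Gaussian moment values used here are illustrative assumptions, not Table S1 entries): an isotropic distribution must give $PC[\phi]=1$ for every $\phi$, and the fully coupled caricature $y=x$ gives $PC[0]=3$ but $PC[\pi/4]=0$, since a $45^{\circ}$ rotation sends all the mass onto one axis.

```python
import numpy as np

def pc_rotated(phi, m2x, m2y, m4x, m4y, m22):
    """Pearson coefficient in a frame rotated by phi, from the moments
    <x^2>, <y^2>, <x^4>, <y^4>, <x^2 y^2> of the original frame (odd
    moments are assumed to vanish by symmetry).  Note sin^2(2 phi) =
    4 cos^2(phi) sin^2(phi)."""
    c2, s2 = np.cos(phi) ** 2, np.sin(phi) ** 2
    num = m22 * (c2 ** 2 + s2 ** 2 - 4.0 * c2 * s2) + (m4x + m4y) * c2 * s2
    den = m2x * m2y * (c2 ** 2 + s2 ** 2) + (m2x ** 2 + m2y ** 2) * c2 * s2
    return num / den

phi = np.linspace(0.0, np.pi / 2, 7)
# isotropic Gaussian (m2 = 1, m4 = 3, m22 = 1): PC[phi] = 1 for every phi
print(pc_rotated(phi, 1.0, 1.0, 3.0, 3.0, 1.0))
# fully coupled caricature y = x (m22 = <x^4> = 3): 3 at phi = 0, 0 at pi/4
print(pc_rotated(phi, 1.0, 1.0, 3.0, 3.0, 3.0))
```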
While the shape of the dependence remains the same for the uniform model (flat at any time) and the $XY$ model (see Fig. S3c), for the product model it changes qualitatively. The minima of $PC_{\text{prod}}(t;\phi)$ at short times transform into maxima at large times, passing through an intermediate flat shape, see Fig. S3b. This quantifies the transition we also observe by sampling individual trajectories. It is a consequence of the fact that at short times the preferred directions of motion are the diagonals and anti-diagonals (inset in Fig. 1d), whereas at large times the trajectories resemble those of the $XY$ model (Fig. 1d,e). Correspondingly, the maximum of $PC[\phi]$ “shifts” by $\pi/4$ as time progresses. At the same time, the positions of the maxima and minima of $PC[\phi]$ for the $XY$ model are time-invariant. This effect can be used to distinguish between the two models. I.6 Numerical sampling procedure The statistical sampling of the model PDFs was performed on the GPU clusters of the Max Planck Institute for the Physics of Complex Systems (Dresden) (six NVIDIA M2050 cards) and the University of Augsburg (four TESLA K20XM cards). Together with the straightforward parallelization of the sampling process this allowed us to collect up to $10^{14}$ realizations for every set of parameters. I.7 Hamiltonian diffusion in an egg-crate potential We use the model from Refs. [S4,S5]. The model Hamiltonian has the form $$H(x,y,p_{x},p_{y})=E=\frac{p_{x}^{2}}{2}+\frac{p_{y}^{2}}{2}+A+B(\cos x+\cos y)+C\cos x\cos y,$$ (S20) with the particular values of the parameters $A=2.5$ (this parameter can be dropped since it does not enter the corresponding equations of motion; however, we preserve the original form of the Hamiltonian given in Refs. [S4,S5]), $B=1.5$, and $C=0.5$. 
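The equations of motion that follow from this Hamiltonian are easy to integrate; as an illustration, here is a plain velocity-Verlet (leapfrog) sketch, a second-order symplectic scheme rather than the higher-order SBAB$_{5}$ integrator of [S6] actually used in the paper, so energy is conserved only to $O(\triangle t^{2})$ (the initial condition is an arbitrary choice):

```python
import numpy as np

A, B, C = 2.5, 1.5, 0.5          # parameters of the egg-crate potential

def force(x, y):
    # F = -grad V with V = A + B(cos x + cos y) + C cos x cos y
    fx = B * np.sin(x) + C * np.sin(x) * np.cos(y)
    fy = B * np.sin(y) + C * np.cos(x) * np.sin(y)
    return fx, fy

def energy(x, y, px, py):
    return 0.5 * (px ** 2 + py ** 2) + A + B * (np.cos(x) + np.cos(y)) \
        + C * np.cos(x) * np.cos(y)

dt, nsteps = 1e-3, 20000
x, y, px, py = 1.0, 0.3, 0.9, -0.4        # arbitrary initial condition
e0 = energy(x, y, px, py)
fx, fy = force(x, y)
for _ in range(nsteps):                   # velocity-Verlet step
    px += 0.5 * dt * fx; py += 0.5 * dt * fy
    x += dt * px;        y += dt * py
    fx, fy = force(x, y)
    px += 0.5 * dt * fx; py += 0.5 * dt * fy
drift = abs(energy(x, y, px, py) - e0) / abs(e0)
print(f"relative energy drift after t = {dt * nsteps}: {drift:.2e}")
```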
To integrate the equations of motion, we use the high-order symplectic integrator $SBAB_{5}$ [S6] with time step $\triangle t=10^{-3}$. The energy of the system was conserved to within $|\triangle E/E|<10^{-10}$ during the whole integration time. A typical trajectory of the system is shown in Fig. S4 (left panel). The PDF of the corresponding dispersal for time $t=10^{3}$ was sampled with $10^{8}$ realizations; the obtained result is shown in Fig. S4 (right panel). Figure S5 presents the results of numerical simulations performed to estimate the Pearson coefficient $PC$ of the dispersals for two different values of the energy $E$. References [S1] A. Erdélyi, Tables of Integral Transforms, Vol. 1 (McGraw-Hill, New York, 1954). [S2] D. Froemberg, M. Schmiedeberg, E. Barkai, and V. Zaburdaev, Asymptotic densities of ballistic Lévy walks, Phys. Rev. E 91, 022131 (2015). [S3] M. Magdziarz and T. Zorawik, Explicit densities of multidimensional ballistic Lévy walks, Phys. Rev. E 94, 022130 (2016). [S4] T. Geisel, A. Zacherl, and G. Radons, Generic $1/f$ noise in chaotic Hamiltonian dynamics, Phys. Rev. Lett. 59, 2503–2506 (1987). [S5] J. Klafter and G. Zumofen, Lévy statistics in a Hamiltonian system, Phys. Rev. E 49, 4873–4877 (1994). [S6] J. Laskar and P. Robutel, High order symplectic integrators for perturbed Hamiltonian systems, Celest. Mech. Dyn. Astron. 80, 39 (2001).
From interacting particle systems to random matrices Contribution to StatPhys24 special issue Patrik L. Ferrari Institute for Applied Mathematics, University of Bonn, Endenicher Allee 60, 53115 Bonn, Germany; E-mail: [email protected]. This work was supported by the DFG (German Research Foundation) through the SFB 611, project A12. (September 17, 2010) Abstract In this contribution we consider stochastic growth models in the Kardar-Parisi-Zhang universality class in $1+1$ dimension. We discuss the large time distributions and processes and their dependence on the class of initial condition. This means that the scaling exponents do not uniquely determine the large time surface statistics, but one has to further divide into subclasses. Some of the fluctuation laws were first discovered in random matrix models. Moreover, the limit process for a curved limit shape turns out to show up in a dynamical version of Hermitian random matrices, but this analogy does not extend to the case of symmetric matrices. Therefore the connection between growth models and random matrices is only partial. 1 Introduction In this paper we discuss results in the Kardar-Parisi-Zhang (KPZ) universality class of stochastic growth models. We focus on the connections with random matrices occurring in the one-dimensional case. Consider a surface described by a height function $x\mapsto h(x,t)$ with $x\in\mathbb{R}^{d}$ denoting space and $t\in\mathbb{R}$ being the time variable, subject to a random dynamics. If the growth mechanism is local and there is a smoothing mechanism providing a deterministic macroscopic growth, then the macroscopic evolution of the interface will be governed by $$\frac{\partial h}{\partial t}=v(\nabla h)$$ (1) where $u\mapsto v(u)$ is the macroscopic growth velocity as a function of the surface slope $u$. In this context, we can also focus on a mesoscopic scale where the random nature of the dynamics is still visible. 
In the famous paper of Kardar, Parisi and Zhang [35], the smoothing mechanism is related to the surface tension and takes the form $\nu\Delta h$, while the local random dynamics enters as a space-time white noise $\eta$. Moreover, the Taylor expansion of $v$ for small slopes111The order $0$ and $1$ terms in the Taylor expansion can be set to zero by a simple change of (moving) frame. results in the KPZ equation222In more than one dimension, $(\nabla h)^{2}$ should be replaced by $\langle\nabla h,C\nabla h\rangle$ with $C$ a matrix. One then distinguishes the isotropic class, if all the eigenvalues of $C$ have the same sign, and anisotropic class(es) otherwise. For instance, in $d=2$ the surface fluctuations are very different: in the anisotropic case they are normally distributed on the $\sqrt{\ln(t)}$ scale [54, 38] and the correlations are the ones of the massless free field [9, 11]; in the isotropic case it is numerically known that they grow as $t^{\alpha}$ for some $\alpha\simeq 0.240$ [48]. $$\frac{\partial h(x,t)}{\partial t}=\nu\Delta h(x,t)+\frac{1}{2}\lambda(\nabla h(x,t))^{2}+\eta(x,t),$$ (2) where $\lambda=v^{\prime\prime}(0)\neq 0$ in the non-linear term is responsible for the lateral spread of the surface and the lack of time reversibility333When $\lambda=0$ we are in the Edwards-Wilkinson class [20] and the fluctuations are Gaussian with fluctuation exponent $1/4$.. From now on we consider the one-dimensional case, $d=1$. Denote by $h_{\rm ma}$ the limit shape, $$h_{\rm ma}(\xi):=\lim_{t\to\infty}\frac{h(\xi t,t)}{t}.$$ (3) The fluctuation exponent is $1/3$ and the spatial correlation exponent is $2/3$ [29, 52]. This means that the height fluctuations grow in time as $t^{1/3}$ and spatial correlations extend over distances of order $t^{2/3}$, i.e., the rescaled height function at time $t$ around the macroscopic position $\xi$ (at which $h_{\rm ma}$ is smooth)444However, depending on the initial conditions, (2) can produce spikes in the macroscopic shape. 
If one looks at the surface gradient $u=\nabla h$, the spikes of $h$ correspond to shocks in $u$, and it is known that the shock position fluctuates on a scale $t^{1/2}$. For particular models, properties of the shocks have been analyzed, but mostly for stationary growth (see [18, 22, 15] and references therein). $$h^{\rm resc}_{t}(u)=\frac{h(\xi t+ut^{2/3},t)-th_{\rm ma}(\xi+ut^{-1/3})}{t^{1/3}}$$ (4) converges to a non-trivial limit process in the $t\to\infty$ limit. Concerning correlations in space-time, it is known that along special directions the decorrelation occurs only on the macroscopic scale (i.e., with scaling exponent $1$), while along any other direction the correlations are asymptotically like the spatial correlations at fixed time [24, 17]. The special directions are the characteristic solutions of the PDE for the macroscopic height gradient. More precisely, denote by $\xi$ and $\tau$ the macroscopic variables for space and time. Also, let $$\bar{h}(\xi,\tau):=\lim_{t\to\infty}t^{-1}h(\xi t,\tau t)\quad\textrm{and}\quad u(\xi,\tau):=\partial\bar{h}(\xi,\tau)/\partial\xi.$$ (5) Then, $u$ satisfies the PDE $$\partial u/\partial\tau+a(u)\partial u/\partial\xi=0\quad\textrm{where}\quad a(u)=-\partial v(u)/\partial u$$ (6) with $v$ the macroscopic speed of growth555In the asymmetric exclusion process explained below, one usually considers the particle density $\rho$ instead of $u$. This is however just a rotation of the frame, since they are simply related by $u=1-2\rho$. The PDE for $\rho$ is the well-known Burgers equation.. The characteristic solutions of (6) are the trajectories satisfying $\partial\xi/\partial\tau=a(u)$ and $\partial u/\partial\tau=0$ (see e.g. [21, 53] for more insights on characteristic solutions). 
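The macroscopic picture can be made concrete with a small numerical check (not part of the paper's argument): for a concave $v$ and the convex step profile $h(x,0)=|x|$, the limit shape solves a Hopf-type variational formula, $h_{\rm ma}(\xi)=\max_{|u|\leq 1}[u\xi+v(u)]$. With the TASEP velocity $v(u)=(1-u^{2})/2$ used later in the text this reproduces the parabola-plus-straight-lines shape of Eq. (9).

```python
import numpy as np

# Sketch: evaluate h_ma(xi) = max over |u| <= 1 of [u*xi + v(u)] on a grid of
# slopes, with the TASEP growth velocity v(u) = (1 - u^2)/2, and compare with
# the exact limit shape (1 + xi^2)/2 inside the rarefaction fan |xi| <= 1 and
# |xi| outside it.
u = np.linspace(-1.0, 1.0, 4001)
v = 0.5 * (1.0 - u ** 2)

def h_ma(xi):
    return np.max(xi * u + v)

xi_grid = np.linspace(-2.0, 2.0, 81)
shape = np.array([h_ma(x) for x in xi_grid])
exact = np.where(np.abs(xi_grid) <= 1.0,
                 0.5 * (1.0 + xi_grid ** 2), np.abs(xi_grid))
err = np.max(np.abs(shape - exact))
print(f"max deviation from the exact limit shape: {err:.1e}")
```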
The question is therefore to determine the limit process $$u\mapsto\lim_{t\to\infty}h^{\rm resc}_{t}(u)=\textbf{?}$$ (7) One might be tempted to think that the scaling exponents are enough to distinguish between classes of models and therefore that the answer to our question is independent of the initial condition. However, as we will see, this is not true666For other observables the relevance of the initial condition was observed already in [51].. To get an intuition for the relevance of the initial condition, consider the fluctuations of $h(0,t)$ with (a) a deterministic initial condition, $h(x,0)=0$ for all $x\in\mathbb{R}$, and (b) a random but still macroscopically flat initial condition, $h(x,0)$ a two-sided Brownian motion with $h(0,0)=0$. In both cases, the height function $h(0,t)$ is correlated with a neighborhood of $x=0$ of order $t^{2/3}$. In this region (at time $t=0$) the fluctuations are absent in case (a), while in case (b) the fluctuations of the initial condition are of order $t^{1/3}$: this is the same scale as the fluctuations of $h(0,t)$, and therefore the fluctuation laws for (a) and (b) will be different. How should we proceed to answer our question? Taken literally, the KPZ equation (2) is ill-defined, because locally one will see a Brownian motion and the problem comes from the square of a white noise (in the non-linear term). However, it is possible to make sense of a solution of the KPZ equation, as shown in [44, 2], and this solution agrees with the one coming from discrete approximations/models (the weakly asymmetric simple exclusion process [7]). These works also provide an explicit solution of the finite time one-point distribution for an important initial condition, see [43] for more explanations. Another point of view is to see the KPZ equation as one of the models in the KPZ universality class of growth models. 
By universality it is expected that the limit processes do not depend on the model in the class (but they do depend on the type of initial condition). From this perspective, we can take any of the models in the KPZ class and try to obtain the large time limit. In the rest of the paper we consider one such model, the totally asymmetric simple exclusion process (TASEP), for which the asymptotic processes have been unraveled. Another model for which analogous results have been determined is the polynuclear growth (PNG) model777At least half of the limit results described below were first obtained for the PNG model.. In particular, we will discuss which limit distributions/processes also appear in random matrix theory and when the connection is only partial. 2 TASEP The totally asymmetric simple exclusion process (TASEP) in continuous time is the simplest non-reversible interacting stochastic particle system. In TASEP, particles live on the lattice of integers, $\mathbb{Z}$, with at most one particle at each site (exclusion principle). The dynamics is defined as follows. Particles jump to the neighboring right site with rate $1$ provided that the site is empty. Jumps are independent of each other and take place after an exponential waiting time with mean $1$, which is counted from the time instant when the right neighbor site becomes empty, see Figure 1. Denote by $\eta_{x}(t)$ the occupation variable of site $x\in\mathbb{Z}$ at time $t\geq 0$, i.e., $\eta_{x}(t)$ is $1$ if there is a particle at site $x$ and $0$ if the site is empty. TASEP configurations are in bijection with surface profiles, obtained by setting the origin $h(0,0)=0$ and the discrete height gradient to be $1-2\eta_{x}(t)$. 
If we denote by $N_{t}$ the number of particles which have crossed the bond from $0$ to $1$ during the time span $[0,t]$, then the height function is given by $$h(x,t)=\left\{\begin{array}[]{ll}2N_{t}+\sum_{y=1}^{x}(1-2\eta_{y}(t)),&\textrm{for }x\geq 1,\\ 2N_{t},&\textrm{for }x=0,\\ 2N_{t}-\sum_{y=x+1}^{0}(1-2\eta_{y}(t)),&\textrm{for }x\leq-1,\end{array}\right.$$ (8) as illustrated in Figure 2. Let us verify that TASEP belongs to the KPZ universality class. Under hydrodynamical scaling, the particle density $\rho$ evolves according to the Burgers equation $\partial_{t}\rho+\partial_{x}(\rho(1-\rho))=0$. Thus, we have a deterministic limit shape. The second requirement, the locality of the growth dynamics, is obviously satisfied. Finally, the speed of growth $v$ of the interface is twice the current density, which is given by $\rho(1-\rho)$. Since the gradient is $u=1-2\rho$, it follows that $v(u)=(1-u^{2})/2$, which implies $v^{\prime\prime}(u)=-1\neq 0$. Now we discuss some of the large time results for the TASEP height function888Most of the results have first been computed for particle positions; the statements described below are obtained by a geometric transformation.. We consider two non-random initial conditions generating a curved and a flat macroscopic shape. The limit processes will be called the Airy${}_{2}$ and Airy${}_{1}$ processes, which are defined in Appendix A. 2.1 TASEP with step initial condition Consider the initial condition $\eta_{x}(0)=1$ for $x\leq 0$ and $\eta_{x}(0)=0$ for $x\geq 1$, see Figure 3. This is called the step initial condition. The macroscopic limit shape for this initial condition is a parabola continued by two straight lines: $$h_{\rm ma}(\xi)=\left\{\begin{array}[]{ll}\frac{1}{2}(1+\xi^{2}),&\textrm{for }|\xi|\leq 1,\\ |\xi|,&\textrm{for }|\xi|\geq 1.\end{array}\right.$$ (9) From this we have the scaling999With respect to (4) we adjusted the coefficients to avoid having them in the asymptotic process. 
$$h^{\rm resc}_{t}(u):=\frac{h(2u(t/2)^{2/3},t)-\left(t/2+u^{2}(t/2)^{1/3}\right)}{-(t/2)^{1/3}}.$$ (10) The large time results for the rescaled height function $h^{\rm resc}_{t}$ are the following. First, for the one-point distribution [31], $$\lim_{t\to\infty}\mathbbm{P}\left(h^{\rm resc}_{t}(0)\leq s\right)=F_{2}(s),$$ (11) where $F_{2}$ is known as the GUE Tracy-Widom distribution, first discovered in random matrices [49] (see Section 3). Moreover, concerning the joint distributions, it is proven [14, 10, 32] that (in the sense of finite-dimensional distributions101010In [32] Johansson studied the process in a slightly different cut, but because of slow decorrelation [24, 17] the present result can be proven from it. Remark also that the convergence in [32] is in a stronger sense.) $$\lim_{t\to\infty}h^{\rm resc}_{t}(u)={\cal A}_{2}(u),$$ (12) where ${\cal A}_{2}$ is called the Airy${}_{2}$ process, first discovered in the PNG model by Prähofer and Spohn [41] (see Appendix A)111111The height function of TASEP in discrete time with parallel update and step initial condition is the same as the arctic line in the Aztec diamond, for which the Airy${}_{2}$ process was obtained by Johansson in [33]. Extensions to processes on space-like paths for the PNG model were made in [16]. The tagged particle problem was studied in [30], extensions to space-like paths in [10, 13] and to any space-time paths except the characteristic line in [24, 17].. In particular, the Airy${}_{2}$ process is stationary, locally looks like a Brownian motion and has slowly decaying correlations, like $u^{-2}$ (see Figure 5). By universality it is expected that the Airy${}_{2}$ process describes the large time surface statistics for initial conditions121212Also for random initial conditions, as in the case of Bernoulli-$\rho_{-}$ on $\mathbb{Z}_{-}$ and Bernoulli-$\rho_{+}$ on $\mathbb{Z}_{+}$, $\rho_{-}>\rho_{+}$, see [6, 40]. generating a smooth curved macroscopic shape for models in the KPZ class. 
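The law of large numbers behind (10) is easy to reproduce by direct simulation. A minimal Monte Carlo sketch of continuous-time TASEP with step initial condition (an illustration, not the proof technique of [31]): by the limit shape (9), $h(0,t)=2N_{t}\sim t/2$, i.e. the integrated current through the bond $(0,1)$ satisfies $N_{t}/t\to 1/4$, with the mean at finite $t$ slightly below $1/4$ because of the negative-mean Tracy-Widom correction of order $t^{-2/3}$. The lattice size, observation time and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
L, t_max = 256, 200.0                     # sites -L..L-1; t_max < L keeps the
sites = np.arange(-L, L)                  # rarefaction fan away from the edges
occ = (sites <= 0).astype(int)            # step initial condition
t, N = 0.0, 0                             # time and current through bond 0 -> 1

while True:
    # particles with an empty right neighbor; each jumps with rate 1
    movable = np.flatnonzero((occ[:-1] == 1) & (occ[1:] == 0))
    t += rng.exponential(1.0 / movable.size)     # Gillespie waiting time
    if t >= t_max:
        break                                    # next jump happens after t_max
    i = rng.choice(movable)                      # uniformly chosen allowed jump
    occ[i], occ[i + 1] = 0, 1
    if sites[i] == 0:                            # particle crossed the bond 0 -> 1
        N += 1

print(f"N_t / t = {N / t_max:.3f}  (expected ~ 0.25)")
```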
This happens when the characteristic lines for space-time points on the curved limit shape all meet at a single point at time $t=0$ (for TASEP and the PNG model they are straight lines going back to the origin). 2.2 TASEP with flat initial condition The second type of non-random initial condition we discuss here is called the flat initial condition. In terms of TASEP particles, it is given by $\eta_{x}(0)=1$ for $x$ even and $\eta_{x}(0)=0$ for $x$ odd, see Figure 4. The macroscopic limit shape is very simple, $h_{\rm ma}(\xi)=\frac{1}{2}$, so that the rescaled height function becomes $$h^{\rm resc}_{t}(u):=\frac{h(2ut^{2/3},t)-t/2}{-t^{1/3}}.$$ (13) In the large time limit, the one-point distribution of $h^{\rm resc}_{t}$ is given by131313For the geometric case corresponding to discrete time TASEP this result was proven by Baik and Rains in [5]. $$\lim_{t\to\infty}\mathbbm{P}\left(h^{\rm resc}_{t}(0)\leq s\right)=F_{1}(2s),$$ (14) where $F_{1}$ is known as the GOE Tracy-Widom distribution, first discovered in random matrices [50] (see Section 3). Moreover, as a process, it was discovered by Sasamoto [42, 12] and it is proven that (in the sense of finite-dimensional distributions) $$\lim_{t\to\infty}h^{\rm resc}_{t}(u)={\cal A}_{1}(u),$$ (15) where ${\cal A}_{1}$ is called the Airy${}_{1}$ process (see Appendix A). In particular, the Airy${}_{1}$ process is stationary and behaves locally like a Brownian motion, but, unlike the Airy${}_{2}$ process, its correlations decay superexponentially fast (see Figure 5); see the review [25] for more information and references. The Airy${}_{1}$ process is expected to describe the large time surface behavior for non-random initial conditions generating a straight limit shape for models in the KPZ class. Unlike for the curved limit shape, the characteristic lines141414For density $1/2$ the characteristic lines are all lines parallel to the time axis. For density $\rho$, they have the form $x=x_{0}+(1-2\rho)t$. 
for space-time points on the flat limit shape do not join at the initial time. This fact is at the origin of (a) the different fluctuation behavior between curved and flat and (b) the difference between random flat and non-random flat. 2.3 TASEP with stationary initial condition The only translation invariant stationary measures for continuous time TASEP are the Bernoulli product measures with parameter $\rho\in[0,1]$, which is the density of particles [37]. The cases $\rho\in\{0,1\}$ are degenerate and nothing happens, so consider a fixed $\rho\in(0,1)$. The height function at time $0$ is a two-sided random walk with $h(0,0)=0$ and $\mathbbm{P}(h(x+1,0)-h(x,0)=1)=1-\rho$ and $\mathbbm{P}(h(x+1,0)-h(x,0)=-1)=\rho$. Unlike for the deterministic initial conditions, we need to consider the regions where the height function is non-trivially correlated with $h(0,0)$. The reason is that in the regions at time $t$ which are correlated with $h(\alpha t,0)$, $\alpha\neq 0$, the dynamical fluctuations (of order $t^{1/3}$) are dominated by the fluctuations in the initial condition (of order $t^{1/2}$). The correlations in TASEP are carried by second-class particles, which move with speed $1-2\rho$. As predicted by the KPZ scaling, the limit $$\lim_{t\to\infty}\frac{h((1-2\rho)t+ut^{2/3},t)-(1-2\rho(1-\rho))t}{t^{1/3}}$$ (16) exists. The one-point distribution was derived for the PNG model with external source in [4] and for TASEP in [27] (correctly conjectured using universality in [40]). The extension to joint distributions is worked out in [3]. So far no connection with random matrices is known for this initial condition. Therefore we do not enter into further details. 3 Random Matrices We said that $F_{1}$ and $F_{2}$ appeared first in random matrices. We explain this below and discuss whether the Airy processes also show up in random matrix models. 
3.1 Hermitian matrices The Gaussian Unitary Ensemble (GUE) of random matrices consists of Hermitian matrices $H$ of size $N\times N$ distributed according to the probability measure151515The scaling of this paper is such that the asymptotic density of eigenvalues remains bounded, the macroscopic variable is $N$, and the scaling exponents are easily compared with KPZ. In the literature there are two other standard normalizations, which are just rescalings of the eigenvalues. The first consists of replacing $1/(2N)$ by $N$: this is appropriate if one studies spectral properties, since in the large $N$ limit the spectrum remains bounded. The second is to replace $1/(2N)$ by $1$ so that the measure does not depend on $N$: this is most appropriate if one looks at the eigenvalues of minors. $$p^{\rm GUE}(H)\mathrm{d}H=\frac{1}{Z_{N}}\exp\left(-\frac{1}{2N}\operatorname*{Tr}(H^{2})\right)\mathrm{d}H,$$ (17) where $\mathrm{d}H=\prod_{i=1}^{N}\mathrm{d}H_{i,i}\prod_{1\leq i<j\leq N}\mathrm{d}\mathrm{Re}(H_{i,j})\mathrm{d}\mathrm{Im}(H_{i,j})$ is the reference measure (and $Z_{N}$ the normalization constant). Denote by $\lambda_{N,\rm max}^{\rm GUE}$ the largest eigenvalue of an $N\times N$ GUE matrix. Then Tracy and Widom [49] proved that the asymptotic distribution of the (properly rescaled) largest eigenvalue is $F_{2}$ (see Figure 6): $$\lim_{N\to\infty}\mathbbm{P}\left(\frac{\lambda^{\rm GUE}_{N,\rm max}-2N}{N^{1/3}}\leq s\right)=F_{2}(s).$$ (18) The parallel between GUE and TASEP with step initial condition goes even further. In 1962 Dyson [19] introduced a matrix-valued Ornstein-Uhlenbeck process which is now called Dyson's Brownian Motion (DBM). For Hermitian matrices, the GUE DBM is the stationary process on matrices $H(t)$ whose evolution is governed by $$\mathrm{d}H(t)=-\frac{1}{2N}H(t)\mathrm{d}t+\mathrm{d}B(t)$$ (19) where $\mathrm{d}B(t)$ is a (Hermitian) matrix-valued Brownian motion. 
More precisely, the entries $B_{i,i}(t)$, $1\leq i\leq N$, $\mathrm{Re}(B_{i,j})(t)$ and $\mathrm{Im}(B_{i,j})(t)$, $1\leq i<j\leq N$, perform independent Brownian motions with variance $t$ for the diagonal terms and $t/2$ for the remaining entries. Denote by $\lambda^{\rm GUE}_{N,\rm max}(t)$ the largest eigenvalue at time $t$ (when started from the stationary measure (17)). Its evolution is, in the large $N$ limit, governed by the Airy${}_{2}$ process: $$\lim_{N\to\infty}\frac{\lambda^{\rm GUE}_{N,\rm max}(2uN^{2/3})-2N}{N^{1/3}}={\cal A}_{2}(u).$$ (20) Thus we have seen that the connection between GUE and TASEP extends to the process161616This connection extends partially to the evolution of minors, see [26, 1].. 3.2 Symmetric matrices The Gaussian Orthogonal Ensemble (GOE) of random matrices consists of symmetric matrices $H$ of size $N\times N$ distributed according to $$p^{\rm GOE}(H)\mathrm{d}H=\frac{1}{Z_{N}}\exp\left(-\frac{1}{4N}\operatorname*{Tr}(H^{2})\right)\mathrm{d}H,$$ (21) where $\mathrm{d}H=\prod_{1\leq i\leq j\leq N}\mathrm{d}H_{i,j}$ is the reference measure (and $Z_{N}$ the normalization constant). Denote by $\lambda^{\rm GOE}_{N,\rm max}$ the largest eigenvalue of an $N\times N$ GOE matrix. The asymptotic distribution of the (properly rescaled) largest eigenvalue is $F_{1}$ (see Figure 6) [50]: $$\lim_{N\to\infty}\mathbbm{P}\left(\frac{\lambda^{\rm GOE}_{N,\rm max}-2N}{N^{1/3}}\leq s\right)=F_{1}(s).$$ (22) DBM is defined also for symmetric matrices: the GOE DBM is the stationary process on matrices $H(t)$ whose evolution is governed by $$\mathrm{d}H(t)=-\frac{1}{4N}H(t)\mathrm{d}t+\mathrm{d}B(t)$$ (23) where $\mathrm{d}B(t)$ is a symmetric matrix-valued Brownian motion. More precisely, the entries $B_{i,j}(t)$, $1\leq i\leq j\leq N$, perform independent Brownian motions with variance $t$ for the diagonal terms and $t/2$ for the remaining entries. 
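The normalizations (17) and (21) can be checked by direct sampling (an illustration, not from the paper): with density $\exp(-\operatorname{Tr}(H^{2})/(2N))$ for GUE and $\exp(-\operatorname{Tr}(H^{2})/(4N))$ for GOE, the spectrum fills $[-2N,2N]$, so the mean largest eigenvalue sits near $2N$, below it by the $O(N^{1/3})$ negative-mean Tracy-Widom shift of (18) and (22). The values of $N$, the sample count and the seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(4)
N, samples = 100, 20

def lmax_mean(ensemble):
    out = []
    for _ in range(samples):
        if ensemble == "GUE":
            G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
            H = np.sqrt(N) * (G + G.conj().T) / 2  # Var(H_ii)=N, Var(Re/Im H_ij)=N/2
        else:
            G = rng.standard_normal((N, N))
            H = np.sqrt(N / 2) * (G + G.T)         # Var(H_ii)=2N, Var(H_ij)=N
        out.append(np.linalg.eigvalsh(H).max())    # exact Hermitian eigensolver
    return np.mean(out)

ratio_gue = lmax_mean("GUE") / (2 * N)
ratio_goe = lmax_mean("GOE") / (2 * N)
print(f"<lambda_max>/2N: GUE {ratio_gue:.3f}, GOE {ratio_goe:.3f}")
```

Both ratios come out slightly below one, consistent with the negative means of the Tracy-Widom laws $F_{2}$ and $F_{1}$.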
We consider now the evolution of the largest eigenvalue $\lambda^{\rm GOE}_{N,\rm max}(t)$ (when started from the stationary distribution (21)). By analogy with the GUE case, one might guess that in the large $N$ limit the limit process of the properly rescaled $\lambda^{\rm GOE}_{N,\rm max}(t)$ is the Airy${}_{1}$ process. However, as shown numerically in [8], this is not the case. To see this, we considered the scaling in (24), where the coefficients are chosen such that the variance at $u=0$ is the same as the variance of the Airy${}_{1}$ process, and the covariance for $|u|\ll 1$ coincides at first order with the covariance of ${\cal A}_{1}$. The large time limit of the rescaled largest eigenvalue is denoted by ${\cal B}_{1}$. Comparing the covariances as shown in Figure 7 we conclude that ${\cal A}_{1}\neq{\cal B}_{1}$: $$\lim_{N\to\infty}\frac{\lambda^{\rm GOE}_{N,\rm max}(8uN^{2/3})-2N}{2N^{1/3}}=:{\cal B}_{1}(u)\neq{\cal A}_{1}(u).$$ (24) Thus the connection between GOE and TASEP does not extend to multi-time distributions171717The connection extends to the top eigenvalues too [23], but still only at a fixed time.. 4 Conclusion We saw that the large time fluctuations in KPZ growth models depend on the initial conditions. By analyzing special models we determined the limiting distributions and processes, which by universality should be the same for the whole KPZ universality class. The conjecture [39] is that when the limit shape is curved one gets the Airy${}_{2}$ process, while when the limit shape is flat one has to further distinguish according to the roughness exponent $\alpha$ of the initial condition ($|h(x,0)-h(0,0)|\sim|x|^{\alpha}$). For $\alpha=0$ one expects the Airy${}_{1}$ process, for $\alpha=1/2$ the result from the stationary initial condition. 
Finally, for $0<\alpha<1/2$, at the characteristics it should still be Airy${}_{1}$, but there should be a region (at distance of order $t^{1/(3\alpha)}$ from the characteristic lines) with a different (yet unknown) process, which depends on the exponent $\alpha$. The two schemes in Figure 8 summarize the main message of this contribution: (a) TASEP with step initial condition is a representative of models in the KPZ class with curved limit shape. For this model the large time limit distribution is $F_{2}$ and the limit process is the Airy${}_{2}$ process, ${\cal A}_{2}$. The GUE (with DBM dynamics) is a representative of random matrices with hermitian symmetries, and also for this model the $F_{2}$ distribution and the process ${\cal A}_{2}$ arise in the limit of large matrices ($N\to\infty$). (b) TASEP with flat initial condition is one of the KPZ growth models with deterministic initial conditions and straight limit shape. The long time fluctuations are governed by the $F_{1}$ distribution and the limit process is the Airy${}_{1}$ process. On the random matrix side, GOE (with DBM dynamics) is a representative of symmetric random matrices, and $F_{1}$ is the distribution of its rescaled largest eigenvalue as $N\to\infty$. However, it is not the Airy${}_{1}$ process that describes its dynamical extension. The interested reader is referred to [28, 36] for more insights about random matrices and stochastic growth models; therein a guide to the literature is provided. Also, two sets of summer school lecture notes on the subject are available [45, 34]. On the experimental side, Takeuchi has recently built a very nice experimental set-up in which the theoretical predictions (scaling exponents, one-point distributions and covariance) have been verified with very good agreement, both for curved limit shape [47] and flat limit shape [46]. Appendix A Definition of the limit processes In this short appendix we give the definitions of the Airy${}_{1}$ and Airy${}_{2}$ processes.
More information about these processes, including their properties and further references, can be found for example in the reviews [28, 25]. Definition 1. The Airy${}_{1}$ process ${\mathcal{A}}_{\rm 1}$ is the process with $m$-point joint distributions at $u_{1}<u_{2}<\ldots<u_{m}$ given by the Fredholm determinant $$\mathbbm{P}\Big{(}\bigcap_{k=1}^{m}\{{\mathcal{A}}_{\rm 1}(u_{k})\leq s_{k}\}\Big{)}=\det(\mathbbm{1}-\chi_{s}K_{1}\chi_{s})_{L^{2}(\{u_{1},\ldots,u_{m}\}\times\mathbb{R})}$$ (25) where $\chi_{s}(u_{k},x)=\mathbbm{1}(x>s_{k})$ and the kernel $K_{1}$ is given by $$K_{1}(u,s;u^{\prime},s^{\prime})=-\frac{1}{\sqrt{4\pi(u^{\prime}-u)}}\exp\left(-\frac{(s^{\prime}-s)^{2}}{4(u^{\prime}-u)}\right)\mathbf{1}(u<u^{\prime})+\mathrm{Ai}(s+s^{\prime}+(u^{\prime}-u)^{2})\exp\left((u^{\prime}-u)(s+s^{\prime})+\frac{2}{3}(u^{\prime}-u)^{3}\right).$$ (26) Definition 2. The Airy${}_{2}$ process ${\mathcal{A}}_{\rm 2}$ is the process with $m$-point joint distributions at $u_{1}<u_{2}<\ldots<u_{m}$ given by the Fredholm determinant $$\mathbbm{P}\Big{(}\bigcap_{k=1}^{m}\{{\mathcal{A}}_{\rm 2}(u_{k})\leq s_{k}\}\Big{)}=\det(\mathbbm{1}-\chi_{s}K_{2}\chi_{s})_{L^{2}(\{u_{1},\ldots,u_{m}\}\times\mathbb{R})}$$ (27) where $\chi_{s}(u_{k},x)=\mathbbm{1}(x>s_{k})$ and $K_{2}$ is the extended Airy kernel given by $$K_{2}(u,s;u^{\prime},s^{\prime})=\left\{\begin{array}[]{ll}\int_{\mathbb{R}_{+}}\mathrm{d}\lambda\,e^{(u^{\prime}-u)\lambda}\mathrm{Ai}(s+\lambda)\mathrm{Ai}(s^{\prime}+\lambda),&u\geq u^{\prime},\\ -\int_{\mathbb{R}_{-}}\mathrm{d}\lambda\,e^{(u^{\prime}-u)\lambda}\mathrm{Ai}(s+\lambda)\mathrm{Ai}(s^{\prime}+\lambda),&u<u^{\prime}.\end{array}\right.$$ (28) References [1] M. Adler, E. Nordenstam, and P. van Moerbeke, The Dyson Brownian minor process, arXiv:1006.2956 (2010). [2] G. Amir, I. Corwin, and J. Quastel, Probability distribution of the free energy of the continuum directed random polymer in $1+1$ dimensions, arXiv:1003.0443 (2010).
[3] J. Baik, P.L. Ferrari, and S. Péché, Limit process of stationary TASEP near the characteristic line, Comm. Pure Appl. Math. 63 (2010), 1017–1070. [4] J. Baik and E.M. Rains, Limiting distributions for a polynuclear growth model with external sources, J. Stat. Phys. 100 (2000), 523–542. [5] J. Baik and E.M. Rains, Symmetrized random permutations, Random Matrix Models and Their Applications, vol. 40, Cambridge University Press, 2001, pp. 1–19. [6] G. Ben Arous and I. Corwin, Current fluctuations for TASEP: a proof of the Prähofer-Spohn conjecture, arXiv:0905.2993; to appear in Ann. of Probab. (2009). [7] L. Bertini and G. Giacomin, Stochastic Burgers and KPZ equations from particle system, Comm. Math. Phys. 183 (1997), 571–607. [8] F. Bornemann, P.L. Ferrari, and M. Prähofer, The Airy${}_{1}$ process is not the limit of the largest eigenvalue in GOE matrix diffusion, J. Stat. Phys. 133 (2008), 405–415. [9] A. Borodin and P.L. Ferrari, Anisotropic growth of random surfaces in $2+1$ dimensions, arXiv:0804.3035 (2008). [10] A. Borodin and P.L. Ferrari, Large time asymptotics of growth models on space-like paths I: PushASEP, Electron. J. Probab. 13 (2008), 1380––1418. [11] A. Borodin and P.L. Ferrari, Anisotropic KPZ growth in $2+1$ dimensions: fluctuations and covariance structure, J. Stat. Mech. (2009), P02009. [12] A. Borodin, P.L. Ferrari, M. Prähofer, and T. Sasamoto, Fluctuation properties of the TASEP with periodic initial configuration, J. Stat. Phys. 129 (2007), 1055–1080. [13] A. Borodin, P.L. Ferrari, and T. Sasamoto, Large time asymptotics of growth models on space-like paths II: PNG and parallel TASEP, Comm. Math. Phys. 283 (2008), 417–449. [14] A. Borodin, P.L. Ferrari, and T. Sasamoto, Transition between Airy${}_{1}$ and Airy${}_{2}$ processes and TASEP fluctuations, Comm. Pure Appl. Math. 61 (2008), 1603–1629. [15] A. Borodin, P.L. Ferrari, and T. Sasamoto, Two speed TASEP, J. Stat. Phys. 137 (2009), 936–977. [16] A. Borodin and G. 
Olshanski, Stochastic dynamics related to Plancherel measure, AMS Transl.: Representation Theory, Dynamical Systems, and Asymptotic Combinatorics (V. Kaimanovich and A. Lodkin, eds.), 2006, pp. 9–22. [17] I. Corwin, P.L. Ferrari, and S. Péché, Universality of slow decorrelation in KPZ models, preprint: arXiv:1001.5345 (2010). [18] B. Derrida, S.A. Janowsky, J.L. Lebowitz, and E.R. Speer, Exact solution of the totally asymmetric simple exclusion process: shock profiles, J. Stat. Phys. 73 (1993), 813–842. [19] F.J. Dyson, A Brownian-motion model for the eigenvalues of a random matrix, J. Math. Phys. 3 (1962), 1191–1198. [20] S.F. Edwards and D.R. Wilkinson, The surface statistics of a granular aggregate, Proc. R. Soc. A 381 (1982), 17–31. [21] L.C. Evans, Partial Differential Equations, Providence, RI, 1998. [22] P.A. Ferrari, Shock fluctuations in asymmetric simple exclusion, Probab. Theory Relat. Fields 91 (1992), 81–101. [23] P.L. Ferrari, Polynuclear growth on a flat substrate and edge scaling of GOE eigenvalues, Comm. Math. Phys. 252 (2004), 77–109. [24] P.L. Ferrari, Slow decorrelations in KPZ growth, J. Stat. Mech. (2008), P07022. [25] P.L. Ferrari, The universal Airy${}_{1}$ and Airy${}_{2}$ processes in the Totally Asymmetric Simple Exclusion Process, Integrable Systems and Random Matrices: In Honor of Percy Deift (J. Baik, T. Kriecherbauer, L-C. Li, K. McLaughlin, and C. Tomei, eds.), Contemporary Math., Amer. Math. Soc., 2008, pp. 321–332. [26] P.L. Ferrari and R. Frings, On the partial connection between random matrices and interacting particle systems, preprint: arXiv:1006.3946 (2010). [27] P.L. Ferrari and H. Spohn, Scaling limit for the space-time covariance of the stationary totally asymmetric simple exclusion process, Comm. Math. Phys. 265 (2006), 1–44. [28] P.L. Ferrari and H. Spohn, Random Growth Models, arXiv:1003.0881 (2010). [29] D. Forster, D.R. Nelson, and M.J. Stephen, Large-distance and long-time properties of a randomly stirred fluid, Phys. 
Rev. A 16 (1977), 732–749. [30] T. Imamura and T. Sasamoto, Dynamical properties of a tagged particle in the totally asymmetric simple exclusion process with the step-type initial condition, J. Stat. Phys. 128 (2007), 799–846. [31] K. Johansson, Shape fluctuations and random matrices, Comm. Math. Phys. 209 (2000), 437–476. [32] K. Johansson, Discrete polynuclear growth and determinantal processes, Comm. Math. Phys. 242 (2003), 277–329. [33] K. Johansson, The arctic circle boundary and the Airy process, Ann. Probab. 33 (2005), 1–30. [34] K. Johansson, Random matrices and determinantal processes, Mathematical Statistical Physics, Session LXXXIII: Lecture Notes of the Les Houches Summer School 2005 (A. Bovier, F. Dunlop, A. van Enter, F. den Hollander, and J. Dalibard, eds.), Elsevier Science, 2006, pp. 1–56. [35] K. Kardar, G. Parisi, and Y.Z. Zhang, Dynamic scaling of growing interfaces, Phys. Rev. Lett. 56 (1986), 889–892. [36] T. Kriecherbauer and J. Krug, A pedestrian’s view on interacting particle systems, KPZ universality, and random matrices, J. Phys. A: Math. Theor. 43 (2010), 403001. [37] T.M. Liggett, Coupling the simple exclusion process, Ann. Probab. 4 (1976), 339–356. [38] M. Prähofer and H. Spohn, An Exactly Solved Model of Three Dimensional Surface Growth in the Anisotropic KPZ Regime, J. Stat. Phys. 88 (1997), 999–1012. [39] M. Prähofer and H. Spohn, Universal distributions for growth processes in $1+1$ dimensions and random matrices, Phys. Rev. Lett. 84 (2000), 4882–4885. [40] M. Prähofer and H. Spohn, Current fluctuations for the totally asymmetric simple exclusion process, In and out of equilibrium (V. Sidoravicius, ed.), Progress in Probability, Birkhäuser, 2002. [41] M. Prähofer and H. Spohn, Scale invariance of the PNG droplet and the Airy process, J. Stat. Phys. 108 (2002), 1071–1106. [42] T. Sasamoto, Spatial correlations of the 1D KPZ surface on a flat substrate, J. Phys. A 38 (2005), L549–L556. [43] T. Sasamoto and H. 
Spohn, The 1+1-dimensional Kardar-Parisi-Zhang equation and its universality class, Contribution to StatPhys24 special issue; in preparation (2010). [44] T. Sasamoto and H. Spohn, Universality of the one-dimensional KPZ equation, arXiv:1002.1883 (2010). [45] H. Spohn, Exact solutions for KPZ-type growth processes, random matrices, and equilibrium shapes of crystals, Physica A 369 (2006), 71–99. [46] K. Takeuchi, private communication, (2010). [47] K. Takeuchi and M. Sano, Growing Interfaces of Liquid Crystal Turbulence: Universal Scaling and Fluctuations, Phys. Rev. Lett. 104 (2010), 230601. [48] L.-H. Tang, B.M. Forrest, and D.E. Wolf, Kinetic surface roughening. II. Hypercube stacking models, Phys. Rev. A 45 (1992), 7162–7169. [49] C.A. Tracy and H. Widom, Level-spacing distributions and the Airy kernel, Comm. Math. Phys. 159 (1994), 151–174. [50] C.A. Tracy and H. Widom, On orthogonal and symplectic matrix ensembles, Comm. Math. Phys. 177 (1996), 727–754. [51] H. van Beijeren, Fluctuations in the Motions of Mass and of Patterns in One-Dimensional Driven Diffusive Systems, J. Stat. Phys. 63 (1991), 47–58. [52] H. van Beijeren, R. Kutner, and H. Spohn, Excess noise for driven diffusive systems, Phys. Rev. Lett. 54 (1985), 2026–2029. [53] S.R.S. Varadhan, Large deviations for the asymmetric simple exclusion process, Adv. Stud. Pure Math. 39 (2004), 1–27. [54] D.E. Wolf, Kinetic roughening of vicinal surfaces, Phys. Rev. Lett. 67 (1991), 1783–1786.
Learning metrics for persistence-based summaries and applications for graph classification Qi Zhao${}^{*}$    Yusu Wang Computer Science and Engineering Department, The Ohio State University, Columbus, OH 43221, USA. Abstract Recently a new feature representation and data analysis methodology based on a topological tool called persistent homology (and its corresponding persistence diagram summary) has started to gain momentum. A series of methods have been developed to map a persistence diagram to a vector representation so as to facilitate the downstream use of machine learning tools, and in these approaches the importance (weight) of different persistence features is often pre-set. In practice, however, the choice of the weight-function should depend on the nature of the specific type of data one considers, and it is thus highly desirable to learn a best weight-function (and thus a metric for persistence diagrams) from labelled data. We study this problem and develop a new weighted kernel, called WKPI, for persistence summaries, as well as an optimization framework to learn a good metric for persistence summaries. Both our kernel and optimization problem have nice properties. We further apply the learned kernel to the challenging task of graph classification, and show that our WKPI-based classification framework obtains similar or (sometimes significantly) better results than the best results from a range of previous graph classification frameworks on a collection of benchmark datasets. 1 Introduction In recent years a new data analysis methodology based on a topological tool called persistent homology has started to gain momentum in the learning community.
Persistent homology is one of the most important developments in the field of topological data analysis in the past two decades, and there have been fundamental developments both on the theoretical front (e.g, [20, 9, 11, 7, 12]) and on algorithms / efficient implementations (e.g, [38, 4, 13, 17, 26, 3]). At a high level, given a domain $X$ with a function $f:X\to{\mathbb{R}}$ defined on it, persistent homology provides a way to summarize “features” of $X$ across multiple scales simultaneously in a single summary called the persistence diagram (see the lower left picture in Figure 1). A persistence diagram consists of a multiset of points in the plane, where each point $p=(b,d)$ intuitively corresponds to the birth-time ($b$) and death-time ($d$) of some (topological) feature of $X$ w.r.t. $f$. Hence it provides a concise representation of $X$, capturing its multi-scale features. Furthermore, the persistent homology framework can be applied to complex data (e.g, 3D shapes, or graphs), and different summaries can be constructed by putting different descriptor functions on the input data. For these reasons, a persistence-based feature vectorization and data analysis framework (see Figure 1) has recently attracted much attention. Specifically, given a collection of objects, say a set of graphs modeling chemical compounds, one can first convert each object to a persistence-based representation. The input data can now be viewed as a set of points in a certain persistence-based feature space. Equipping this space with an appropriate distance or kernel, one can then perform downstream data analysis tasks, such as clustering or classification. Unfortunately, the original distances for persistence diagram summaries do not lend themselves easily to machine learning tasks.
Hence in the last few years, starting from the persistence landscape [6], a series of methods have been developed to map a persistence diagram to a vector representation to facilitate machine learning tools. Recent ones include the Persistence Scale-Space kernel [37], Persistence Images [1], the Persistence Weighted Gaussian kernel (PWGK) [28], the Sliced Wasserstein kernel [10], and the Persistence Fisher kernel [29]. In these approaches, when computing the distance or kernel between persistence summaries, the importance (weight) of different persistence features is often pre-determined (e.g, either with uniform weights, or weighted by persistence). In persistence images [1] and PWGK [28], the importance of having a weight-function for the birth-death plane (containing the persistence points) has been emphasized and explicitly included in the formulation of their kernels. However, before using these kernels, the weight-function needs to be pre-set. On the other hand, as recognized by [23], the choice of the weight-function should depend on the nature of the specific type of data one considers. For example, for the persistence diagrams computed from atomic configurations of molecules, features with small persistence can capture the local packing patterns, which are of utmost importance and thus should be given a larger weight; while in many other scenarios, small persistence typically corresponds to noise of low importance. In general, however, researchers performing data analysis tasks may not have such prior insights on the input data. It is thus natural and highly desirable to learn a best weight-function from labelled data. Our work and contributions. In this paper, we study the problem of learning an appropriate metric (kernel) for persistence-based summaries from labelled data, as well as applying the learned kernel to the challenging graph classification task. Our contributions are two-fold.
Metric learning for persistence summaries: We propose a new weighted kernel (called WKPI) for persistence summaries, based on the persistence image representation. Our new WKPI kernel is positive semi-definite, and its induced distance has a stability property. The weight-function used in this kernel directly encodes the importance of different locations in the persistence diagram. We next model the metric learning problem for persistence summaries as the problem of learning (the parameters of) this weight-function from a certain function class. In particular, we develop a cost function, and the metric-learning is then formulated as an optimization problem. Interestingly, we show that this cost function has a simple matrix view, which helps both to conceptually clarify its meaning and to simplify the implementation of its optimization. Graph classification application: Given a set of objects with class labels, we first learn a best WKPI-kernel as described above, and then use the learned WKPI to further classify objects. We implemented this WKPI-classification framework and applied it to a range of graph data sets. Graph classification is an important problem, and there is a large literature on developing effective graph representations (e.g, [22, 36, 2, 27, 39, 42, 34]) and graph neural networks (e.g, [43, 35, 41, 40, 30]) for classifying graphs. The problem is challenging as graph data are less structured and more complex. We run our WKPI-classification framework on a range of benchmark graph data sets as well as on new data sets (modeling neuron morphologies). We show that the learned WKPI is consistently much more effective than other persistence-based kernels.
Most importantly, when compared with several existing state-of-the-art graph classification frameworks, our framework shows similar or (sometimes significantly) better performance in almost all cases than the best results by existing approaches. (Several datasets are attributed graphs, and our new framework achieves these results without even using those attributes.) Given the importance of the graph classification problem, we believe that this is an independent and important contribution of our work. We note that [23] is the first to recognize the importance of using labelled data to learn a task-optimal representation of topological signatures. They developed an end-to-end deep neural network for this purpose, using a novel and elegant design of the input layer to implicitly learn a task-specific representation. We instead explicitly formulate the metric-learning problem for persistence summaries, and decouple the metric-learning (which can also be viewed as representation-learning) component from the downstream data analysis tasks. Also, as shown in Section 4, our WKPI-classification framework (using SVM) achieves better results on graph classification datasets. 2 Persistence-based framework We first give an informal description of persistent homology below. See [19] for a more detailed exposition on the subject. Suppose we are given a shape $X$ (in our later graph classification application, $X$ is a graph). Imagine we inspect $X$ through a filtration of $X$, which is a sequence of growing subsets of $X$: $X_{1}\subseteq X_{2}\subseteq\cdots\subseteq X_{n}=X$. As we scan $X$, sometimes a new feature appears in $X_{i}$, and sometimes an existing feature disappears upon entering $X_{j}$.
Using the topological objects called homology classes to describe these features (which intuitively capture components, independent loops / voids, and their higher-dimensional counterparts), the birth and death of topological features can then be captured by persistent homology, in the form of a persistence diagram $\mathrm{Dg}X$. Specifically, $\mathrm{Dg}X$ consists of a multiset of points in the plane (which we call the birth-death plane ${\mathbb{R}}^{2}$), where each point $(b,d)$ in it, called a persistence-point, indicates that a certain homological feature is created upon entering $X_{b}$ and destroyed upon entering $X_{d}$. A common way to obtain a meaningful filtration of $X$ is via the sublevel-set filtration induced by a descriptor function $f$ on $X$. More specifically, given a function $f:X\to{\mathbb{R}}$, let $X_{\leq a}:=\{x\in X\mid f(x)\leq a\}$ be its sublevel-set at $a$. Let $a_{1}<a_{2}<\cdots<a_{n}$ be $n$ real values. The sublevel-set filtration w.r.t. $f$ is $X_{\leq a_{1}}\subseteq X_{\leq a_{2}}\subseteq\cdots\subseteq X_{\leq a_{n}}$, and its persistence diagram is denoted by $\mathrm{Dg}f$. Each persistence-point $p=(a_{i},a_{j})\in\mathrm{Dg}f$ records the function values at which some topological feature is created (when entering $X_{\leq a_{i}}$) and destroyed (in $X_{\leq a_{j}}$), and the persistence of this feature is its life-time $\mathrm{pers}(p)=|a_{j}-a_{i}|$. See Figure 2 (a) for a simple example where $X={\mathbb{R}}$. If one sweeps $X$ top-down, in decreasing function values, one obtains the persistence diagram induced by the super-levelset filtration of $X$ w.r.t. $f$ in an analogous way. Finally, if one tracks the changes of topological features in the levelset $f^{-1}(a)$, one obtains the so-called levelset zigzag persistence [8]. Persistent homology provides a generic yet powerful way to summarize a space $X$.
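As a concrete illustration of the sublevel-set filtration, the 0-dimensional persistence (connected components) of a vertex function on a graph can be computed with a union-find structure and the elder rule. This is a minimal sketch for intuition, not the authors' implementation:

```python
def sublevel_persistence_0d(values, edges):
    """0-dimensional persistence pairs of the sublevel-set filtration of a
    vertex function on a graph, via union-find and the elder rule: when two
    components merge, the younger one (larger birth value) dies."""
    order = sorted(range(len(values)), key=lambda v: values[v])
    adj = {v: [] for v in range(len(values))}
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)

    parent, birth, pairs = {}, {}, []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for v in order:                    # add vertices in increasing f-value
        parent[v] = v
        birth[v] = values[v]
        for w in adj[v]:
            if w not in parent:        # neighbour not yet in the sublevel set
                continue
            rv, rw = find(v), find(w)
            if rv == rw:
                continue
            if birth[rv] > birth[rw]:  # elder rule: keep the older root
                rv, rw = rw, rv
            if birth[rw] < values[v]:  # record only positive-persistence pairs
                pairs.append((birth[rw], values[v]))
            parent[rw] = rv
    roots = {find(v) for v in parent}
    # components alive at the end give essential classes (death = infinity)
    return pairs, sorted(birth[r] for r in roots)
```

For a path graph with vertex values $(1,3,0,2,4)$, the local minimum at value $1$ merges into the older component (born at $0$) at value $3$, giving the single finite pair $(1,3)$ plus one essential class born at $0$.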
Even when the space $X$ is complex, say a graph, we can still map it to a persistence diagram via appropriate descriptor functions. Furthermore, a different descriptor function $f$ provides a different perspective on $X$, and its persistence diagram $\mathrm{Dg}f$ summarizes features of $X$ at all scales w.r.t. this perspective. If we are given a collection of shapes $\Xi$, we can compute a persistence diagram $\mathrm{Dg}X$ for each $X\in\Xi$, which maps the set $\Xi$ to a set of points in the space of persistence diagrams. There are natural distances defined for persistence diagrams, including the bottleneck distance and the Wasserstein distance, both of which have been well studied (e.g, stability under these distances [14, 15, 12]), with efficient implementations available [24, 25]. However, to facilitate downstream machine learning / data analysis tasks, it is desirable to further map the persistence diagrams to another representation (e.g, in a Hilbert space). Below we introduce one such representation, called the persistence image [1], as our new kernel is based on it. Persistence images. Let $A$ be a persistence diagram (containing a multiset of persistence-points). Set $T:{\mathbb{R}}^{2}\to{\mathbb{R}}^{2}$ to be the linear transformation where for each $(x,y)\in{\mathbb{R}}^{2}$, $T(x,y)=(x,y-x)$. Let $T(A)$ be the transformed diagram of $A$. Let ${\phi}_{u}:{\mathbb{R}}^{2}\rightarrow{\mathbb{R}}$ be a differentiable probability distribution with mean $u\in{\mathbb{R}}^{2}$ (e.g, the normalized Gaussian where for any $z\in{\mathbb{R}}^{2}$, $\phi_{u}(z)=\frac{1}{2\pi\tau^{2}}e^{-\frac{\|z-u\|^{2}}{2\tau^{2}}}$). Definition 2.1 ([1]) Let ${\alpha}:{\mathbb{R}}^{2}\to{\mathbb{R}}$ be a non-negative weight-function for the persistence plane ${\mathbb{R}}^{2}$. Given a persistence diagram $A$, its persistence surface ${\rho}_{A}:{\mathbb{R}}^{2}\to{\mathbb{R}}$ (w.r.t.
${\alpha}$) is defined as: for any $z\in{\mathbb{R}}^{2}$, $\rho_{A}(z)=\sum_{u\in T(A)}{\alpha}(u)\phi_{u}(z).$ See Figure 2 (b) for an example. Adams et al. further “discretize” the 2D persistence surface to map it to a finite vector. In particular, fix a grid on a rectangular region in the plane with a collection ${\mathrm{P}}$ of $N$ rectangles (pixels). Definition 2.2 ([1]) Given a persistence diagram $A$, its persistence image ${\mathrm{PI}}_{A}=\{{\mathrm{PI}}[{\mathrm{p}}]\}_{{\mathrm{p}}\in{\mathrm{P}}}$ consists of $N$ numbers, one for each pixel ${\mathrm{p}}$ in the grid ${\mathrm{P}}$, with ${\mathrm{PI}}[{\mathrm{p}}]:=\iint_{\mathrm{p}}{\rho}_{A}\,dy\,dx.$ The persistence image can be viewed as a vector in ${\mathbb{R}}^{N}$. One can then compute the distance between two persistence diagrams $A_{1}$ and $A_{2}$ as the $L_{2}$-distance $\|{\mathrm{PI}}_{1}-{\mathrm{PI}}_{2}\|_{2}$ between their persistence images (vectors) ${\mathrm{PI}}_{1}$ and ${\mathrm{PI}}_{2}$. Persistence images have several nice properties, including stability guarantees; see [1] for more details. 3 Metric learning frameworks Suppose we are given a set of $n$ objects $\Xi$ (sampled from a hidden data space $\mathcal{S}$), classified into $k$ classes. We want to use these labelled data to learn a good distance for (persistence-image representations of) objects from $\Xi$, which hopefully is more appropriate for classifying objects in the data space $\mathcal{S}$. To do so, below we propose a new persistence-based kernel for persistence images, and then formulate an optimization problem to learn the best weight-function so as to obtain a good distance metric for $\Xi$ (and the data space $\mathcal{S}$). 3.1 Weighted persistence image kernel (WKPI) From now on, we fix the grid ${\mathrm{P}}$ (of size $N$) used to generate persistence images (so a persistence image is a vector in ${\mathbb{R}}^{N}$).
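The construction of Definitions 2.1–2.2 can be sketched in a few lines. This is an approximation: the per-pixel integral is evaluated by the midpoint rule (surface value at the pixel center times pixel area) rather than exactly, and the grid extent, width `tau`, and constant weight $\alpha\equiv 1$ are illustrative choices, not prescribed by the paper:

```python
import numpy as np

def persistence_image(diagram, grid_size=25, extent=(0.0, 1.0, 0.0, 1.0),
                      tau=0.08, alpha=None):
    """Approximate persistence image of a diagram (list of (birth, death)
    pairs).  Each point is transformed by T(b, d) = (b, d - b), a Gaussian
    phi_u of width tau is centred there, and pixel integrals are
    approximated by the midpoint rule."""
    if alpha is None:
        alpha = lambda u: 1.0          # constant weight-function
    x0, x1, y0, y1 = extent
    hx, hy = (x1 - x0) / grid_size, (y1 - y0) / grid_size
    xs = x0 + hx * (np.arange(grid_size) + 0.5)   # pixel centres
    ys = y0 + hy * (np.arange(grid_size) + 0.5)
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    surface = np.zeros_like(X)
    for b, d in diagram:
        u = (b, d - b)                 # T(b, d) = (b, d - b)
        g = np.exp(-((X - u[0]) ** 2 + (Y - u[1]) ** 2) / (2 * tau ** 2))
        surface += alpha(u) * g / (2 * np.pi * tau ** 2)
    return surface * hx * hy           # midpoint-rule pixel integrals
```

With the constant weight $\alpha\equiv 1$, the image of a single persistence point whose Gaussian lies well inside the grid sums to approximately $1$, reflecting that each transformed point contributes unit mass to the surface.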
Let $p_{s}$ denote the center of the $s$-th pixel ${\mathrm{p}}_{s}$ in ${\mathrm{P}}$, for $s\in[1,N]$. We now introduce a new kernel for persistence images. A weight-function refers to a non-negative real-valued function defined on ${\mathbb{R}}^{2}$. Definition 3.1 Let ${\omega}:{\mathbb{R}}^{2}\to{\mathbb{R}}$ be a weight-function. Given two persistence images ${\mathrm{PI}}$ and ${\mathrm{PI}}^{\prime}$, the (${\omega}$-)weighted persistence image kernel (WKPI) is defined as: $${k_{w}}({\mathrm{PI}},{\mathrm{PI}}^{\prime}):=\sum_{s=1}^{N}{\omega}(p_{s})e^{-\frac{({\mathrm{PI}}(s)-{\mathrm{PI}}^{\prime}(s))^{2}}{2\sigma^{2}}}.$$ (1) Remark 1: We could use persistence surfaces (instead of persistence images) to define the kernel (with the summation replaced by an integral). Since for computational purposes one still needs to approximate the integral in the kernel via some discretization, we choose to present our work using persistence images directly. Our Theorems 3.2 and 3.4 still hold (with a slightly different stability bound) if we use the kernel defined for persistence surfaces. Remark 2: One can choose the weight-function from different function classes. Two popular choices are: a mixture of $m$ 2D Gaussians, and degree-$d$ polynomials in two variables. Remark 3: There are other natural choices for defining a weighted kernel for persistence images. For example, we could use $k({\mathrm{PI}},{\mathrm{PI}}^{\prime})=\sum_{s=1}^{N}e^{-\frac{{\omega}(p_{s})({\mathrm{PI}}(s)-{\mathrm{PI}}^{\prime}(s))^{2}}{2\sigma^{2}}}$, which we refer to as altWKPI. Alternatively, one could use the weight function used in the persistence weighted Gaussian kernel (PWGK) [28] directly. Indeed, we have implemented all these choices, and our experiments show that our WKPI kernel leads to better results than these choices for all datasets (see Appendix B.4).
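Eq. (1) translates directly into code. In this sketch, `weights` holds the values $\omega(p_s)$ at the pixel centers, and the distance is the standard kernel-induced one (as in Definition 3.3 below):

```python
import numpy as np

def wkpi_kernel(PI1, PI2, weights, sigma):
    # Eq. (1): k_w(PI, PI') = sum_s w(p_s) * exp(-(PI(s)-PI'(s))^2 / (2 sigma^2))
    return np.sum(weights * np.exp(-((PI1 - PI2) ** 2) / (2.0 * sigma ** 2)))

def wkpi_distance(PI1, PI2, weights, sigma):
    # Kernel-induced distance: sqrt(k(x,x) + k(y,y) - 2 k(x,y)),
    # clamped at 0 to guard against floating-point round-off.
    d2 = (wkpi_kernel(PI1, PI1, weights, sigma)
          + wkpi_kernel(PI2, PI2, weights, sigma)
          - 2.0 * wkpi_kernel(PI1, PI2, weights, sigma))
    return np.sqrt(max(d2, 0.0))
```

Note that $k_w({\mathrm{PI}},{\mathrm{PI}})=\sum_s \omega(p_s)$ for any image, so the self-kernel depends only on the weight-function, and the distance of an image to itself is $0$.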
Furthermore, we note that the square of our WKPI-distance depends on ${\omega}$ linearly, which is much simpler when computing gradients (via the chain rule) for our cost function later. In addition, note that the PWGK kernel [28] contains cross terms ${\omega}(x)\cdot{\omega}(y)$ in its formulation, meaning that there is a quadratic number of terms (w.r.t the number of persistence points) to calculate in the kernel. This makes it more expensive to compute and learn for complex objects (e.g, for the neuron data set, a single neuron tree can produce a persistence diagram with hundreds of persistence points). The simple proof of Theorem 3.2 is in Appendix A.1. Theorem 3.2 The WKPI kernel is positive semi-definite. By the above result, the WKPI kernel gives rise to a Hilbert space. We can now introduce the following WKPI-distance, induced by the inner product on this Hilbert space. Definition 3.3 Given two persistence diagrams $A$ and $B$, let ${\mathrm{PI}}_{A}$ and ${\mathrm{PI}}_{B}$ be their corresponding persistence images. Given a weight-function $\omega:{\mathbb{R}}^{2}\to{\mathbb{R}}$, the ($\omega$-weighted) WKPI-distance is defined as: $${\mathrm{D_{\omega}}}(A,B):=\sqrt{{k_{w}}({\mathrm{PI}}_{A},{\mathrm{PI}}_{A})+{k_{w}}({\mathrm{PI}}_{B},{\mathrm{PI}}_{B})-2{k_{w}}({\mathrm{PI}}_{A},{\mathrm{PI}}_{B})}.$$ Stability of WKPI-distance. Given two persistence diagrams $A$ and $B$, two traditional distances between them are the bottleneck distance $d_{B}(A,B)$ and the $p$-th Wasserstein distance $d_{W,p}(A,B)$. The stability of these two distances w.r.t. changes of the input objects, or of functions defined on them, has been studied [14, 15, 12]. Similar to the stability study for persistence images, below we prove that the WKPI-distance is stable w.r.t. small perturbations of persistence diagrams as measured by $d_{W,1}$.
(Intuitively, view two persistence diagrams $A$ and $B$ as two (appropriate) measures; $d_{W,1}(A,B)$ is then the “earth-mover” distance to convert the measure corresponding to $A$ to that for $B$.) To simplify the presentation of Theorem 3.4, we use unweighted persistence images w.r.t. the Gaussian, meaning that in Definition 2.1, (1) the weight function ${\alpha}$ is the constant function ${\alpha}=1$; and (2) the distribution $\phi_{u}$ is the Gaussian $\phi_{u}(z)=\frac{1}{2\pi\tau^{2}}e^{-\frac{\|z-u\|^{2}}{2\tau^{2}}}$. The proof of the following theorem can be found in Appendix A.2. Theorem 3.4 Given a weight-function ${\omega}:{\mathbb{R}}^{2}\to{\mathbb{R}}$, set $c_{w}=\|{\omega}\|_{\infty}=\sup_{z\in{\mathbb{R}}^{2}}{\omega}(z)$. Given two persistence diagrams $A$ and $B$, with corresponding persistence images ${\mathrm{PI}}_{A}$ and ${\mathrm{PI}}_{B}$, we have that: $${\mathrm{D_{\omega}}}(A,B)\leq\sqrt{\frac{20c_{w}}{\pi}}\cdot\frac{1}{\sigma\cdot\tau}\cdot d_{W,1}(A,B),$$ where $\sigma$ is the width of the Gaussian used to define our WKPI kernel (Definition 3.1), and $\tau$ is that of the Gaussian $\phi_{u}$ used to define persistence images (Definition 2.1). Remarks: We can obtain a more general bound for the case where the distribution $\phi_{u}$ is not Gaussian. Furthermore, we can obtain a similar bound when our WKPI-kernel and its induced WKPI-distance are defined using persistence surfaces instead of persistence images. We omit these from this short version of the paper. 3.2 Optimization problem for metric-learning Suppose we are given a collection of objects $\Xi=\{X_{1},\ldots,X_{n}\}$ (sampled from some hidden data space $\mathcal{S}$), already classified (labeled) into $k$ classes ${\mathcal{C}}_{1},\ldots,{\mathcal{C}}_{k}$. In what follows, we say that $i\in{\mathcal{C}}_{j}$ if $X_{i}$ has class-label $j$. We first compute the persistence diagram $A_{i}$ for each object $X_{i}\in\Xi$.
(The precise filtration we use to do so depends on the specific type of objects; in Section 4 we describe the filtrations used for graph data.) Let $\{A_{1},\ldots,A_{n}\}$ be the resulting set of persistence diagrams. Given a weight-function ${\omega}$, its induced WKPI-distance between $A_{i}$ and $A_{j}$ can also be thought of as a distance for the original objects $X_{i}$ and $X_{j}$; that is, we can set ${\mathrm{D_{\omega}}}(X_{i},X_{j}):={\mathrm{D_{\omega}}}(A_{i},A_{j})$. Our goal is to learn a good distance metric for the data space $\mathcal{S}$ (from which $\Xi$ is sampled) from the labels. We formulate this as learning a best weight-function ${\omega}^{*}$ so that its induced WKPI-distance fits the class-labels of the $X_{i}$’s best. Specifically, for any $t\in[1,k]$, set: $${cost}_{\omega}(t,t)=\sum_{i,j\in{\mathcal{C}}_{t}}{\mathrm{D_{\omega}}}^{2}(A_{i},A_{j});\qquad{cost}_{\omega}(t,\cdot)=\sum_{i\in{\mathcal{C}}_{t},\,j\in[1,n]}{\mathrm{D_{\omega}}}^{2}(A_{i},A_{j}).$$ Intuitively, ${cost}_{\omega}(t,t)$ is the total in-class (squared) distance for ${\mathcal{C}}_{t}$, while ${cost}_{\omega}(t,\cdot)$ is the total distance from objects in class ${\mathcal{C}}_{t}$ to all objects in $\Xi$. A good metric should lead to relatively small distances between objects from the same class, but large distances between objects from different classes.
We thus propose the following optimization problem: Definition 3.5 (Optimization problem) Given a weight-function ${\omega}:{\mathbb{R}}^{2}\to{\mathbb{R}}$, the total-cost of its induced WKPI-distance over $\Xi$ is defined as: ${TC}({\omega}):=\sum_{t=1}^{k}\frac{{cost}_{\omega}(t,t)}{{cost}_{\omega}(t,\cdot)}.$ The optimal distance problem aims to find the best weight-function ${\omega}^{*}$ from a certain function class $\mathcal{F}$ so that the total-cost is minimized; that is: $${TC}^{*}=\min_{{\omega}\in\mathcal{F}}{TC}({\omega});\quad\text{and}\quad{\omega}^{*}=\mathrm{argmin}_{{\omega}\in\mathcal{F}}{TC}({\omega}).$$ Matrix view of optimization problem. We observe that our cost function can be re-formulated in matrix form. This provides a graph-Laplacian perspective on the cost function, and it simplifies the implementation of our optimization problem, as several programming languages popular in machine learning (e.g., Python and Matlab) handle matrix operations more efficiently than loops. More precisely, recall that our input is a set of $n$ objects with labels from $k$ classes.
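A direct loop-based evaluation of the total-cost of Definition 3.5 can be sketched as follows (names and array conventions are ours; the matrix form discussed next is the faster variant):

```python
import numpy as np

def total_cost(D, labels):
    """TC(w) = sum_t cost_w(t,t) / cost_w(t,.) as in Definition 3.5."""
    labels = np.asarray(labels)
    tc = 0.0
    for t in np.unique(labels):
        in_t = np.flatnonzero(labels == t)
        cost_tt = np.sum(D[np.ix_(in_t, in_t)] ** 2)
        cost_t_all = np.sum(D[in_t, :] ** 2)
        tc += cost_tt / cost_t_all
    return tc
```

Since each ratio lies in $[0,1]$, $TC(\omega)$ always lies between $0$ and $k$.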
We set up the following matrices: $$\displaystyle\Lambda=\big{[}\Lambda_{ij}\big{]}_{n\times n},\quad\text{where}~\Lambda_{ij}={\mathrm{D_{\omega}}}^{2}(A_{i},A_{j})~\text{for}~i,j\in[1,n];$$ $$\displaystyle G=\big{[}g_{ij}\big{]}_{n\times n},\quad\text{where}~g_{ij}=\begin{cases}\sum_{\ell=1}^{n}\Lambda_{i\ell}&\text{if}~i=j\\ 0&\text{if}~i\neq j\end{cases}$$ $$\displaystyle L=G-\Lambda$$ $$\displaystyle H=\big{[}h_{ti}\big{]}_{k\times n},\quad\text{where}~h_{ti}=\begin{cases}\frac{1}{\sqrt{{cost}_{\omega}(t,\cdot)}}&i\in{\mathcal{C}}_{t}\\ 0&\text{otherwise}\end{cases}$$ $$\displaystyle h_{t}=\big{[}h_{t1},h_{t2},\ldots,h_{tn}\big{]}~\text{is the}~t\text{-th row vector of}~H.$$ If we view $\Lambda$ as a (squared) distance matrix of the objects $\{X_{1},\ldots,X_{n}\}$, $L$ is then its Laplacian matrix. The technical proof of the following main theorem can be found in Appendix A.3. Theorem 3.6 The total-cost can also be represented by ${TC}({\omega})=k-{\mathrm{Tr}}(HLH^{T})$, where ${\mathrm{Tr}}(\cdot)$ is the trace of a matrix. Furthermore, $HGH^{T}=\mathbf{I}$, where $\mathbf{I}$ is the $k\times k$ identity matrix. Note that all matrices, $L,G,\Lambda,$ and $H$, depend on the (parameters of the) weight-function ${\omega}$, and in the following corollary of Theorem 3.6, we use the subscript ${\omega}$ to emphasize this dependence. Corollary 3.7 The optimal distance problem is equivalent to $$\min_{{\omega}}\big{(}k-{\mathrm{Tr}}(H_{\omega}L_{\omega}H_{\omega}^{T})\big{)},\quad\text{subject to}~~H_{\omega}G_{\omega}H_{\omega}^{T}=\mathbf{I}.$$ Solving the optimization problem. In our implementation, we use (stochastic) gradient descent to find a (locally) optimal weight-function ${\omega}^{*}$ for the minimization problem.
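The matrix construction above and the identities of Theorem 3.6 can be sketched and checked numerically. The function name and input layout are ours; `D` is again a pairwise WKPI-distance matrix.

```python
import numpy as np

def matrix_total_cost(D, labels):
    """TC(w) computed as k - Tr(H L H^T) via the matrices of Section 3.2."""
    labels = np.asarray(labels)
    classes = np.unique(labels)
    k, n = len(classes), len(labels)
    Lam = D ** 2                          # Lambda: squared-distance matrix
    G = np.diag(Lam.sum(axis=1))          # diagonal "degree" matrix
    L = G - Lam                           # Laplacian of the distance graph
    H = np.zeros((k, n))
    for r, t in enumerate(classes):
        in_t = labels == t
        H[r, in_t] = 1.0 / np.sqrt(np.sum(Lam[in_t, :]))   # 1/sqrt(cost_w(t,.))
    assert np.allclose(H @ G @ H.T, np.eye(k))             # Theorem 3.6 constraint
    return k - np.trace(H @ L @ H.T)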
We use the matrix view given in Corollary 3.7, and minimize ${\mathrm{Tr}}(HLH^{T})=\sum_{t=1}^{k}h_{t}Lh_{t}^{T}$ subject to $HGH^{T}=\mathbf{I}$. We briefly describe our procedure, where we assume that the weight-function ${\omega}$ is from the class $\mathcal{F}$ of mixtures of $m$ 2D non-negatively weighted (spherical) Gaussians. Each weight-function ${\omega}:{\mathbb{R}}^{2}\to{\mathbb{R}}$ in $\mathcal{F}$ is thus determined by $4m$ parameters $\{x_{r},y_{r},\sigma_{r},w_{r}\mid r\in[1,m]\}$, with ${\omega}(z)=\sum_{r=1}^{m}w_{r}e^{-\frac{(z_{x}-x_{r})^{2}+(z_{y}-y_{r})^{2}}{\sigma_{r}^{2}}}$. From the proof of Theorem 3.6 (in the appendix), it turns out that the condition $HGH^{T}=\mathbf{I}$ is satisfied as long as the multiplicative weight $w_{r}$ of each Gaussian in the mixture is non-negative. Hence during gradient descent, we only need to make sure that this holds. (In our implementation, we add a penalty term $\sum_{r=1}^{m}\frac{c}{e^{w_{r}}}$ to the total-cost $k-{\mathrm{Tr}}(HLH^{T})$ to achieve this in a “soft” manner.) It is easy to write out the gradient of ${TC}({\omega})$ w.r.t. each parameter $\{x_{r},y_{r},\sigma_{r},w_{r}\mid r\in[1,m]\}$ in matrix form. For example, $$\bigtriangleup x_{r}=-\sum_{t=1}^{k}\Big(\frac{\partial h_{t}}{\partial x_{r}}Lh_{t}^{T}+h_{t}\frac{\partial L}{\partial x_{r}}h_{t}^{T}+h_{t}L\frac{\partial h_{t}^{T}}{\partial x_{r}}\Big).$$ While this does not improve the asymptotic complexity of computing the gradient (compared to using the formulation of the cost function in Definition 3.5), programming languages such as Python and Matlab can implement these matrix operations much more efficiently than loops. For large data sets, one can use stochastic gradient descent, by sampling a subset of $n^{\prime}\ll n$ input persistence images and computing the matrices $H,\Lambda,L,G$ as well as the cost on the subsampled data points.
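The mixture-of-Gaussians weight-function itself is straightforward to evaluate; a minimal sketch (parameter layout and function name are ours):

```python
import numpy as np

def mixture_weight(z, params):
    """w(z) = sum_r w_r * exp(-((z_x - x_r)^2 + (z_y - y_r)^2) / sigma_r^2).

    params: list of tuples (x_r, y_r, sigma_r, w_r); keeping each w_r >= 0
    is what preserves positive semi-definiteness of the WKPI kernel.
    """
    zx, zy = z
    return sum(w * np.exp(-((zx - x) ** 2 + (zy - y) ** 2) / s ** 2)
               for x, y, s, w in params)
```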
In our implementation, we use the Armijo-Goldstein line search scheme to update the parameters in each (stochastic) gradient descent step. The optimization procedure terminates when the cost function converges or the number of iterations exceeds a threshold. 4 Experiments In this section, we show the effectiveness of our metric-learning framework and the usefulness of the learned metric via graph classification applications. In particular, given a set of graphs $\Xi=\{G_{1},\ldots,G_{n}\}$ coming from $k$ classes, we first compute the unweighted persistence image $A_{i}$ for each graph $G_{i}$, and apply the framework from Section 3.1 to learn the “best” weight-function ${\omega}^{*}:{\mathbb{R}}^{2}\to{\mathbb{R}}$ on the birth-death plane ${\mathbb{R}}^{2}$ from these persistence images $\{A_{1},\ldots,A_{n}\}$. Next, we perform graph classification using kernel-SVM with the learned ${\omega}^{*}$-WKPI kernel. We refer to this framework as the WKPI-classification framework. We show two families of experiments below. In Section 4.1 we show that our learned WKPI kernel significantly outperforms existing persistence-based representations. In Section 4.2, we compare the performance of the WKPI-classification framework with various state-of-the-art methods for the graph classification task over a range of data sets. Setup for our WKPI-based framework. In all our experiments, we assume that the weight-function comes from the class $\mathcal{F}$ of mixtures of $m$ 2D non-negatively weighted Gaussians, as described at the end of Section 3.2. Furthermore, all Gaussians are isotropic with the same standard deviation (width) $\sigma_{r}$. We take $m$ and this width $\sigma_{r}$ as hyperparameters: specifically, we search among $m\in\{3,4,5,6,7,8\}$ and $\sigma_{r}\in\{0.01,0.1,1,10,100\}$ and determine their final choices via 10 times 10-fold cross validation. That is, we repeat the following process 10 times: split each dataset into 10 folds and perform 10-fold cross validation.
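The paper does not spell out its line-search implementation, but a standard backtracking (Armijo) scheme of the kind referenced above can be sketched as follows. All names and default constants here are generic choices of ours, not the paper's.

```python
import numpy as np

def armijo_step(f, grad_f, x, direction, alpha0=1.0, c=1e-4, shrink=0.5, max_tries=50):
    """Backtracking (Armijo) line search: shrink the step size until the
    sufficient-decrease condition f(x + a*d) <= f(x) + c*a*<grad f(x), d> holds."""
    fx = f(x)
    slope = float(np.dot(grad_f(x), direction))  # negative for a descent direction
    alpha = alpha0
    for _ in range(max_tries):
        if f(x + alpha * direction) <= fx + c * alpha * slope:
            break
        alpha *= shrink
    return alpha
```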
In each 10-fold cross validation, 9 folds are used for training and 1 for testing, and we repeat the 9:1 train-test experiments 10 times. One important question is how to initialize the centers of the Gaussians in our mixture. There are three strategies that we consider. (1) We simply sample $m$ centers in the domain of the persistence images randomly. (2) We collect all points in the persistence diagrams $\{A_{1},\ldots,A_{n}\}$ derived from the training data $\Xi$, and run a k-means algorithm to identify $m$ means. (3) We run a k-center algorithm on those points to identify $m$ centers. Strategies (2) and (3) usually outperform strategy (1). Thus, in what follows we only report results obtained using k-means and k-center initialization, referred to as WKPI-kM and WKPI-kC, respectively. 4.1 Comparison with other persistence-based methods We compare our methods with state-of-the-art persistence-based representations, including the Persistence Weighted Gaussian Kernel (PWGK) [28], the original Persistence Image (PI) [1], and the Sliced Wasserstein (SW) kernel [10]. Furthermore, as mentioned in Remark 3 after Definition 3.1, we can learn weight functions in PWGK by optimizing the same cost function (replacing our WKPI-distance with the one computed from the PWGK kernel); we refer to this as trainPWGK. We can also use an alternative kernel for persistence images as described in Remark 3, and then optimize the same cost function using the distance computed from this kernel; we refer to this as altWKPI. We compare our methods both with existing approaches and with these two alternative metric-learning approaches (trainPWGK and altWKPI). Neuron datasets. Neuron cells have a natural tree morphology, rooted at the cell body (soma), with dendrite and axon branching out, and it is common in the field of neuroscience to model a neuron as a (geometric) tree. See Figure 4 in the appendix for an example.
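For the k-center initialization strategy, the standard greedy farthest-point algorithm (a 2-approximation for the k-center objective) is a natural choice; the paper does not specify its exact variant, so the sketch below is one plausible realization, with names of our choosing.

```python
import numpy as np

def kcenter_init(points, m, seed=0):
    """Greedy farthest-point k-center: pick m spread-out centers from diagram points."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = [pts[rng.integers(len(pts))]]          # arbitrary first center
    dist = np.linalg.norm(pts - centers[0], axis=1)  # distance to nearest center so far
    for _ in range(m - 1):
        far = int(np.argmax(dist))                   # farthest point becomes a center
        centers.append(pts[far])
        dist = np.minimum(dist, np.linalg.norm(pts - pts[far], axis=1))
    return np.array(centers)
```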
Our Neuron-Binary dataset consists of 1126 neuron trees classified into two (primary) classes: interneurons and principal neurons (data partly from the Blue Brain Project [33] and downloaded from http://neuromorpho.org/). The second Neuron-Multi dataset is a refinement of the 459 neurons from the interneuron class into four (secondary) classes; hence neurons in dataset Neuron-Multi all come from one class of dataset Neuron-Binary. Generation of persistence. Given a neuron tree $T$, following [31], we use the descriptor function $f:T\to{\mathbb{R}}$ where $f(x)$ is the geodesic distance from $x$ to the root of $T$ along the tree. In Neuron-Multi, to differentiate the dendrite and axon parts of a neuron cell, we further negate the function value if a point $x$ is in the dendrite. We then use the union of the persistence diagrams $A_{T}$ induced by both the sublevel-set and superlevel-set filtrations w.r.t. $f$. Under these filtrations, intuitively, each point $(b,d)$ in the birth-death plane ${\mathbb{R}}^{2}$ corresponds to the creation and death of a certain branch feature of the input neuron tree. The set of persistence diagrams obtained this way (one for each neuron tree) is the input to our WKPI-classification framework. Results on neuron datasets. The classification accuracy of various methods is given in Table 1. To obtain these results, we split the training and testing cases with ratio 1:1 for both datasets. As the number of trees is not large, we use all training data to compute the gradients of the cost function in the optimization process instead of mini-batch sampling. Our optimization procedure terminates when the change in the cost function remains smaller than $10^{-4}$ or the iteration number exceeds 2000. Persistence images are needed both for the methodology of [1] and as input for our WKPI-distance; their resolution is fixed at roughly $40\times 40$ (see Appendix B.2 for details).
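The geodesic-distance descriptor on a tree is a simple traversal from the root. The sketch below assumes a hypothetical input format of ours (a child-list dictionary and per-edge lengths); actual neuron reconstructions, e.g. SWC files, would need to be parsed into this form first.

```python
from collections import deque

def geodesic_descriptor(children, edge_len, root=0):
    """f(x) = geodesic (path-length) distance from node x to the root of the tree.

    children[u]: list of children of node u (hypothetical format, not the paper's);
    edge_len[(u, v)]: length of the tree edge from u to child v.
    """
    f = {root: 0.0}
    queue = deque([root])
    while queue:
        u = queue.popleft()
        for v in children.get(u, []):
            f[v] = f[u] + edge_len[(u, v)]   # accumulate path length from the root
            queue.append(v)
    return f
```

For the Neuron-Multi variant, one would additionally negate `f[v]` for nodes lying in the dendrite, as described above.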
For the persistence image (PI) approach of [1], we experimented both with unweighted persistence images (PI-CONST) and with a variant, denoted PI-PL, where the weight function ${\alpha}:{\mathbb{R}}^{2}\to{\mathbb{R}}$ is a simple piecewise-linear (PL) function adapted from the one proposed in [1]; see Appendix B.2 for details. Since PI-PL performs better than PI-CONST on both datasets, Table 1 only shows the results of PI-PL. Note that while our WKPI-framework is based on persistence images (PI), our classification accuracy is much better. We also point out that in our results here, as well as later for the graph classification task, our method consistently outperforms all other persistence-based representations, often by a large margin; see Appendix B.4 for a comparison of our methods with these existing persistence-based frameworks on graph classification. In Figure 3 we show the heatmap of the learned weight-function ${\omega}^{*}$ for both datasets. Interestingly, we note that the important branching features (points in the birth-death plane with high ${\omega}^{*}$ values) separating the two primary classes (i.e., for the Neuron-Binary dataset) are different from those important for classifying neurons from one of the two primary classes (the interneuron class) into the four secondary classes (i.e., the Neuron-Multi dataset). Also, high-importance (high-weight) points may not have high persistence. In the future, it would be interesting to investigate whether the important branch features are also biochemically important. 4.2 Graph classification task Benchmark datasets and comparison methods.
We use a range of benchmark datasets (collected from recent literature), including: (1) several datasets of graphs derived from small chemical compounds or protein molecules: NCI1 and NCI109 [39], PTC [21], PROTEIN [5], DD [18] and MUTAG [16]; (2) two datasets of graphs representing the response relations between users in Reddit: REDDIT-5K (5 classes) and REDDIT-12K (11 classes) [43]; and (3) two datasets of IMDB networks of actors/actresses: IMDB-BINARY (2 classes) and IMDB-MULTI (3 classes). See Appendix B.3 for descriptions of these datasets and their statistics (sizes of graphs etc.). Many graph classification methods have been proposed in the literature, with different methods performing better on different datasets. Thus we include a large number of approaches to compare with: six graph-kernel based approaches: RetGK [44], FGSD [40], the Weisfeiler-Lehman kernel (WL) [39], the Weisfeiler-Lehman optimal assignment kernel (WL-OA) [27], the Graphlet kernel (GK) [34], and the Deep Graphlet kernel (DGK) [43]; as well as three graph neural networks: PATCHY-SAN (PSCN) [35], the Graph Isomorphism Network (GIN) [41], and a deep learning framework with topological signatures (DL-TDA) [23]. Persistence generation. To generate persistence-diagram summaries, we need a meaningful descriptor function on the input graphs. We consider two choices in our experiments: (a) the Ricci-curvature function $f_{c}:G\to{\mathbb{R}}$, where $f_{c}(x)$ is a discrete Ricci curvature for graphs as introduced in [32]; and (b) the Jaccard-index function $f_{J}:G\to{\mathbb{R}}$. In particular, the Jaccard index of an edge $(u,v)\in G$ is defined as $\rho(u,v)=\frac{|NN(u)\cap NN(v)|}{|NN(u)\cup NN(v)|}$, where $NN(x)$ refers to the set of neighbors of node $x$ in $G$. The Jaccard index has been commonly used as a way to measure edge-similarity. (We modify our persistence algorithm slightly to handle the edge-valued Jaccard-index function.)
As in the case of the neuron datasets, we take the union of the $0$-th persistence diagrams induced by both the sublevel-set and the superlevel-set filtrations of the descriptor function $f$, and convert it to a persistence image as input to our WKPI-classification framework. (We expect that using $0$-th zigzag persistence diagrams would provide better results. However, we choose to use only $0$-th standard persistence, as it can easily be implemented to run in $O(n\log n)$ time using a simple union-find data structure.) In the results reported in Table 2, the Ricci-curvature function is used for the small chemical-compound datasets (NCI1, NCI109, PTC and MUTAG), while the Jaccard function is used for the two protein datasets (PROTEIN and DD) as well as the social/IMDB networks (IMDB's and REDDIT's). Classification results. The graph classification results of the various methods are reported in Tables 2 and 3. In particular, Table 2 compares the classification accuracy with a range of methods, while Table 3 also lists the variance of the prediction accuracy (via 10 times 10-fold cross validation). Note that not all of the previously reported accuracies/variances in the literature were computed under the same 10 times 10-fold cross-validation setup as ours. For instance, the results reported for RetGK are computed from only 10-fold cross validation. The setup for our method is the same as for the Neuron data; the only difference is that if the input dataset has more than 1000 graphs, we choose mini-batches of size $50$ to compute the gradient in each iteration. Results of other methods are taken from their respective papers. The results of DL-TDA [23] are not listed in the table, as only the classification accuracies for REDDIT-5K (accuracy $54.5\%$) and REDDIT-12K ($44.5\%$) are given in their paper (which contains more results on images as well).
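The union-find approach to $0$-th sublevel-set persistence mentioned above can be sketched as follows. This is a minimal version of the elder rule for a vertex-valued function on a connected graph, with edges entering the filtration at the maximum of their endpoint values; function names and the output convention (a list of (birth, death) pairs, including one essential pair) are ours.

```python
def zeroth_persistence(f, edges):
    """0-th sublevel-set persistence of a vertex function f on a connected graph.

    Elder rule via union-find: process edges by max endpoint value; when two
    components merge, the younger one (larger birth) dies. Near-linear time.
    """
    parent = list(range(len(f)))
    def find(x):                              # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    birth = list(f)                           # each vertex starts its own component
    pairs = []
    for u, v in sorted(edges, key=lambda e: max(f[e[0]], f[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue
        if birth[ru] > birth[rv]:             # keep the older root as survivor
            ru, rv = rv, ru
        pairs.append((birth[rv], max(f[u], f[v])))   # younger component dies here
        parent[rv] = ru
    pairs.append((min(f), float("inf")))      # essential class (connected graph)
    return pairs
```

Zero-persistence pairs (birth equal to death) can be filtered out before building the persistence image.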
The comparison with other persistence-based methods can be found in Appendix B.4 (where we consistently perform the best); we only include one of them, SW [10], in this table, as it performs the best among existing persistence-based approaches. The last two columns in Table 2 and Table 3 are our results, with WKPI-kM standing for WKPI-kmeans and WKPI-kC for WKPI-kcenter. Except for MUTAG and IMDB-MULTI, the performance of our WKPI-framework matches or exceeds the best of the other methods. It is important to observe that our WKPI-framework performs well on both chemical graphs and social graphs, while some earlier methods tend to work well only on one type of graph. Furthermore, note that chemical / molecular graphs usually have attributes associated with them. Some existing methods use these attributes in their classification [43, 35, 44]. Our results, however, are obtained purely from the graph structure, without using any attributes. Finally, regarding variance (see Table 3), the variances of our methods tend to be on par with those of previous graph-kernel based approaches, and are usually much better than those of the GNN-based approaches (i.e., PSCN and GIN). 5 Concluding remarks This paper introduces a new weighted kernel for persistence images (WKPI), together with a metric-learning framework to learn the best weight-function for the WKPI-kernel from labelled data. Various properties of the kernel and the formulation of the optimization problem are provided. Very importantly, we apply the learned WKPI-kernel to the task of graph classification, and show that our new framework achieves similar or better results than the best among a range of previous graph classification approaches. In our current framework, only a single descriptor function of each input object (e.g., a graph) is used to derive a persistence-based representation.
It will be interesting to extend our framework to leverage multiple descriptor functions (so as to capture different types of information) simultaneously and effectively. Recent work on multidimensional persistence would be useful in this effort. Another important question is how to incorporate categorical attributes associated with graph nodes (or points in input objects) effectively. Indeed, real-valued attributes can potentially be used as descriptor functions to generate persistence-based summaries. But the handling of categorical attributes via topological summarization is much more challenging, especially when there is no (prior-known) correlation between these attributes (e.g., the attribute is simply a number from $[1,s]$, coming from $s$ categories, and the indices of these categories may carry no meaning). Acknowledgement. The authors would like to thank Chao Chen and Justin Eldridge for useful discussions related to this project. We would also like to thank Giorgio Ascoli for helping provide the neuron dataset. References [1] H. Adams, T. Emerson, M. Kirby, R. Neville, C. Peterson, P. Shipman, S. Chepushtanova, E. Hanson, F. Motta, and L. Ziegelmeier. Persistence images: a stable vector representation of persistent homology. Journal of Machine Learning Research, 18:218–252, 2017. [2] L. Bai, L. Rossi, A. Torsello, and E. R. Hancock. A quantum Jensen-Shannon graph kernel for unattributed graphs. Pattern Recognition, 48(2):344–355, 2015. [3] U. Bauer. Ripser, 2016. [4] U. Bauer, M. Kerber, J. Reininghaus, and H. Wagner. Phat – persistent homology algorithms toolbox. In H. Hong and C. Yap, editors, Mathematical Software – ICMS 2014, pages 137–143, Berlin, Heidelberg, 2014. Springer Berlin Heidelberg. [5] K. M. Borgwardt, C. S. Ong, S. Schönauer, S. Vishwanathan, A. J. Smola, and H.-P. Kriegel. Protein function prediction via graph kernels. Bioinformatics, 21(suppl_1):i47–i56, 2005. [6] P. Bubenik.
Statistical topological data analysis using persistence landscapes. The Journal of Machine Learning Research, 16(1):77–102, 2015. [7] G. Carlsson and V. de Silva. Zigzag persistence. Foundations of Computational Mathematics, 10(4):367–405, 2010. [8] G. Carlsson, V. de Silva, and D. Morozov. Zigzag persistent homology and real-valued functions. In Proc. 25th Annu. ACM Sympos. Comput. Geom., pages 247–256, 2009. [9] G. Carlsson and A. Zomorodian. The theory of multidimensional persistence. Discrete & Computational Geometry, 42(1):71–93, 2009. [10] M. Carrière, M. Cuturi, and S. Oudot. Sliced Wasserstein kernel for persistence diagrams. International Conference on Machine Learning, pages 664–673, 2017. [11] F. Chazal, D. Cohen-Steiner, M. Glisse, L. J. Guibas, and S. Oudot. Proximity of persistence modules and their diagrams. In Proc. 25th ACM Sympos. on Comput. Geom., pages 237–246, 2009. [12] F. Chazal, V. de Silva, M. Glisse, and S. Oudot. The structure and stability of persistence modules. SpringerBriefs in Mathematics. Springer, 2016. [13] C. Maria, J.-D. Boissonnat, M. Glisse, and M. Yvinec. The gudhi library: simplicial complexes and persistent homology, 2014. URL: http://gudhi.gforge.inria.fr/python/latest/index.html. [14] D. Cohen-Steiner, H. Edelsbrunner, and J. Harer. Stability of persistence diagrams. Discrete & Computational Geometry, 37(1):103–120, 2007. [15] D. Cohen-Steiner, H. Edelsbrunner, J. Harer, and Y. Mileyko. Lipschitz functions have $L_{p}$-stable persistence. Foundations of computational mathematics, 10(2):127–139, 2010. [16] A. K. Debnath, R. L. Lopez de Compadre, G. Debnath, A. J. Shusterman, and C. Hansch. Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. Journal of medicinal chemistry, 34(2):786–797, 1991. [17] T. K. Dey, D. Shi, and Y. Wang.
Simba: An efficient tool for approximating Rips-filtration persistence via simplicial batch-collapse. In 24th Annual European Symposium on Algorithms (ESA 2016), volume 57 of Leibniz International Proceedings in Informatics (LIPIcs), pages 35:1–35:16, 2016. [18] P. D. Dobson and A. J. Doig. Distinguishing enzyme structures from non-enzymes without alignments. Journal of molecular biology, 330(4):771–783, 2003. [19] H. Edelsbrunner and J. Harer. Computational Topology: an Introduction. American Mathematical Society, 2010. [20] H. Edelsbrunner, D. Letscher, and A. Zomorodian. Topological persistence and simplification. Discrete Comput. Geom., 28:511–533, 2002. [21] C. Helma, R. D. King, S. Kramer, and A. Srinivasan. The predictive toxicology challenge 2000–2001. Bioinformatics, 17(1):107–108, 2001. [22] S. Hido and H. Kashima. A linear-time graph kernel. In Data Mining, 2009. ICDM’09. Ninth IEEE International Conference on, pages 179–188. IEEE, 2009. [23] C. Hofer, R. Kwitt, M. Niethammer, and A. Uhl. Deep learning with topological signatures. In Advances in Neural Information Processing Systems, pages 1634–1644, 2017. [24] M. Kerber, D. Morozov, and A. Nigmetov. Geometry helps to compare persistence diagrams. J. Exp. Algorithmics, 22:1.4:1–1.4:20, Sept. 2017. [25] M. Kerber, D. Morozov, and A. Nigmetov. HERA: software to compute distances for persistence diagrams, 2018. URL: https://bitbucket.org/grey_narn/hera. [26] M. Kerber and H. Schreiber. Barcodes of towers and a streaming algorithm for persistent homology. In 33rd International Symposium on Computational Geometry (SoCG 2017), page 57. Schloss Dagstuhl-Leibniz-Zentrum für Informatik GmbH, 2017. [27] N. M. Kriege, P. L. Giscard, and R. C. Wilson. On valid optimal assignment kernels and applications to graph classification. In Advances in Neural Information Processing Systems, pages 1623–1631, 2016. [28] G. Kusano, K. Fukumizu, and Y. Hiraoka.
Kernel method for persistence diagrams via kernel embedding and weight factor. Journal of Machine Learning Research, 18(189):1–41, 2018. [29] T. Le and M. Yamada. Persistence Fisher kernel: A Riemannian manifold kernel for persistence diagrams. In Advances in Neural Information Processing Systems (NIPS), pages 10028–10039, 2018. [30] R. Levie, F. Monti, X. Bresson, and M. M. Bronstein. Cayleynets: Graph convolutional neural networks with complex rational spectral filters. IEEE Trans. Signal Processing, 67(1):97–109, 2019. [31] Y. Li, D. Wang, G. A. Ascoli, P. Mitra, and Y. Wang. Metrics for comparing neuronal tree shapes based on persistent homology. PloS one, 12(8):e0182184, 2017. [32] Y. Lin, L. Lu, and S.-T. Yau. Ricci curvature of graphs. Tohoku Mathematical Journal, Second Series, 63(4):605–627, 2011. [33] H. Markram, E. Muller, S. Ramaswamy, M. W. Reimann, M. Abdellah, C. A. Sanchez, A. Ailamaki, L. Alonso-Nanclares, N. Antille, S. Arsever, et al. Reconstruction and simulation of neocortical microcircuitry. Cell, 163(2):456–492, 2015. [34] M. Neumann, N. Patricia, R. Garnett, and K. Kersting. Efficient graph kernels by randomization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 378–393. Springer, 2012. [35] M. Niepert, M. Ahmed, and K. Kutzkov. Learning convolutional neural networks for graphs. International conference on machine learning, pages 2014–2023, 2016. [36] N. Shervashidze, S. V. N. Vishwanathan, T. Petri, K. Mehlhorn, and K. Borgwardt. Efficient graphlet kernels for large graph comparison. Artificial Intelligence and Statistics, pages 488–495, 2009. [37] J. Reininghaus, S. Huber, U. Bauer, and R. Kwitt. A stable multi-scale kernel for topological machine learning. In Computer Vision & Pattern Recognition, pages 4741–4748, 2015. [38] D. Sheehy. Linear-size approximations to the Vietoris-Rips filtration. In Proc. 28th Annu. Sympos. Comput. Geom., pages 239–248, 2012. [39] N. Shervashidze, P. Schweitzer, E. J. v. Leeuwen, K.
Mehlhorn, and K. M. Borgwardt. Weisfeiler-Lehman graph kernels. Journal of Machine Learning Research, 12:2539–2561, 2011. [40] S. Verma and Z.-L. Zhang. Hunt for the unique, stable, sparse and fast feature learning on graphs. Advances in Neural Information Processing Systems, pages 88–98, 2017. [41] K. Xu, W. Hu, J. Leskovec, and S. Jegelka. How powerful are graph neural networks? arXiv preprint arXiv:1810.00826, 2018. [42] L. Xu, X. Jin, X. Wang, and B. Luo. A mixed Weisfeiler-Lehman graph kernel. In International Workshop on Graph-based Representations in Pattern Recognition, pages 242–251, 2015. [43] P. Yanardag and S. Vishwanathan. Deep graph kernels. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1365–1374, 2015. [44] Z. Zhang, M. Wang, Y. Xiang, Y. Huang, and A. Nehorai. RetGK: Graph kernels based on return probabilities of random walks. In Advances in Neural Information Processing Systems, pages 3968–3978, 2018. Appendix A Missing details from Section 3 A.1 Proof of Theorem 3.2 Consider an arbitrary collection of $n$ persistence images $\{{\mathrm{PI}}_{1},\ldots,{\mathrm{PI}}_{n}\}$ (i.e., a collection of $n$ vectors in ${\mathbb{R}}^{N}$). Set $K=[k_{ij}]_{n\times n}$ to be the $n\times n$ kernel matrix where $k_{ij}={k_{w}}({\mathrm{PI}}_{i},{\mathrm{PI}}_{j})$. Now given any vector $v=(v_{1},v_{2},\ldots,v_{n})^{T}$, we have that: $$\displaystyle v^{T}Kv=\sum_{i,j=1}^{n}v_{i}v_{j}k_{ij}=\sum_{i,j=1}^{n}v_{i}v_{j}\sum_{s=1}^{N}{\omega}(p_{s})e^{-\frac{({\mathrm{PI}}_{i}(s)-{\mathrm{PI}}_{j}(s))^{2}}{2{\sigma}^{2}}}=\sum_{s=1}^{N}{\omega}(p_{s})\sum_{i,j=1}^{n}v_{i}v_{j}e^{-\frac{({\mathrm{PI}}_{i}(s)-{\mathrm{PI}}_{j}(s))^{2}}{2{\sigma}^{2}}}.$$ Because the Gaussian kernel is positive semi-definite and the weight-function ${\omega}$ is non-negative, $v^{T}Kv\geq 0$ for any $v\in{\mathbb{R}}^{n}$. Hence the WKPI kernel is positive semi-definite.
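The positive semi-definiteness argued above can also be verified numerically: for any collection of images and any non-negative weights, the Gram matrix has no (numerically) negative eigenvalues. The sketch below assumes the kernel form $k_{w}({\mathrm{PI}}_{i},{\mathrm{PI}}_{j})=\sum_{s}\omega(p_{s})e^{-({\mathrm{PI}}_{i}(s)-{\mathrm{PI}}_{j}(s))^{2}/(2\sigma^{2})}$; the function name is ours.

```python
import numpy as np

def wkpi_gram(images, weights, sigma):
    """Gram matrix K with K[i, j] = k_w(PI_i, PI_j) for vectorized images."""
    n = len(images)
    K = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            diff = images[i] - images[j]
            K[i, j] = np.sum(weights * np.exp(-diff ** 2 / (2 * sigma ** 2)))
    return K
```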
A.2 Proof of Theorem 3.4 By Definitions 3.1 and 3.3, combined with the fact that $1-e^{-x}\leq x$ for any $x\in{\mathbb{R}}$, we have that: $$\displaystyle{\mathrm{D_{\omega}}}^{2}(A,B)={k_{w}}({\mathrm{PI}}_{A},{\mathrm{PI}}_{A})+{k_{w}}({\mathrm{PI}}_{B},{\mathrm{PI}}_{B})-2{k_{w}}({\mathrm{PI}}_{A},{\mathrm{PI}}_{B})$$ $$\displaystyle=2\sum_{s=1}^{N}{\omega}(p_{s})-2\sum_{s=1}^{N}{\omega}(p_{s})e^{-\frac{({\mathrm{PI}}_{A}(s)-{\mathrm{PI}}_{B}(s))^{2}}{{\sigma}^{2}}}$$ $$\displaystyle=2\sum_{s=1}^{N}{\omega}(p_{s})\Big(1-e^{-\frac{({\mathrm{PI}}_{A}(s)-{\mathrm{PI}}_{B}(s))^{2}}{{\sigma}^{2}}}\Big)$$ $$\displaystyle\leq 2c_{w}\sum_{s=1}^{N}\Big(1-e^{-\frac{({\mathrm{PI}}_{A}(s)-{\mathrm{PI}}_{B}(s))^{2}}{{\sigma}^{2}}}\Big)$$ $$\displaystyle\leq\frac{2c_{w}}{{\sigma}^{2}}\sum_{s=1}^{N}({\mathrm{PI}}_{A}(s)-{\mathrm{PI}}_{B}(s))^{2}=\frac{2c_{w}}{{\sigma}^{2}}\|{\mathrm{PI}}_{A}-{\mathrm{PI}}_{B}\|_{2}^{2}.$$ Furthermore, by Theorem 10 of [1], when the distribution $\phi_{u}$ in Definition 2.1 is the normalized Gaussian $\phi_{u}(z)=\frac{1}{2\pi\tau^{2}}e^{-\frac{\|z-u\|^{2}}{2\tau^{2}}}$ and the weight function ${\alpha}=1$, we have that $\|{\mathrm{PI}}_{A}-{\mathrm{PI}}_{B}\|_{2}\leq\sqrt{\frac{10}{\pi}}\cdot\frac{1}{\tau}\cdot d_{W,1}(A,B)$. (Intuitively, view two persistence diagrams $A$ and $B$ as two (appropriate) measures; $d_{W,1}(A,B)$ is then the “earth-mover” distance between them, i.e., the cost of converting the measure corresponding to $A$ to that for $B$, where the cost is measured by the total $L_{1}$-distance that all mass has to travel.) Combining this with the inequalities for ${\mathrm{D_{\omega}}}^{2}(A,B)$ above, the theorem then follows. A.3 Proof of Theorem 3.6 We first show the following properties of the matrix $L$, which will be useful later. Lemma A.1 The matrix $L$ is symmetric and positive semi-definite.
Furthermore, for every vector $f\in{\mathbb{R}}^{n}$, we have $$f^{T}Lf=\frac{1}{2}\sum_{i,j=1}^{n}\Lambda_{ij}(f_{i}-f_{j})^{2}$$ (2) Proof. By construction, it is easy to see that $L$ is symmetric, as the matrices $\Lambda$ and $G$ are. The positive semi-definiteness follows from Eqn (2), which we prove now. $$\displaystyle f^{T}Lf=f^{T}Gf-f^{T}\Lambda f=\sum_{i=1}^{n}f_{i}^{2}g_{ii}-\sum_{i,j=1}^{n}f_{i}f_{j}\Lambda_{ij}$$ $$\displaystyle=\frac{1}{2}\Big(\sum_{i=1}^{n}f_{i}^{2}g_{ii}+\sum_{j=1}^{n}f_{j}^{2}g_{jj}-\sum_{i,j=1}^{n}2f_{i}f_{j}\Lambda_{ij}\Big)$$ $$\displaystyle=\frac{1}{2}\Big(\sum_{i=1}^{n}f_{i}^{2}\sum_{j=1}^{n}\Lambda_{ij}+\sum_{j=1}^{n}f_{j}^{2}\sum_{i=1}^{n}\Lambda_{ji}-\sum_{i,j=1}^{n}2f_{i}f_{j}\Lambda_{ij}\Big)$$ $$\displaystyle=\frac{1}{2}\sum_{i,j=1}^{n}\Lambda_{ij}\cdot(f_{i}^{2}+f_{j}^{2}-2f_{i}f_{j})=\frac{1}{2}\sum_{i,j=1}^{n}\Lambda_{ij}(f_{i}-f_{j})^{2}.$$ The lemma then follows. ∎ We now prove the statement of Theorem 3.6. Recall the definitions of the various matrices, and that the $h_{t}$'s are the row vectors of the matrix $H$. For simplicity, in the derivations below we use $D(i,j)$ to denote the ${\omega}$-induced WKPI-distance ${\mathrm{D_{\omega}}}(A_{i},A_{j})$ between persistence diagrams $A_{i}$ and $A_{j}$. Applying Lemma A.1, we have: $$\displaystyle{\mathrm{Tr}}(HLH^{T})=\sum_{t=1}^{k}(HLH^{T})_{tt}=\sum_{t=1}^{k}h_{t}Lh_{t}^{T}$$ (3) $$\displaystyle=\sum_{t=1}^{k}\frac{1}{2}\sum_{j_{1},j_{2}=1}^{n}D^{2}(j_{1},j_{2})(h_{t,j_{1}}-h_{t,j_{2}})^{2}$$ $$\displaystyle=\sum_{t=1}^{k}\frac{1}{2}\sum_{j_{1},j_{2}=1}^{n}D^{2}(j_{1},j_{2})(h_{t,j_{1}}^{2}+h_{t,j_{2}}^{2}-2h_{t,j_{1}}h_{t,j_{2}}).$$ Now by the definition of $h_{ti}$, it is non-zero only when $i\in{\mathcal{C}}_{t}$.
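The identity of Eqn (2) is easy to confirm numerically for any symmetric non-negative matrix. A small check (names ours, random symmetric $\Lambda$ with zero diagonal):

```python
import numpy as np

def laplacian_quadratic_check(Lam, f):
    """Return (f^T L f, 1/2 * sum_ij Lambda_ij (f_i - f_j)^2) for L = G - Lambda,
    where G = diag(row sums of Lambda); Eqn (2) says the two values are equal."""
    L = np.diag(Lam.sum(axis=1)) - Lam
    lhs = f @ L @ f
    n = len(f)
    rhs = 0.5 * sum(Lam[i, j] * (f[i] - f[j]) ** 2
                    for i in range(n) for j in range(n))
    return lhs, rhs
```

Since the right-hand side is a non-negative combination of squares, this also exhibits the positive semi-definiteness of $L$ claimed in Lemma A.1.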
Combined with Eqn (3), it then follows that: $$\displaystyle{\mathrm{Tr}}(HLH^{T})=\sum_{t=1}^{k}\frac{1}{2}\Big(\sum_{j_{1}\in{\mathcal{C}}_{t},j_{2}\in[1,n]}\frac{D^{2}(j_{1},j_{2})}{{cost}_{\omega}(t,\cdot)}+\sum_{j_{1}\in[1,n],j_{2}\in{\mathcal{C}}_{t}}\frac{D^{2}(j_{1},j_{2})}{{cost}_{\omega}(t,\cdot)}-2\sum_{j_{1},j_{2}\in{\mathcal{C}}_{t}}\frac{D^{2}(j_{1},j_{2})}{{cost}_{\omega}(t,\cdot)}\Big)$$ $$\displaystyle=\sum_{t=1}^{k}\frac{1}{2}\Big(\sum_{j_{1}\in{\mathcal{C}}_{t},j_{2}\notin{\mathcal{C}}_{t}}\frac{D^{2}(j_{1},j_{2})}{{cost}_{\omega}(t,\cdot)}+\sum_{j_{1}\notin{\mathcal{C}}_{t},j_{2}\in{\mathcal{C}}_{t}}\frac{D^{2}(j_{1},j_{2})}{{cost}_{\omega}(t,\cdot)}\Big)$$ $$\displaystyle=\sum_{t=1}^{k}\sum_{j_{1}\in{\mathcal{C}}_{t},j_{2}\notin{\mathcal{C}}_{t}}\frac{D^{2}(j_{1},j_{2})}{{cost}_{\omega}(t,\cdot)}=\sum_{t=1}^{k}\frac{{cost}_{\omega}(t,\cdot)-{cost}_{\omega}(t,t)}{{cost}_{\omega}(t,\cdot)}=k-{TC}({\omega}).$$ This proves the first statement in Theorem 3.6. We now show that the matrix $HGH^{T}$ is the $k\times k$ identity matrix $\mathbf{I}$. Specifically, first consider $s\neq t\in[1,k]$; we claim: $$\displaystyle(HGH^{T})_{st}=h_{s}Gh_{t}^{T}=\sum_{j_{1},j_{2}=1}^{n}h_{sj_{1}}G_{j_{1}j_{2}}h_{tj_{2}}=0.$$ The sum equals $0$ because $h_{sj_{1}}$ is non-zero only for $j_{1}\in{\mathcal{C}}_{s}$, while $h_{tj_{2}}$ is non-zero only for $j_{2}\in{\mathcal{C}}_{t}$. For such a pair $j_{1}$ and $j_{2}$, obviously $j_{1}\neq j_{2}$, which means that $G_{j_{1}j_{2}}=0$. Hence the sum is $0$ over all possible $j_{1}$ and $j_{2}$.
Now for the diagonal entries of the matrix $HGH^{T}$, we have, for any $t\in[1,k]$: $$\begin{aligned} (HGH^{T})_{tt}&=h_{t}Gh_{t}^{T}=\sum_{j_{1},j_{2}=1}^{n}h_{tj_{1}}G_{j_{1}j_{2}}h_{tj_{2}}\\ &=\sum_{j_{1},j_{2}\in{\mathcal{C}}_{t}}\frac{G_{j_{1}j_{2}}}{{cost}_{\omega}(t,\cdot)}=\sum_{j_{1}\in{\mathcal{C}}_{t}}\frac{G_{j_{1}j_{1}}}{{cost}_{\omega}(t,\cdot)}\\ &=\sum_{j_{1}\in{\mathcal{C}}_{t}}\frac{\sum_{\ell=1}^{n}D^{2}(j_{1},\ell)}{{cost}_{\omega}(t,\cdot)}=\frac{\sum_{j_{1}\in{\mathcal{C}}_{t},\,\ell\in[1,n]}D^{2}(j_{1},\ell)}{{cost}_{\omega}(t,\cdot)}\\ &=\frac{{cost}_{\omega}(t,\cdot)}{{cost}_{\omega}(t,\cdot)}=1. \end{aligned}$$ This finishes the proof that $HGH^{T}=\mathbf{I}$, and completes the proof of Theorem 3.6. Appendix B More details for Experiments B.1 Description of Neuron datasets Neuron cells have a natural tree morphology (see Figure 4 (a) for an example), rooted at the cell body (soma), with dendrites and an axon branching out. This tree morphology is important in understanding neurons, so it is common in neuroscience to model a neuron as a (geometric) tree (see Figure 4 (b) for an example downloaded from NeuroMorpho.Org). Our NeuBin dataset consists of 1126 neuron trees classified into two (primary) classes: interneurons and principal neurons (data partly from the Blue Brain Project [33] and downloaded from http://neuromorpho.org/). The second dataset, NeuMulti, refines the 459 trees of the interneuron class into four (secondary) classes: basket-large, basket-nest, neuglia, and martino. B.2 Setup for persistence images For each dataset, the persistence image for each object is computed within the rectangular bounding box of the points from all persistence diagrams of the input trees. The $y$-direction is then discretized into $40$ uniform intervals, while the $x$-direction is discretized accordingly so that each pixel is a square.
For the persistence image (PI) approach of [1], we show results both for unweighted persistence images (PI-CONST) and for a version, denoted PI-PL, where the weight function ${\alpha}:{\mathbb{R}}^{2}\to{\mathbb{R}}$ (for Definition 2.1) is the following piecewise-linear function (modified from the one proposed by Adams et al. [1]), where $b$ is the largest persistence of any persistent point among all persistence diagrams: $${\alpha}(x,y)=\begin{cases}\frac{|y-x|}{b}&|y-x|<b\ \text{and}\ y>0\\ \frac{|-y-x|}{b}&|-y-x|<b\ \text{and}\ y<0\\ 1&\text{otherwise}\end{cases}$$ (4) B.3 Benchmark datasets for graph classification Below we give a brief description of the benchmark datasets used in our experiments; these are collected from the literature. NCI1 and NCI109 [39] consist of two balanced subsets of datasets of chemical compounds screened for activity against non-small cell lung cancer and ovarian cancer cell lines, respectively. PTC [21] is a dataset of graph structures of chemical molecules from rats and mice, designed for the predictive toxicology challenge 2000-2001. DD [18] is a dataset of 1178 protein structures. Each protein is represented by a graph, in which the nodes are amino acids and two nodes are connected by an edge if they are less than 6 Angstroms apart. The proteins are classified according to whether or not they are enzymes. PROTEINS [5] contains graphs of proteins. In each graph, a node represents a secondary structure element (SSE) within the protein structure, i.e., helices, sheets, and turns. Edges connect nodes that are neighbours along the amino acid sequence or neighbours in protein structure space; every node is connected to its three nearest spatial neighbours. MUTAG [16] is a dataset of 188 mutagenic aromatic and heteroaromatic nitro compounds, labelled according to whether they have a mutagenic effect on the Gram-negative bacterium Salmonella typhimurium. REDDIT-5K and REDDIT-12K [43] consist of graphs representing discussions on the online forum Reddit.
In these datasets, nodes represent users, and an edge connects two nodes if one of the two users left a comment for the other. In REDDIT-5K, graphs are collected from 5 sub-forums and are labelled by the sub-forum to which they belong. In REDDIT-12K, 11 sub-forums are involved, and the labels are analogous to those in REDDIT-5K. IMDB-BINARY and IMDB-MULTI [43] are datasets consisting of networks of 1000 actors or actresses who played roles in movies in IMDB. In each graph, a node represents an actor or actress, and an edge connects two nodes when they appear in the same movie. In IMDB-BINARY, graphs are classified into the Action and Romance genres. In IMDB-MULTI, they are collected from three different genres: Comedy, Romance, and Sci-Fi. The statistics of these datasets are provided in Table LABEL:tbl:datastats. B.4 Topological-based methods on graph data Here we compare our WKPI-framework with the performance of several state-of-the-art persistence-based classification frameworks: PWGK [28], SW [10], and PI [1]. We also compare it with two alternative ways to learn a metric for persistence-based representations: trainPWGK is the version of PWGK [28] in which we learn the weight function in its formulation, using the same cost function as the one we propose in this paper for our WKPI kernel functions. altWKPI is an alternative kernel for persistence images in which we set the kernel to be $k({\mathrm{PI}},{\mathrm{PI}}^{\prime})=\sum_{s=1}^{N}e^{-\frac{{\omega}(p_{s})({\mathrm{PI}}(s)-{\mathrm{PI}}^{\prime}(s))^{2}}{2\sigma^{2}}}$, instead of our WKPI-kernel as defined in Definition 3.1. We use the same setup as in our WKPI-framework to train these two metrics, and use the resulting kernels in an SVM to classify the benchmark graph datasets. The WKPI-framework outperforms the existing approaches and the alternative metric-learning methods on all datasets except MUTAG.
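The altWKPI kernel defined above takes only a few lines to evaluate; here is a sketch, where `PI1`, `PI2` are flattened persistence images and `w` holds the learned per-pixel weights $\omega(p_s)$ (array names and shapes are illustrative, not from the paper's implementation):

```python
import numpy as np

def alt_wkpi_kernel(PI1, PI2, w, sigma):
    """altWKPI: a sum of N per-pixel Gaussian terms, each weighted
    inside the exponent by the pixel weight w[s] = omega(p_s)."""
    return float(np.sum(np.exp(-w * (PI1 - PI2) ** 2 / (2.0 * sigma ** 2))))
```

Note that, unlike a single weighted Gaussian over the whole image, identical inputs here give $k(\mathrm{PI},\mathrm{PI})=N$, the number of pixels, since every exponent vanishes.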
WKPI-kM (i.e., WKPI-kmeans) and WKPI-kC (i.e., WKPI-kcenter) improve the accuracy by $3.9\%$–$11.9\%$ and $5.4\%$–$13.5\%$, respectively. The classification accuracies of all these methods are reported in Table LABEL:tbl:persbenchmark.
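As a numerical sanity check on the Appendix A derivations above, both identities, $\mathrm{Tr}(HLH^{T})=k-TC(\omega)$ and $HGH^{T}=\mathbf{I}$, along with Eqn (2), can be verified directly. A minimal sketch with randomly generated stand-in squared distances (illustrative, not the WKPI distances computed from real data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3

# Stand-in squared distances D^2(i, j): symmetric with zero diagonal.
P = rng.random((n, 2))
D2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)

# Lambda_ij = D^2(i, j); G diagonal with row sums g_ii; L = G - Lambda.
G = np.diag(D2.sum(axis=1))
L = G - D2

# Lemma A.1 / Eqn (2): f^T L f = (1/2) sum_ij Lambda_ij (f_i - f_j)^2.
f = rng.standard_normal(n)
rhs = 0.5 * sum(D2[i, j] * (f[i] - f[j]) ** 2
                for i in range(n) for j in range(n))
assert np.isclose(f @ L @ f, rhs)

# A partition of the n diagrams into k non-empty clusters C_1..C_k.
labels = rng.integers(0, k, size=n)
labels[:k] = np.arange(k)

# cost(t, .), cost(t, t), and the rows h_t of H: h_ti = 1/sqrt(cost(t, .))
# for i in C_t and 0 otherwise, matching the diagonal derivation above.
cost = np.array([D2[labels == t, :].sum() for t in range(k)])
cost_tt = np.array([D2[np.ix_(labels == t, labels == t)].sum()
                    for t in range(k)])
H = np.zeros((k, n))
for t in range(k):
    H[t, labels == t] = 1.0 / np.sqrt(cost[t])

# Theorem 3.6: H G H^T = I and Tr(H L H^T) = k - TC(omega).
TC = (cost_tt / cost).sum()
assert np.allclose(H @ G @ H.T, np.eye(k))
assert np.isclose(np.trace(H @ L @ H.T), k - TC)
```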
Covariant formulation of pion-nucleon scattering ††thanks: Presented at the 16th European Conference on Few-Body Problems in Physics, June 1-6, 1998, Autrans (France) A. D. Lahiff and I. R. Afnan Department of Physics, The Flinders University of South Australia, GPO Box 2100, Adelaide 5001, Australia Abstract A covariant model of elastic pion-nucleon scattering based on the Bethe-Salpeter equation is presented. The kernel consists of $s$- and $u$-channel $N$ and $\Delta(1232)$ poles, along with $\rho$ and $\sigma$ exchange in the $t$-channel. A good fit is obtained to the $s$- and $p$-wave phase shifts up to the two-pion production threshold. Pion-nucleon ($\pi N$) scattering is an important example of a strong interaction and as such plays a significant role in many other nuclear reactions involving pions, for example pion photoproduction. Ideally, a theory describing $\pi N$ scattering would be derived from Quantum Chromodynamics (QCD), but since QCD is not at present solvable at low energies, it is necessary to use a chirally invariant effective Lagrangian whose degrees of freedom are baryons and mesons rather than quarks and gluons. Following the success of meson-exchange models in describing the nucleon-nucleon interaction, a number of meson-exchange models for $\pi N$ scattering have been developed over the last few years [1]. These models invariably begin with an effective Lagrangian which describes the couplings between the various mesons and baryons. The tree-level diagrams obtained from this Lagrangian are then unitarized in a 3-dimensional approximation to the Bethe-Salpeter (BS) equation [2]. Convergence is guaranteed by the introduction of phenomenological form factors at each vertex. There are an infinite number of 3-dimensional reductions of the BS equation, and there is no overwhelming reason to choose one particular approximation over any other.
Here we describe a covariant model of elastic $\pi N$ scattering in which the BS equation is solved without any reduction to 3-dimensions. In principle, the exact $\pi N\leftarrow\pi N$ amplitude for a given Lagrangian can be obtained from the BS equation, with a potential consisting of all 1- and 2-particle irreducible diagrams, and dressed propagators in the $\pi N$ intermediate state. Since it is impossible to construct such a potential, as it would contain an infinite number of diagrams, it is common practice to truncate this kernel and include only the tree-level diagrams. Furthermore, if only two-body unitarity is required to be maintained, then the dressed nucleon propagator is replaced by a bare propagator with a pole at the physical mass. Our kernel consists of $s$- and $u$-channel $N$ and $\Delta(1232)$ exchanges, along with $\rho$ and $\sigma$ exchange in the $t$-channel. We do not include any higher baryon resonances because these contributions are not expected to be significant for $\pi N$ scattering below the two-pion production threshold. The couplings to the pion field are always through derivative couplings, as required by chiral symmetry. There is an ambiguity as to the choice of propagator for a particle with spin-3/2. The most commonly used propagator is the Rarita-Schwinger propagator, which is known to have both spin-3/2 as well as background spin-1/2 components [3]. Other forms have been introduced by Williams [4] and Pascalutsa [5], which each have only a spin-3/2 component. In the present paper we use the Rarita-Schwinger propagator. To guarantee the convergence of all integrals, we need to associate with each vertex a cut-off function. We take this cut-off function to be the product of form factors that depend on the 4-momentum squared of each particle present at the vertex [1]. 
Each form factor is chosen to be of the form $$f(q^{2})=\left({\Lambda^{2}-m^{2}\over\Lambda^{2}-q^{2}}\right),$$ where $q^{2}$ is the 4-momentum squared of the particle and $m$ is the mass. A different cutoff mass $\Lambda$ is used for each particle. The $s$-channel pole terms present in the potential become dressed when the BS equation is iterated. Therefore bare coupling constants and masses are used in the $N$ and $\Delta$ $s$-channel pole diagrams in the potential. The bare nucleon parameters are determined by requiring that in the $P_{11}$ partial wave, there is a pole at the physical nucleon mass with a residue related to the physical $\pi NN$ coupling constant. Since the dressed $\Delta$ has a width, it would be necessary to analytically continue the BS equation into the complex $s$-plane in order to carry out the renormalization for the $\Delta$. Rather than doing this we treat the bare $\Delta$ mass and the bare $\pi N\Delta$ coupling constant as free parameters. Since the $P_{33}$ partial wave is dominated by the $s$-channel $\Delta$ pole diagram, the bare $\Delta$ parameters are essentially fixed by the $P_{33}$ phase shifts. While only the $s$-channel diagrams in the potential are dressed by the ladder BS equation, Pearce and Afnan [6] have shown that if 3-body unitarity is satisfied, then the $u$-channel diagrams also become dressed. We approximate this by using physical masses and coupling constants in the $u$-channel $N$ and $\Delta$ diagrams. We solve the BS equation by first expanding the nucleon propagator in the $\pi N$ intermediate state into positive and negative energy components, and then sandwiching the resulting equation between Dirac spinors. This gives two coupled 4-dimensional integral equations which are reduced to 2-dimensional integral equations after a partial wave decomposition. A Wick rotation [7] is carried out in order to obtain equations suitable for numerical solution. 
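The cut-off form factor just introduced is simple to evaluate; a sketch (a hypothetical helper, with masses in GeV chosen purely for illustration), showing the defining property that each particle's factor equals $1$ on its mass shell, $q^{2}=m^{2}$:

```python
def form_factor(q2, m, lam):
    """Cut-off factor f(q^2) = (Lambda^2 - m^2) / (Lambda^2 - q^2) for a
    particle of mass m; equals 1 when the particle is on-mass-shell."""
    return (lam ** 2 - m ** 2) / (lam ** 2 - q2)

# On-shell normalization, e.g. for a nucleon with an illustrative cutoff:
m_N, lam_N = 0.939, 3.12  # GeV
assert abs(form_factor(m_N ** 2, m_N, lam_N) - 1.0) < 1e-12
```

For $q^{2}<m^{2}$ the factor falls below $1$, suppressing off-shell contributions, which is the role the cut-off plays in making the integrals converge.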
With our choice of form factors, there is no interference from the form factors when carrying out the Wick rotation, provided the cutoff masses are large enough. The cutoff masses also have to be chosen large enough so that unphysical thresholds, which are generated by form factor singularities and propagator singularities pinching the integration contour, are far above the two-pion production threshold. More details are given in [8]. The renormalized $\pi NN$ coupling constant is fixed at its physical value of $g_{\pi NN}^{2}/4\pi=13.5$. The nucleon renormalization procedure fixes the bare $N$ parameters. The remaining parameters are determined in a fit to the phase shifts and scattering lengths from the SM95 partial wave analysis of Arndt et al [9]. For the cutoff masses we obtain $\Lambda_{N}=3.12$, $\Lambda_{\pi}=1.73$, $\Lambda_{\rho}=4.67$, $\Lambda_{\Delta}=4.86$, and $\Lambda_{\sigma}=1.4$ (all in GeV). The coupling constants are $g_{\rho\pi\pi}g_{\rho NN}/4\pi=3.2$, $\kappa_{\rho}=2.57$, $g_{\sigma\pi\pi}g_{\sigma NN}/4\pi=-0.3$, $f_{\pi N\Delta}^{2}/4\pi=0.44$, and $f_{\pi N\Delta}^{(0)2}/4\pi=0.35$. The remaining masses are $m^{(0)}_{\Delta}=2.13$ GeV and $m_{\sigma}=700$ MeV. With these parameters, the renormalization procedure gives $m_{N}^{(0)}=1.37$ GeV and $g_{\pi NN}^{(0)2}/4\pi=3.8$. We obtain a good fit to the $s$- and $p$-wave phase shifts. The resulting phase-shifts are shown in Figure 1, and the scattering lengths and volumes are shown in Table 1. In general our coupling constants are consistent with those used in the other $\pi N$ models. Assuming universality ($g_{\rho}\equiv g_{\rho\pi\pi}=g_{\rho NN}$), our value of $g_{\rho}^{2}/4\pi$ is close to that obtained from the decay $\rho\rightarrow 2\pi$, i.e. $g_{\rho}^{2}/4\pi=2.8$, and $\kappa_{\rho}$ is close to the value $\kappa_{\rho}=3.7$ arising from vector meson dominance. 
Our physical $\pi N\Delta$ coupling constant is slightly larger than the value $f_{\pi N\Delta}^{2}/4\pi=0.36$ obtained from calculations of the width of the $\Delta$ (the use of the larger coupling was necessary in order to obtain a good fit to the $P_{13}$ and $P_{31}$ phase shifts). The contribution of $\sigma$-exchange is very small, while the $u$-channel $\Delta$ diagram provides a large amount of attraction in all partial waves except $S_{31}$ and $P_{33}$. The attraction in the $P_{11}$ partial wave is dominated by $\rho$-exchange and the $u$-channel $\Delta$. Notice that some additional attraction is required for high energies in the $S_{11}$ partial wave. The cutoff masses turn out to be quite large, with the result that the dressing is significant, as is evident from the large size of the bare $N$ and $\Delta$ masses. The baryon self energies are dominated by the one-pion loop diagrams. In view of the significance of the dressing, it is interesting to examine the effect of the dressing on the $\pi NN$ form factor. We can calculate a renormalized cutoff mass by comparing the dressed $\pi NN$ vertex, with the nucleons on-mass-shell and the pion off-shell, to a monopole form factor. We find $\Lambda_{\pi}^{R}=1.17$ GeV (recall that the bare cutoff mass is 1.73 GeV). Therefore the vertex dressing softens the $\pi NN$ form factor. References [1] B.C. Pearce and B.K. Jennings: Nucl. Phys. A528, 655 (1991); F. Gross and Y. Surya: Phys. Rev. C47, 703 (1993); C. Schütz et al.: Phys. Rev. C49, 2671 (1994); C.T. Hung, S.N. Yang, T.-S.H. Lee: J. Phys. G20, 1531 (1994); V. Pascalutsa and J.A. Tjon: Nucl. Phys. A631, 534c (1998). [2] E.E. Salpeter and H.A. Bethe: Phys. Rev. 84, 1232 (1951). [3] M. Benmerrouche, R.M. Davidson, N.C. Mukhopadhyay: Phys. Rev. C39, 2339 (1989). [4] H.T. Williams: Phys. Rev. C31, 2297 (1985). [5] V. Pascalutsa: hep-ph/9802288. [6] B.C. Pearce and I.R. Afnan: Phys. Rev. C34, 991 (1986). [7] G.C. Wick: Phys. Rev. 96, 1124 (1954). [8] A.D. 
Lahiff and I.R. Afnan: in preparation. [9] R.A. Arndt et al.: Phys. Rev. C52, 2120 (1995)
Using Resource-Rational Analysis to Understand Cognitive Biases in Interactive Data Visualizations Ryan Wesslen UNC Charlotte e-mail: [email protected]    Douglas Markant UNC Charlotte e-mail: [email protected]    Alireza Karduni UNC Charlotte e-mail: [email protected]    Wenwen Dou UNC Charlotte e-mail: [email protected] Abstract Cognitive biases are systematic errors in judgment. Researchers in data visualization have explored whether cognitive biases transfer to decision-making tasks with interactive data visualizations. At the same time, cognitive scientists have reinterpreted cognitive biases as the product of resource-rational strategies under finite time and computational costs. In this paper, we argue for the integration of resource-rational analysis, through constrained Bayesian cognitive modeling, to understand cognitive biases in data visualizations. The benefit would be a more realistic “bounded rationality” representation of data visualization users, along with a research roadmap for studying cognitive biases in data visualizations through a feedback loop between future experiments and theory. CCS: Human-centered computing / Visualization / Visualization theory, concepts and paradigms Introduction Recently, data visualization researchers have investigated whether cognitive biases transfer to decision-making with interactive visual interfaces and have explored strategies to mitigate them [41, 2, 4, 1, 42]. A large portion of this work involves collecting and analyzing empirical evidence on the effects of different cognitive biases through user experiments. These studies are generally motivated by the classical psychological approach to cognitive biases, i.e., the “heuristics and biases” framework [31] introduced by Tversky and Kahneman [32, 33].
While these studies provide great value to the visualization community in illuminating the effect of cognitive biases on visual analysis tasks, they do not include quantitative cognitive models that yield explicit, testable hypotheses predicting users’ behavior under different experimental conditions. Moreover, they tend to ignore critiques arguing that the heuristics which lead to biased judgments actually reflect the rational use of limited cognitive resources [8]. Adopting cognitive modeling for data visualization research can provide many opportunities to accelerate innovation, improve validity, and facilitate replication efforts [22]. Early efforts to understand cognitive processes during the use of data visualizations consider sensemaking approaches [9] or visual attention coupled with decision-making [23]. However, a drawback of past Vis cognitive frameworks [9, 25, 23] is that they are typically descriptive or “process” diagrams. As such, they lack the level of detail necessary to make detailed quantitative predictions or to generate strong hypotheses about behavior [7]. This complicates efforts to predict when cognitive biases will impact how people interpret and make decisions from visualizations. One area of opportunity is Bayesian cognitive modeling for data visualizations [43, 14, 12]. These models rest on the claim that people reason under uncertainty in accordance with the principles of Bayesian inference [10]. This approach is appealing because it provides a normative framework for how people should reason and make decisions from information under uncertainty. In practice, however, people may behave in ways that are inconsistent with the predictions of Bayesian models, often due to well-known limitations in cognitive capacity, including constraints on time, which are common in many visualization studies and tasks. Existing applications of Bayesian cognitive modeling to information visualization have not yet acknowledged such limitations, i.e., bounded rationality [29, 30].
Specifically, we would like to note gaps and disconnects among current efforts to study the effect of cognitive biases in interactive data visualizations. A promising approach from cognitive science is the resource-rational analysis of cognitive biases as a way to understand rational trade-offs between judgment accuracy and the mind’s limited resources, including fixed time [10, 17, 16]. In this paper, we argue that resource-rational analysis can account for many cognitive biases in data visualizations while providing a quantitative theoretical framework, or “research roadmap,” that enables a feedback loop to add realism through further constraints. Such a roadmap may not only better identify the effects of cognitive biases in data visualization decision-making but may also provide a means for mitigating these biases before they occur. 1 Cognitive Bias in Interactive Data Visualizations Cognitive biases are systematic errors (or deviations) in judgment [33, 16]. They have been studied by cognitive psychologists and social scientists to understand how and why individuals sometimes make consistent errors in decision-making. Recently, data visualization researchers have explored whether cognitive biases transfer to data visualization decision-making [41, 4, 36] and, if such biases can be identified, how these findings could inform the design of visualization systems that debias or mitigate such effects [26, 5, 2, 24]. If a well-designed system can help users find the right explore-exploit mix [11], ideally it would safeguard against possible forking-path problems [27, 44], mitigate systematic errors, and enable better decision-making. Data visualization research on cognitive biases tends to focus on either empirical studies or frameworks, with little interaction between them.
Empirical studies try to demonstrate evidence of traditional cognitive biases through data visualization user studies, typically by analyzing users’ interaction behaviors or decisions [3, 2, 1, 42, 35, 13]. Alternatively, general descriptive frameworks (such as taxonomies) have been introduced for cognitive biases [4, 41]; however, these tend to broadly cover many human biases [36, 40] and are limited in their ability to provide testable predictions for empirical studies. Cognitive science has a long history of studying visualization cognition as a subset of visuospatial reasoning, i.e., how individuals derive meaning from visual (external) representations [34]. Typically, these models focus on perception and/or prior knowledge [23]. More recently, data visualization researchers have integrated similar ideas to understand visualization cognitive processes through insight-based approaches [9] and top-down modeling [19, 25]. However, past visualization cognitive models tend to be based on verbal “process” diagrams and are not quantitative models that yield explicit, testable hypotheses. Without such quantitative predictions, the implications of the models can be vague, difficult to simulate, and even more difficult to test and refine. Bayesian cognitive modeling is a promising approach to studying cognitive biases [43]. Building on work in cognitive science, Wu et al. first argued that Bayesian cognitive modeling provides a means to model many irrational behaviors, like cognitive biases, in a “principled” way. Building on their work, Kim et al. [14] and Karduni et al. [12] have provided further extensions of Bayesian cognitive modeling for data visualizations. In particular, by eliciting each user’s prior belief about an uncertain relationship, these studies have used Bayesian models to predict how people should update those beliefs in response to data visualizations.
Although these two studies provide novel elicitation methods with Bayesian cognitive models in data visualization, they do not directly connect such approaches with experiment designs intended to identify cognitive biases. Moreover, they do not incorporate realistic constraints on users (e.g., time or memory limits) in their modeling or experiments. This is where resource-rational analysis may remedy these shortcomings. 2 Resource-Rational Analysis Classical approaches to understanding rationality [45, 21] assume individuals use utility theory [37] to maximize their expected utility. Simon [29, 30] challenged this notion with bounded rationality, the idea that rational decisions must be framed in the context of the environment and one’s limited cognitive resources. Whereas normative rational models exist at Marr’s computational level [20] (i.e., the structure of the problem), bounded rationality connects Marr’s computational level and the algorithmic level (e.g., representation and transformation), as human cognition involves making approximations to a normative rational model [10, 15]. The problem is that studying each level separately is insufficient to explain the underlying mechanisms of human intelligence [15]. To address this problem, Lieder and Griffiths [17, 15, 16] introduced resource-rational analysis: rational models that bridge the idealized, unbounded computational level and a more realistic, highly resource-constrained algorithmic level. As an iterative process, a rational model can be modified over time to move closer to a realistic model of individuals’ true cognitive resources and processes. Figure 1 outlines the five steps in resource-rational analysis. Like other rational theories, resource-rational theory posits that there exists some optimal solution yielded by the rules of expected utility theory, Bayesian inference, and standard rules of logic (Step 1 in Fig. 1).
However, bounded rationality limits the space of feasible decisions that are possible given the cognitive constraints, which leads to approximate models of rationality (Step 2). Resource rationality is then the optimal algorithm under these constraints (Step 3), which in turn yields testable predictions (Step 4). In this way, resource-rational analysis reinterprets cognitive biases as an optimal (rational) trade-off between external task demands and internal cognitive constraints (e.g., the cost of an error in judgment vs. the time cost of reducing this error) [18]. This interpretation reconciles the heuristics-and-biases program with Gigerenzer’s criticism that heuristic decision making is rational rather than irrational [8]. 2.1 Example: Anchoring Bias One popular cognitive bias that has been studied in multiple visualization experiments is anchoring bias [1, 42, 39, 35]. Anchoring bias is the tendency for an initial piece of information, relevant or not, to affect a decision-making process [33]. Typically, the anchor is followed by an adjustment in response to new information which falls short of the normative judgment (the anchoring-and-adjustment effect). The anchoring-and-adjustment account posits a two-step process [6]. In the first step, a person develops her estimate, or anchor, for an open-ended question. In the second step, the person adjusts her estimate as new information is processed. Error occurs when she fails to make a sufficient adjustment toward the correct answer. Lieder and Griffiths [17] examined anchoring bias through the lens of resource-rational analysis. Following Figure 1, they formulate the problem through Bayesian decision theory for numerical estimation, the classical task associated with anchoring-and-adjustment [33, 6]. They assume that the mind approximates Bayesian inference through sampling algorithms, which represent probabilistic beliefs through a small number of randomly selected hypotheses drawn in proportion to their actual prevalence [38].
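To make the sampling idea concrete, here is a minimal toy illustration (our own sketch under simplifying assumptions, not the actual model of the cited work): estimation as a short sampling chain on a Gaussian "posterior," started at the anchor. Fewer adjustment steps, corresponding to a higher time cost, leave the average estimate closer to the anchor, i.e., more anchoring bias.

```python
import numpy as np

def estimate(anchor, target, n_steps, rng, prop_sd=1.0):
    """One estimate: a Metropolis chain on a N(target, 1) 'posterior',
    started at the anchor and run for only n_steps adjustments."""
    logp = lambda x: -0.5 * (x - target) ** 2
    x = anchor
    for _ in range(n_steps):
        prop = x + rng.normal(0.0, prop_sd)
        if np.log(rng.random()) < logp(prop) - logp(x):
            x = prop  # accept the adjustment
    return x

rng = np.random.default_rng(0)
anchor, target = 0.0, 10.0
bias = lambda steps: abs(np.mean(
    [estimate(anchor, target, steps, rng) for _ in range(200)]) - target)

# Fewer adjustments (higher time cost) -> estimates stay near the anchor.
b_short, b_long = bias(2), bias(50)
assert b_short > b_long
```

The anchor value, target, and step counts here are arbitrary; the point is only the qualitative prediction that truncating the adjustment process produces anchor-biased estimates.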
More specifically, they posit that sampling occurs through Markov Chain Monte Carlo (MCMC), a popular algorithm in statistics and artificial intelligence. The advantage of this approach is that it provides testable predictions that can be examined empirically through controlled experimentation (Step 4 in Fig. 1). The model predicts that scenarios with high time costs and no error costs result in the highest degree of anchoring bias, as participants incur a much higher cost for each adjustment but have less concern for accuracy (or error). In such situations, participants make zero adjustments and favor their anchor (provided or self-generated) because time costs are critical, and therefore show more bias (absolute distance). To test this model, Lieder et al. [18] developed an empirical experiment on MTurk for estimating bus arrival times under four different scenarios. They find strong evidence for resource-rational adjustment, as the degree of anchoring bias varied with the different time and error costs. Moreover, they find that incentives can be effective at reducing anchoring bias even with self-generated and provided anchors, contrary to Epley and Gilovich [6]. 3 Future Work and Conclusion Resource-rational analysis could be beneficial in data visualization studies in which users face meaningful cost-benefit trade-offs in interpreting the visualization; in other words, experiments where additional effort leads to more accurate decisions from the data. This would especially be the case for systems in which sampling occurs over time, either by directly sampling information from a display or by sampling alternative states/outcomes in the user’s mental model. In the context of visualizing hurricane paths [28], users might at first overweight the risks of salient negative outcomes (e.g., a direct hit on New Orleans), but with more time (or a different type of visualization?)
arrive at a better calibrated estimate of the risk. References [1] I. Cho, R. Wesslen, A. Karduni, S. Santhanam, S. Shaikh, and W. Dou. The anchoring effect in decision-making with visual analytics. In IEEE Conference on Visual Analytics Science and Technology (VAST), 2017. [2] E. Dimara, G. Bailly, A. Bezerianos, and S. Franconeri. Mitigating the attraction effect with visualizations. IEEE transactions on visualization and computer graphics, 2018. [3] E. Dimara, A. Bezerianos, and P. Dragicevic. The attraction effect in information visualization. IEEE transactions on visualization and computer graphics, 23(1), 2017. [4] E. Dimara, S. Franconeri, C. Plaisant, A. Bezerianos, and P. Dragicevic. A task-based taxonomy of cognitive biases for information visualization. IEEE Transactions on Visualization and Computer Graphics, 2018. [5] G. Ellis. So, what are cognitive biases? In Cognitive Biases in Visualizations, pp. 1–10. Springer, 2018. [6] N. Epley and T. Gilovich. The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological science, 17(4):311–318, 2006. [7] S. Farrell and S. Lewandowsky. Computational Models as Aids to Better Reasoning in Psychology. Current Directions in Psychological Science, 19(5):329–335, Oct. 2010. doi: 10 . 1177/0963721410386677 [8] G. Gigerenzer and W. Gaissmaier. Heuristic decision making. Annual review of psychology, 62:451–482, 2011. [9] T. M. Green, W. Ribarsky, and B. Fisher. Building and applying a human cognition model for visual analytics. Information visualization, 8(1):1–13, 2009. [10] T. L. Griffiths, F. Lieder, and N. D. Goodman. Rational use of cognitive resources: Levels of analysis between the computational and the algorithmic. Topics in cognitive science, 7(2):217–229, 2015. [11] T. T. Hills, P. M. Todd, D. Lazer, A. D. Redish, I. D. Couzin, C. S. R. Group, et al. Exploration versus exploitation in space, mind, and society. Trends in cognitive sciences, 19(1):46–54, 2015. [12] A. Karduni, D. 
Markant, R. Wesslen, and W. Dou. A bayesian cognition approach for belief updating of correlation judgments through uncertainty visualizations. In IEEE Conference on Information Visualization (InfoVis), 2020. [13] A. Karduni, R. Wesslen, S. Santhanam, I. Cho, S. Volkova, D. Arendt, S. Shaikh, and W. Dou. Can you verifi this? studying uncertainty and decision-making about misinformation in visual analytics. The 12th International AAAI Conference on Web and Social Media (ICWSM), 2018. [14] Y.-S. Kim, L. A. Walls, P. Krafft, and J. Hullman. A bayesian cognition approach to improve data visualization. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, pp. 1–14, 2019. [15] F. Lieder and T. L. Griffiths. Resource-rational analysis: understanding human cognition as the optimal use of limited. Psychological Science, 2(6):396–408, 2018. [16] F. Lieder and T. L. Griffiths. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43, 2020. [17] F. Lieder, T. L. Griffiths, Q. J. Huys, and N. D. Goodman. The anchoring bias reflects rational use of cognitive resources. Psychonomic bulletin & review, 25(1):322–349, 2018. [18] F. Lieder, T. L. Griffiths, Q. J. Huys, and N. D. Goodman. Empirical evidence for resource-rational anchoring and adjustment. Psychonomic Bulletin & Review, 25(2):775–784, 2018. [19] Z. Liu and J. Stasko. Mental models, visual reasoning and interaction in information visualization: A top-down perspective. IEEE Transactions on Visualization & Computer Graphics, (6):999–1008, 2010. [20] D. Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. WH Freeman, 1982. [21] R. Nozick. The nature of rationality. Princeton University Press, 1994. [22] L. M. Padilla. A case for cognitive models in visualization research: Position paper.
In 2018 IEEE Evaluation and Beyond-Methodological Approaches for Visualization (BELIV), pp. 69–77. IEEE, 2018. [23] L. M. Padilla, S. H. Creem-Regehr, M. Hegarty, and J. K. Stefanucci. Decision making with visualizations: a cognitive framework across disciplines. Cognitive research: principles and implications, 3(1):29, 2018. [24] P. Parsons. Promoting representational fluency for cognitive bias mitigation in information visualization. In Cognitive Biases in Visualizations, pp. 137–147. Springer, 2018. [25] R. E. Patterson, L. M. Blaha, G. G. Grinstein, K. K. Liggett, D. E. Kaveney, K. C. Sheldon, P. R. Havig, and J. A. Moore. A human cognition framework for information visualization. Computers & Graphics, 42:42–58, 2014. [26] M. Pohl, L.-C. Winter, C. Pallaris, S. Attfield, and B. W. Wong. Sensemaking and cognitive bias mitigation in visual analytics. In Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 323–323. IEEE, 2014. [27] X. Pu and M. Kay. The garden of forking paths in visualization: A design space for reliable exploratory visual analytics. 2018. [28] I. T. Ruginski, A. P. Boone, L. M. Padilla, L. Liu, N. Heydari, H. S. Kramer, M. Hegarty, W. B. Thompson, D. H. House, and S. H. Creem-Regehr. Non-expert interpretations of hurricane forecast uncertainty visualizations. Spatial Cognition & Computation, 16(2):154–172, 2016. [29] H. A. Simon. A behavioral model of rational choice. The quarterly journal of economics, 69(1):99–118, 1955. [30] H. A. Simon. Rational choice and the structure of the environment. Psychological review, 63(2):129, 1956. [31] D. Streeb, M. Chen, and D. A. Keim. The biases of thinking fast and thinking slow. In Cognitive Biases in Visualizations, pp. 97–107. Springer, 2018. [32] A. Tversky and D. Kahneman. Availability: A heuristic for judging frequency and probability. Cognitive psychology, 5(2):207–232, 1973. [33] A. Tversky and D. Kahneman. Judgment under uncertainty: Heuristics and biases. 
science, 185(4157):1124–1131, 1974. [34] B. Tversky. Visuospatial reasoning. The Cambridge handbook of thinking and reasoning, pp. 209–240, 2005. [35] A. C. Valdez, M. Ziefle, and M. Sedlmair. Priming and anchoring effects in visualization. IEEE transactions on visualization and computer graphics, 24(1):584–594, 2018. [36] A. C. Valdez, M. Ziefle, and M. Sedlmair. Studying biases in visualization research: Framework and methods. In Cognitive Biases in Visualizations, pp. 13–27. Springer, 2018. [37] J. von Neumann, O. Morgenstern, H. W. Kuhn, and A. Rubinstein. Theory of Games and Economic Behavior (60th Anniversary Commemorative Edition). Princeton University Press, 1944. [38] E. Vul. Sampling in human cognition. PhD thesis, Massachusetts Institute of Technology, 2010. [39] E. Wall, L. Blaha, C. Paul, and A. Endert. A formative study of interactive bias metrics in visual analytics using anchoring bias. In IFIP Conference on Human-Computer Interaction, pp. 555–575. Springer, 2019. [40] E. Wall, L. Blaha, C. L. Paul, K. Cook, and A. Endert. Four perspectives on human bias in visual analytics. In DECISIVe: Workshop on Dealing with Cognitive Biases in Visualizations, 2017. [41] E. Wall, L. M. Blaha, L. Franklin, and A. Endert. Warning, bias may occur: A proposed approach to detecting cognitive bias in interactive visual analytics. In IEEE Conference on Visual Analytics Science and Technology (VAST), 2017. [42] R. Wesslen, S. Santhanam, A. Karduni, I. Cho, S. Shaikh, and W. Dou. Investigating effects of visual anchors on decision-making about misinformation. In Computer Graphics Forum, vol. 38, pp. 161–171, 2019. [43] Y. Wu, L. Xu, R. Chang, and E. Wu. Towards a bayesian model of data visualization cognition, 2017. [44] E. Zgraggen, Z. Zhao, R. Zeleznik, and T. Kraska. Investigating the effect of the multiple comparisons problem in visual analysis. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, p. 479. ACM, 2018. [45] M. Zouboulakis. 
The varieties of economic rationality: From Adam Smith to contemporary behavioural and evolutionary economics. Routledge, 2014.
Role of loop entropy in the force induced melting of DNA hairpin

Garima Mishra${}^{1}$, Debaprasad Giri${}^{2}$, M. S. Li${}^{3}$ and Sanjay Kumar${}^{1}$
${}^{1}$Department of Physics, Banaras Hindu University, Varanasi 221 005, India
${}^{2}$Department of Applied Physics, IT, Banaras Hindu University, Varanasi 221 005, India
${}^{3}$Institute of Physics, Polish Academy of Sciences, Al. Lotnikow 32/46, 02-668 Warsaw, Poland
(November 25, 2020)

Abstract

The dynamics of a single-stranded DNA that can form a hairpin has been studied in the constant force ensemble. Using Langevin dynamics simulations, we obtained the force-temperature diagram, which differs from the theoretical prediction based on the lattice model. Probability analysis of the extreme bases of the stem revealed that at high temperature the hairpin-to-coil transition is entropy dominated and the loop contributes significantly to its opening. At low temperature, however, the transition is force driven and the hairpin opens from the stem side. It is shown that the elastic energy plays a crucial role at high force. As a result, the phase diagram differs significantly from the theoretical prediction.

pacs: 87.15.A-, 64.70.qd, 05.90.+m, 82.37.Rs

I Introduction

In recent years, considerable experimental, theoretical and numerical efforts have been made to understand the dynamics and kinetics of DNA/RNA hairpins wallace ; bonnet ; Goddard ; zhang ; kim ; nivon ; lin ; kenward ; errami . This is because the hairpin participates in many biological functions, e.g. replication, transcription, recombination, protein recognition and gene regulation, and is central to understanding the secondary structure of RNA molecules Zazopoulos ; Froelich ; Trinh ; Tang . Moreover, it has been used as a tool in the form of the molecular beacon, which provides increased specificity of target recognition in DNA and RNA bonnet ; Goddard ; tan ; broude ; ysli ; drake .
The hairpin is made up of a single-stranded DNA (ssDNA)/RNA, which carries sequences of bases in its two terminal regions that are complementary to each other. When base pairing between these two remote sequences is formed, it gives rise to the hairpin structure, which consists of two segments: a stem, comprising a short segment of DNA helix, and a loop of single strand carrying the bases that are not paired (Fig. 1). Structures of the hairpin are not static, as they fluctuate among many different conformations. Broadly speaking, all of these conformations may be classified into two states: (i) the open state and (ii) the closed state (Fig. 1). The open state has high entropy because of the large number of conformations accessible to the ssDNA, whereas the closed state is a low-entropy state, where enthalpy is involved in the base pairing of the stem. It was shown that the closed-to-open transition requires a sufficiently large amount of energy to unzip all of the base pairs of the stem, whereas the open-to-closed transition involves a lower energy barrier bonnet ; Goddard ; Chen ; asain . The thermodynamics and kinetics of the hairpin are well studied bonnet ; Goddard . The melting temperature ($T_{m}$) of the hairpin, at which the stem is denatured and the hairpin behaves like a single polymer chain, has been measured by fluorescence correlation spectroscopy (FCS) and is in good agreement with theory bonnet ; Goddard ; Chen . It was also found that the rate of closing strongly depends on the properties of the hairpin loop, such as its length and rigidity, whereas the rate of opening remains relatively insensitive to these properties bonnet ; Goddard ; Chen . Single-molecule force-measuring techniques can manipulate biomolecules (DNA, proteins etc.) by applying a force on the pico-newton (pN) scale Bockelmann ; Smith ; ubm ; kumarphys ; woodside . These techniques, such as the atomic force microscope, optical tweezers, magnetic tweezers, etc.,
have enhanced our understanding of the structural and functional properties of biomolecules and shed light on the molecular forces involved in their stability Bockelmann ; Smith ; ubm ; kumarphys . In the case of the DNA hairpin, if the applied force is close to a critical value, the hairpin fluctuates between the closed and open states. Therefore, efforts were mainly made to understand the kinetics of the hairpin in the presence of an applied force woodside ; hanne ; mossa . These studies suggest that the average dissociation force increases logarithmically with the pulling speed mossa . To gain a better understanding of the underlying biological processes, theoretical works montanari ; zhou1 ; zhou2 ; Hugel focused on simple models, which are either analytically solvable or for which an accurate solution is possible through extensive numerical simulations. Theoretical analysis of the elasticity of a polymer chain with hairpins as secondary structures montanari reproduces the experimental force-extension curve measured on ssDNA chains whose nucleotide bases are arranged in a relatively random order. The force induced transition in the hairpin is found to be of second order and characterized by a gradual decrease in the number of base pairs as the external force increases. Zhou et al. zhou1 ; zhou2 studied the secondary structure formation of ssDNA (or RNA) both analytically and by Monte Carlo simulations. They showed that the force induced transition is continuous from hairpin-I (small base stacking interaction) to the coil, while it is first order for hairpin-II (large base stacking interaction). Hugel et al. Hugel studied three different chains, namely ssDNA, poly(vinylamine) and a peptide, at very high force ($\sim$ 2 nN). At such a high force, conformational entropy does not play a significant role; therefore, zero-temperature ab initio calculations have been applied to compare with the experimental results.
In many biological reactions, a slight change in temperature causes a large change in the reaction coordinates danilow1 . Therefore, the efforts of single-molecule force spectroscopy (SMFS) experiments have now shifted to studying the effect of temperature while keeping the applied force constant. In this respect, Danilowicz et al. have measured the critical force required for the unzipping of double-stranded DNA (dsDNA) at varying temperature and determined the force-temperature phase diagram prentiss . In another work danilow1 , it was shown that the elastic properties of ssDNA that can form a hairpin have a significant temperature dependence. It was found that at low force the extension increases with temperature, whereas at high force it decreases with temperature. It was argued that the increase in the extension is the result of the disruption of hairpins. However, there is no clear understanding of the decrease in the extension with temperature at high force. Moreover, the experimental force-temperature diagram of the DNA hairpin remains elusive in the literature. As pointed out above, the effect of loop length, sequence, the nature of the transition, and the force-extension curve of the DNA/RNA hairpin are well studied at room temperature. In one of the earlier studies bonnet , a fluorophore and a quencher were attached to the two ends of the stem with the assumption that the hairpin opens from the stem-end side Tinoco . However, if the applied force is constant and the temperature varies, the loop entropy may play a crucial role, whose effect, to the best of our knowledge, has not been addressed so far. In this paper, we focus mainly on two issues: (i) whether the force-temperature diagram of the DNA hairpin differs significantly from that of dsDNA (Refs. km ; bhat99 ; nsingh ; maren2k2 ; rkp_prl yielded qualitatively similar diagrams), and (ii) the precise effect of the loop on the opening of the DNA hairpin.
In order to study such issues, we introduce a polymer model with suitable constraints to describe the dsDNA and the DNA hairpin, and performed Langevin dynamics (LD) simulations Allen ; Smith ; janke to obtain the thermodynamic observables, which are discussed in Section II. In Section III, we validate the model, which describes some of the essential properties of DNA melting. In this section, we also show that the force induced melting of the hairpin differs significantly from that of the dsDNA. Section IV deals with the semi-microscopic mechanism involved in the opening of the hairpin. Here, we discuss various properties of the loop, e.g. entropy, length and stiffness, and their consequences for the opening. We also revisit lattice models in this section and discuss a possible reason for the discrepancies in the force-temperature diagram. Finally, in Section V, we summarize our results and discuss future perspectives.

II Model and Method

The typical time scale involved in hairpin fluctuations varies from $ns$ to $\mu s$; therefore, an all-atom simulation of longer base sequences is not computationally amenable Hugel ; pm . In view of this, we adopt a minimal off-lattice coarse-grained model of DNA, where a bead represents a few bases associated with the sugar and phosphate groups. In order to study the consequences of the loop entropy on the force-temperature diagram, we have considered a ssDNA which can form either a zipped conformation with no bubble/loop (Fig. 2a) or a hairpin (Fig. 2b) in the chain, depending on the base sequence. For the zipped conformation, the first half of the chain (say, made up of adenine) interacts with the complementary other half of the chain (made up of thymine). The ground-state conformations resemble the zipped state of the dsDNA with no loop (Fig. 2a). However, if the first few beads of the chain are complementary to the last beads and the remaining nucleotides are non-complementary, we have the possibility of the formation of a hairpin (Fig.
2b), consisting of a stem and a loop, at low temperature. In the high-force limit, the ssDNA acquires the conformation of the stretched state (Fig. 2c). Further refinement of the model requires the inclusion of the excluded volume effect and proper base pairing. In addition, the elastic energy and the loop entropy, which play a crucial role in the high-force and high-temperature regimes of the phase diagram, should also be incorporated in the description of the model. The energy of the model system is defined as Li

$$E=\sum_{i=1}^{N-1}k(d_{i,i+1}-d_{0})^{2}+\sum_{i=1}^{N-2}\sum_{j>i+1}^{N}4\left(\frac{B}{d_{i,j}^{12}}-\frac{A_{ij}}{d_{i,j}^{6}}\right)+k_{\theta}(\theta-\theta_{0})^{2},$$ (1)

where $N$ is the number of beads. The distance between beads, $d_{ij}$, is defined as $|\vec{r}_{i}-\vec{r}_{j}|$, where $\vec{r}_{i}$ and $\vec{r}_{j}$ denote the positions of beads $i$ and $j$, respectively. The harmonic term with spring constant $k$ couples the adjacent beads along the chain. We fixed $k=100$, because the strength of a covalent bond is almost 100 times that of hydrogen bonding hbond . The second term corresponds to the Lennard-Jones (LJ) potential. The first term of the LJ potential takes care of the "excluded volume effect", where we set $B=1$. It should be noted that hydrogen bonding is directional in nature bloom and only one pairing is possible between two complementary bases. However, the model developed here is for a polymer, where a bead can interact with all the neighboring beads. In order to model DNA, one can assign the base pairing interaction $A_{ij}=1$ for the native contacts (Table 1) and 0 for the non-native ones Li , which corresponds to the Go model go ; pablo . By 'native', we mean that the first base interacts with the $N^{th}$ (last) base only, the second base interacts with the $(N-1)^{th}$ base, and so on, as shown in Fig. 2(a & b). This ensures that two complementary bases can form at most one base pair.
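The energy function of Eq. (1) is straightforward to evaluate numerically. The sketch below (Python) assumes a concrete representation — an $N\times 3$ coordinate array, a native-contact matrix built from the first-with-last pairing rule, and an explicit list of loop beads; the function and parameter names are illustrative, and the bending reference angle is left as a parameter since Eq. (1) only specifies it as $\theta_{0}$.

```python
import numpy as np

def native_contacts(N):
    """Native base pairing of the Go model: bead 1 pairs with bead N,
    bead 2 with bead N-1, and so on (A_ij = 1 for native, 0 otherwise)."""
    A = np.zeros((N, N))
    for i in range(N // 2):
        A[i, N - 1 - i] = A[N - 1 - i, i] = 1.0
    return A

def hairpin_energy(r, A, k=100.0, d0=1.12, B=1.0,
                   k_theta=0.0, theta0=0.0, loop=()):
    """Total energy of Eq. (1): harmonic bonds + Lennard-Jones pairs
    + bending in the loop.  r is an (N, 3) array of bead positions;
    `loop` lists the interior beads whose bond angles are penalized
    (an assumption about how the loop is indexed)."""
    N = len(r)
    # harmonic bonds between adjacent beads
    d_adj = np.linalg.norm(r[1:] - r[:-1], axis=1)
    E = k * np.sum((d_adj - d0) ** 2)
    # LJ term: repulsion B for every non-adjacent pair, attraction
    # A_ij only for native contacts
    for i in range(N - 2):
        for j in range(i + 2, N):
            d = np.linalg.norm(r[i] - r[j])
            E += 4.0 * (B / d ** 12 - A[i, j] / d ** 6)
    # bending energy on consecutive bonds in the loop only
    for i in loop:
        b1, b2 = r[i] - r[i - 1], r[i + 1] - r[i]
        c = np.dot(b1, b2) / (np.linalg.norm(b1) * np.linalg.norm(b2))
        E += k_theta * (np.arccos(np.clip(c, -1.0, 1.0)) - theta0) ** 2
    return E
```

For a dimer placed at the equilibrium bond length $d_{0}$ every term vanishes, which gives a quick sanity check of the implementation.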
The remaining term of Eq. (1) is the bending energy term, which is assigned to successive bonds in the loop only. Here, $k_{\theta}$ is the bending constant, $\theta$ is the angle between two consecutive bonds, and $\theta_{0}$ is its equilibrium value. In our subsequent analysis, we will consider two cases, namely $k_{\theta}=0$, which corresponds to a loop made up of thymine, and $k_{\theta}\neq 0$, which corresponds to adenine Goddard ; ke ; gk . The Go model go ; pablo is built on the assumption that the "energy" of each conformation is proportional to the number of native contacts it possesses, and that non-native contacts incur no energetic cost. By construction, the native state is the lowest-energy conformation of the zipped state of the DNA or of the DNA hairpin. Since the Go model exhibits a large energy gap between the closed and open states and folds rapidly to its ground state, it saves computational time. It should be noted here that the model does not include the energetics of slipped and partially mismatched conformations. A more sophisticated model dg , which includes the directionality of bases and takes care of proper base pairing (non-native interactions), gives rise to the existence of intermediate states in the form of slipped and partially mismatched states. However, for both models, it was shown that the thermodynamic observables are almost the same for DNA unzipping dg . In view of this, we prefer the native base pairing interaction in the present model. The parameter $d_{0}(=1.12)$ corresponds to the equilibrium distance of the harmonic potential, which is close to the equilibrium position of the average LJ potential. In the Hamiltonian (Eq. (1)), we use dimensionless distances and energy parameters. The major advantage of this model is that the ground-state conformation is known. Therefore, equilibration is not an issue here if one wants to study the dynamics steered by force at low $T$ Li .
We obtained the dynamics by using the following Langevin equation Allen ; Smith ; Li ; MSLi_BJ07

$$m\frac{d^{2}r}{dt^{2}}=-{\zeta}\frac{dr}{dt}+F_{c}+\Gamma$$ (2)

where $m$ and $\zeta$ are the mass of a bead and the friction coefficient, respectively. Here, $F_{c}$ is defined as $-\frac{dE}{dr}$, and the random force $\Gamma$ is a white noise Smith , i.e. $\langle\Gamma(t)\Gamma(t^{\prime})\rangle=2\zeta T\delta(t-t^{\prime})$. The choice of this dynamics keeps $T$ constant throughout the simulation for a given $f$. The equation of motion is integrated using the $6^{th}$-order predictor-corrector algorithm with a time step $\delta t=0.025$ Smith . We add an energy $-\vec{f}\cdot\vec{x}$ to the total energy of the system given by Eq. (1). We calculate the thermodynamic quantities after equilibration, using the native state as the starting configuration. Equilibration has been checked by calculating the auto-correlation function of an observable $q$, which is defined as box ; berg

$$\hat{S}_{q}(t)=\frac{\langle q(0)q(t)\rangle-\langle q(0)\rangle^{2}}{\langle q^{2}(0)\rangle-\langle q(0)\rangle^{2}}.$$ (3)

Here, $q$ can be the square of the end-to-end distance or the square of the radius of gyration. The asymptotic behavior of $\hat{S}_{q}(t)$ for large $t$ is

$$\hat{S}_{q}(t)\sim\exp\left(-\frac{t}{\tau_{exp}}\right)$$ (4)

where $\tau_{exp}$ is the so-called (exponential) auto-correlation time. In general, equilibration can be achieved after $2\tau_{exp}$ box ; berg . In our simulation, we have chosen an equilibration time ten times longer than $\tau_{exp}$. The data have been sampled over four times the equilibration time. We have used $2\times 10^{9}$ time steps, of which the first $5\times 10^{8}$ steps have not been taken into the averaging. The results are averaged over many trajectories, which are almost the same within the standard deviation.
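Equations (3)-(4) translate directly into a short estimator. The sketch below (Python, with illustrative names) computes $\hat{S}_{q}(t)$ from a single time series and extracts $\tau_{exp}$ from a linear fit to $\ln\hat{S}_{q}(t)$; the maximum lag and the fit window are our choices, not prescribed by the text.

```python
import numpy as np

def autocorr(q, max_lag=None):
    """Normalized auto-correlation function S_q(t) of Eq. (3),
    estimated from a single equilibrium time series q."""
    q = np.asarray(q, dtype=float)
    dq = q - q.mean()
    n = len(q)
    if max_lag is None:
        max_lag = min(n // 10, 1000)  # truncation is our choice
    c = np.array([np.dot(dq[: n - t], dq[t:]) / (n - t)
                  for t in range(max_lag)])
    return c / c[0]

def tau_exp(q, window=None):
    """Exponential auto-correlation time from a linear fit of
    ln S_q(t) ~ -t / tau_exp (Eq. (4))."""
    s = autocorr(q)
    if window is None:
        # fit only while S_q(t) is comfortably positive
        window = int(np.argmax(s < np.exp(-2.0))) or len(s) // 2
    t = np.arange(1, window)
    slope = np.polyfit(t, np.log(s[1:window]), 1)[0]
    return -1.0 / slope
```

Feeding it a series with a known correlation time (e.g. an AR(1) process with coefficient $a$, for which $\tau_{exp}=-1/\ln a$) recovers that value within sampling error, which is a convenient check before applying it to simulation observables.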
We also notice that at low $T$ it is difficult to achieve equilibrium, and the applied force probes the local minima instead of the global minimum.

III Results

III.1 Equilibrium properties of dsDNA and DNA hairpin at $f=0$

Before we discuss the underlying physics behind the force-induced transition, we would first like to validate our model based on Eq. (1) and show that it includes some of the essential features of DNA and that the melting shows two-state behavior. The reaction coordinate, i.e. the absolute value of the end-to-end distance ($|x|$) of the DNA hairpin and the dsDNA, has been plotted as a function of temperature in Fig. 3. At low temperature, the thermal energy is too small to overcome the binding energy, and the DNA hairpin (dsDNA) remains in the closed (zipped) state, with the end-to-end distance remaining quite small. At high temperature, the thermal energy is quite large compared to the binding energy, and the chain acquires the conformations corresponding to the swollen (open) state, where $|x|$ scales with $N^{\nu}$. Here, $\nu$ is the end-to-end distance exponent and its value is given by $\frac{3}{d+2}$ degennes . The variation of $\frac{|x|}{N^{\nu}}$ can be fitted by a sigmoidal distribution kenward , with melting temperatures $T_{m}=0.21$ and $0.23$ for the DNA hairpin and the dsDNA, respectively. The melting temperature can also be obtained by monitoring the energy fluctuation ($\Delta E$) or the specific heat ($C$) with temperature, which are given by the following relations Li :

$$\langle\Delta E\rangle=\langle E^{2}\rangle-\langle E\rangle^{2}$$ (5)

$$C=\frac{\langle\Delta E\rangle}{T^{2}}.$$ (6)

The peak in the specific heat curve gives the melting temperature, which matches the one obtained from the sigmoidal distribution of the end-to-end distance. This shows that for a small system the transition is well described by the two-state model.
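Given equilibrium energy samples collected at a set of temperatures, Eqs. (5)-(6) give the specific heat, whose peak locates $T_{m}$. A minimal sketch (Python; the function names and the per-temperature organization of the samples are illustrative):

```python
import numpy as np

def specific_heat(E_samples, T):
    """Energy fluctuation <Delta E> (Eq. (5)) and specific heat C
    (Eq. (6)) from equilibrium energy samples at temperature T."""
    E = np.asarray(E_samples, dtype=float)
    dE = np.mean(E ** 2) - np.mean(E) ** 2
    return dE, dE / T ** 2

def melting_temperature(temps, energy_runs):
    """Locate T_m as the peak of C(T); energy_runs[i] holds the energy
    samples collected at temps[i]."""
    C = [specific_heat(E, T)[1] for E, T in zip(energy_runs, temps)]
    return temps[int(np.argmax(C))]
```

As a check, for a two-level system with gap $\Delta$ the peak of $C$ sits near $T\approx 0.42\,\Delta$ (the Schottky anomaly), which this estimator reproduces on synthetic Boltzmann-distributed samples.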
III.2 Force induced melting

Many biological reactions involve a large conformational change in the mechanical reaction coordinate, i.e. $|x|$ or the number of native contacts, that can be used to follow the progress of the reaction busta . As discussed above, the two-state model describes these processes quite effectively in the absence of force. The applied force "tilts" the free energy surface along the reaction coordinate by an amount linearly dependent on the reaction coordinate busta ; kumarphys . One of the notable aspects of force experiments on single biomolecules is that $|x|$ is directly measurable or controlled by the instrumentation; therefore, $|x|$ becomes a natural reaction coordinate for describing mechanical unzipping. The critical unzipping force, which is a measure of the stability of the DNA hairpin (dsDNA) compared to the open state, has been determined using Eqs. (5) and (6) as a function of temperature. The phase boundaries in the force-temperature diagrams (Fig. 4a & b) separate the regions where the DNA hairpin (dsDNA) exists in a closed (zipped) state from the region where it exists in the open state. It is evident from the plots (Fig. 4a & b) that the melting temperature decreases with the applied force, in accordance with earlier studies bhat99 ; nsingh ; maren2k2 ; rkp_prl . The peak positions of $\Delta E$ and $C$ coincide, and the peak height increases with the chain length ($26$, $32$ and $42$, where the ratio of stem to loop has been kept constant). However, the transition (melting) temperature remains almost the same over the entire range of $f$. Since experiments on hairpins are mostly confined to small chains wallace ; bonnet ; Goddard ; woodside , here we shall confine ourselves to a chain of $32$ beads to study the phase diagram and related issues.
For the sake of comparison, in Figs. 4c and 4d we also show the phase diagrams of a DNA hairpin (stem length = 3, loop length = 12 and bond length = 2) and a dsDNA of 9 base pairs (bps) on the cubic lattice, obtained using the exact enumeration technique km . The force-temperature diagrams were found to be in qualitative agreement with earlier theoretical predictions bhat99 ; maren2k2 ; rkp_prl ; gk . Despite the simplicity of the lattice model, it provides a deeper insight into the mechanism involved in force induced transitions kumarphys . A self-attracting self-avoiding walk (SASAW) on an appropriate lattice (here the cubic lattice) may be used to model a DNA hairpin (stem and loop) and dsDNA (zipped). The base pairing interaction ($\epsilon=-1$) is assigned when two complementary nucleotides are nearest neighbors. The nearest-neighbor interaction mimics the short-range nature of the hydrogen bonds, qualitatively similar to the model adopted here. It is pertinent to mention here that for small base pair sequences ($<100$), the differential melting curve of the dsDNA shows a single peak Wartel , indicating a sudden unbinding of the two strands. Since for short dsDNA the entropy contributions of spontaneous bubbles are not very important, as loops are rare, the transition is well described by the two-state model Wartel . One would expect that the force-temperature diagram of the short dsDNA presented here should then match the two-state model. The free energy of the unzipped chain ($g_{u}$) under the applied force can be obtained from the freely jointed chain (FJC) model strick

$$g_{u}^{FJC}=\frac{l}{b}k_{b}T\ln\left[\frac{k_{b}T}{fb}\sinh\left(\frac{fb}{k_{b}T}\right)\right],$$ (7)

which can be compared with the bound-state energy ($g_{b}$) of the dsDNA to get the force-temperature diagram, shown in Fig. 4b. Here, $b$ and $l$ are the Kuhn length and the bond length of the chain, respectively.
In reduced units for the dsDNA, one can notice a nice agreement between the off-lattice simulation presented here and the two-state model (Eq. 7) over a wide range of $f$ and $T$. However, at low $T$ and high $f$, the phase boundary deviates from the FJC model. One should recall that the first term of the Hamiltonian defined in Eq. (1) may induce the possibility of a stretched state at high force, where the bond length may exceed its equilibrium value. In the FJC model, in fact, the bond length is taken as a constant. It should be emphasized here that the modified freely jointed chain (mFJC) model does include the possibility of the stretching of bonds in its description prentiss ; Dessinges . Therefore, if this deviation is because of the elastic energy, then a two-state model based on the mFJC should show a behavior similar to that seen in the simulation. The free energy of the unbound state of the mFJC may be obtained by including a $\frac{l}{2b}\frac{(f{\ell})^{2}}{k_{b}T}$ term in Eq. (7) prentiss ; Dessinges , where $\ell$ is the increase in the bond length. The corresponding force-temperature phase diagram in reduced units, shown in Fig. 4b, is in excellent agreement with the simulation over the entire range of $f$ and $T$. The off-lattice simulation performed here clearly shows the effect of the loop of the hairpin on the melting profile. The phase diagram of the hairpin (Fig. 4a) differs significantly from that of the dsDNA (Fig. 4b), and deviates considerably from the corresponding lattice model of the hairpin (Fig. 4c). Two major differences can be noticed from the figures: (i) a change in the slope at intermediate values of force for the hairpin (Fig.
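The two-state phase boundary can be traced numerically by solving $g_{u}(f_{c},T)=g_{b}$ for $f_{c}$ at each $T$. The sketch below (Python, reduced units) implements Eq. (7) and an mFJC variant; treating the bound-state free energy as a constant $\epsilon$ and the bond extension as $\ell=f/\kappa$ for a harmonic bond of stiffness $\kappa$ are our assumptions for illustration, not details given in the text.

```python
import math

def g_fjc(f, T, l=1.0, b=1.0, kb=1.0):
    """Free energy per monomer of the unzipped FJC strand, Eq. (7)."""
    x = f * b / (kb * T)
    return (l / b) * kb * T * math.log(math.sinh(x) / x)

def g_mfjc(f, T, kappa=100.0, l=1.0, b=1.0, kb=1.0):
    """mFJC: Eq. (7) plus the bond-stretching term (l/2b)(f*ell)^2/(kb*T),
    with ell = f/kappa assumed for a harmonic bond."""
    ell = f / kappa
    return g_fjc(f, T, l, b, kb) + (l / (2 * b)) * (f * ell) ** 2 / (kb * T)

def critical_force(T, eps=1.0, g=g_fjc, f_lo=1e-6, f_hi=50.0, tol=1e-8):
    """Bisection for the phase boundary f_c(T): solve g(f_c, T) = eps,
    using the fact that g increases monotonically with f."""
    while f_hi - f_lo > tol:
        f_mid = 0.5 * (f_lo + f_hi)
        if g(f_mid, T) < eps:
            f_lo = f_mid
        else:
            f_hi = f_mid
    return 0.5 * (f_lo + f_hi)
```

Sweeping `critical_force` over a grid of temperatures, once with `g_fjc` and once with `g_mfjc`, reproduces the kind of FJC-versus-mFJC comparison shown in Fig. 4b; the stretching correction grows as $f^{2}$, so the two boundaries differ only in the high-force corner of the diagram.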
4a), which is absent in the case of the dsDNA (no loop) (Fig. 4b) as well as in the previous theoretical models bhat99 ; nsingh ; maren2k2 ; rkp_prl ; (ii) an abrupt increase in the force for both the hairpin and the dsDNA at low temperature (Figs. 4a and 4b), whereas the lattice model and other theoretical studies bhat99 ; nsingh ; maren2k2 ; rkp_prl show re-entrance for both the hairpin and the dsDNA (Figs. 4c and 4d). To further our understanding, we have monitored the reaction coordinate ($|x|$) of the hairpin with temperature at different values of $f$ (Fig. 5). At low $f$, the transition is weak due to the finite width of the melting profile, which decreases with $f$. At high $f$, however, it shows a strong first-order characteristic. At high force, the non-monotonic behavior of the extension with temperature, as observed in a recent experiment danilow1 , is also evident from Fig. 5. As $T$ increases, the chain acquires the stretched state (Fig. 2c) because of the applied force. A further rise in $T$ results in an increase in the contribution of the configurational entropy of the chain. The applied force is then not enough to hold the stretched conformation, and thus the extension falls km ; kumar_prl . At intermediate $f$ $(\sim 0.19)$, where the change in the slope is observed (Fig. 4a), we find that the entropic ($T\,dS$) and force ($f\,dx$) contributions to the free energy are nearly the same. At constant temperature, by varying $f$, the system attains the extended states from the closed state of the hairpin. With a further rise of $f$, one finds the stretched state, i.e. the extension approaches the contour length of the ssDNA. At high $T$, when $f$ increases, the hairpin goes to the extended state. At the same $T$, if we decrease the applied force, the system retraces the path to the closed state without any hysteresis. Because of the high entropy, it is possible that monomers of the two segments of the strand come close to each other and re-zipping takes place (Fig. 7a).
As we decrease $T$ below $0.15$, by increasing $f$ the hairpin acquires the stretched state. Interestingly, it now does not retrace the path if $f$ is decreased at that $T$ (Fig. 7b). This is a clear signature of hysteresis. It arises because the contribution of entropy is not sufficient to bring the two ends close to each other so that re-zipping can take place. Hysteresis has been measured recently in the unzipping and re-zipping of DNA Hatch . It was found that the area of the hysteresis loop increases with decreasing temperature, which is also consistent with our findings (Fig. 6b).

IV Effect of loop on the opening of hairpin

The thermodynamics of DNA melting is well studied using the following equation woodson :

$$\Delta G=\Delta H-T\Delta S_{stem}-T\Delta S_{loop},$$ (8)

where $G$, $H$, $S_{stem}$ and $S_{loop}$ are the free energy, the enthalpy, and the entropies associated with the stem and the loop, respectively. To determine the entropic force of the loop, we fix the stem length and vary the loop length from 0 to N/2 at fixed $T$. Since the contribution of the first two terms of Eq. (8) remains constant when the stem length is kept fixed, the decrease in the applied force is the result of the loop entropy. In Fig. 7, we plot the force as a function of loop length at two different temperatures. At high temperature, the loop entropy reduces the applied force (Fig. 7a), and hence there is a decrease in the force, in accordance with a recent experiment woodside . However, at low temperature, the applied force remains constant (Fig. 7b), indicating that the loop entropy does not play any role in the opening of the hairpin, which is mainly force driven. This is because at low temperature the applied force probes the local minima and hence remains independent of the global minimum. However, the re-zipping force does depend on the loop length. In order to substantiate our findings, we tracked the first ($1-32$, stem-end) and the last ($10-23$, loop-end) base pairs of the stem (Fig.
2b) in different regimes of the $f-T$ diagram. In Fig. 8, we show the probability of opening ($P_{o}$) of the stem-end and loop-end base pairs with $f$ at a given $T$. Near the phase boundary, at high $T$, the transition from the hairpin to the coil state is entropy driven, as discussed above. This is apparent from Fig. 8a, where $P_{o}$ for the loop-end is higher than that of the stem-end. This indicates that in the opening of the hairpin the loop contributes significantly compared to the applied force. At intermediate $T\;(\sim 0.12)$, $P_{o}$ of the stem-end and the loop-end are comparable, reflecting that the hairpin can open from either side (Fig. 8b). However, at low $T$, the dominant contribution to the opening of the hairpin comes from the stem-end side and the transition is force driven (Fig. 8c). It remains an open question why the exact solution of the lattice models of the hairpin km could not exhibit this feature. This prompted us to revisit the lattice model studied in Ref. km . We calculated $P_{o}$ of the stem-end and the loop-end for the lattice model of the hairpin. At high $T$, the dominant contribution to the opening of the hairpin comes from the stem side. This should not be taken as a surprise, since lattice models do not take the loop entropy properly into account in their description. In fact, the discrete nature of the lattice (coordination number) reduces the loop entropy. The model studied here is an off-lattice one, where the loop entropy has been properly taken into account, assuming that ssDNA is well described by the FJC model Smith ; Dessinges . Therefore, this behavior occurs in our model system. If the change in the slope (Fig. 4a) is because of the loop entropy (underestimated in the lattice models), then the change in the slope should also vanish even in the off-lattice model, if the loop entropy is somehow reduced. This can be achieved by putting $k_{\theta}\neq 0$ in the loop [Eq. (1)].
We repeated the simulation with the bending constant $k_{\theta}$ between 10 and 20, values that have been used in earlier simulations kouza ; Noy ; alex . Because of this term, the change in the slope seen at intermediate values of $f$ vanishes (Fig. 4a). The most striking observation is that the probability distribution analysis shows that the opening of the hairpin proceeds from the stem-end side even at high temperature. V Conclusions In conclusion, we have studied a simple model of a polymer, which can form a dsDNA or a DNA hairpin depending on the interaction matrix given in Table 1. The model includes the possibility of the stretched state in its description. Since the simulation is performed off-lattice, the loop entropy has been properly taken into account in the case of the hairpin. The $f-T$ diagram of the hairpin differs significantly from that of the dsDNA as well as from previous studies km ; bhat99 ; nsingh ; maren2k2 ; rkp_prl . We show that the loop entropy plays an important role, which has been underestimated in previous studies. This conclusion is based on the probability analysis of the extreme bases of the stem (loop-end & stem-end), which reveals the semi-microscopic mechanism involved in the force-induced melting. Our results provide support for the existence of the apparent change in the slope in the phase diagram. In the intermediate range of $f$ and $T$, a chain can open from both sides, reflecting that the entropic and force contributions are nearly the same, which may be verified experimentally. At high $T$, the transition is entropy dominated and the loop contributes to the opening of the hairpin (Fig. 6a). At low $T$, it is force driven and the hairpin opens from the stem-end side (Fig. 6b). It should be noted that, in the description of the lattice model, the critical force for the DNA hairpin near $T=0$ differs significantly from that of the dsDNA (Fig.
4c & d), whereas in the simulation the unzipping forces for both cases were found to be nearly the same (Fig. 4a $\&$ b). This should not be taken as a surprise, because in the exact enumeration one has complete information about the partition function, and hence the global minimum can be obtained to calculate the critical force at low $T$. In the present simulation, however, we follow the experiments and report the dynamical force required to unzip the chain in Figs. 4 a $\&$ b and Fig. 7 Hatch ; PNAS , which probes the local minima at low $T$. This force may differ from the critical force at low $T$. For example, for the 50000-base $\lambda-$DNA, the critical force was found to be 15.5 pN, to overcome an energy barrier of $\sim 3000K_{B}T$. Full opening of the molecule never happens in the experiment, as the time required to overcome the barrier is quite large. Thus a force (17 pN) larger than the critical force is required to open a finite fraction EPJE_cocco of the chain. It is important to mention here that a change in the phase boundary has also been observed in the force-induced unzipping of $\lambda$-phage dsDNA consisting of approximately 1500 base pairs. For such a long chain, the differential melting curve shows several peaks corresponding to the partial melting of the dsDNA, which forms spontaneous bubbles in the chain Wartel . This suggests that for a longer chain, the formation of spontaneous bubbles should be incorporated in the model. Using the Peyrard-Bishop (PBD) model Peyrard , Voulgarakis et al. Voulgarakis probed the mechanical unzipping of dsDNA with a bubble. One of the important results they derived is that the single strand, as it extends between the dsDNA segments, causes a decrease in the measured force, but the $f-T$ diagram remains significantly different from the experiment. Recall that in the PBD model a bead can move only in one dimension; hence the loop entropy is also underestimated there, which may be the reason for the difference.
Therefore, at this stage a simulation of longer base sequences is required, which should include spontaneous bubbles in its description with the proper entropy, to settle this issue. Another interesting finding of our study is the absence of re-entrance at low temperature. The basic assumption behind the theoretical models or phenomenological arguments is that the bond length in the model system is constant. However, recent experiments prentiss ; ke suggest that there is a net increase in the bond length at high force. Consequently, the model studied here incorporates the simplest form of the elastic energy (i.e. a Gaussian spring). We have kept the spring constant quite large in our simulation to model the covalent bond. The experimental force-temperature diagram of $\lambda-$phage DNA prentiss shows similar behavior at high force. The observed decrease in the slope at low temperature in the experiment is attributed to a thermally induced change in the dsDNA conformation prentiss . However, the present simulation (DNA hairpin and dsDNA), along with the two-state model based on the mFJC, suggests that such a deviation may be due to the elastic energy. In earlier studies, either this energy has not been included in the models Peyrard ; Voulgarakis ; kafri or the bond length has been taken as a constant bhat99 ; nsingh ; maren2k2 ; km , thus precluding the existence of the stretched state with the increased bond length in both cases (dsDNA and hairpin). Consequently, these models could not describe the abrupt increase in the force at low $T$. In future theoretical studies, one may use the finite extensible nonlinear elastic (FENE) potential FENE1 and the proper loop entropy, e.g. the one used in Ref. bundschuh , to gain further understanding of the force-temperature diagram of a longer dsDNA. VI Acknowledgments We would like to thank S. M. Bhattacharjee, M. Prentiss, D. Mukamel, D. Marenduzzo and Y. Kafri for their comments and suggestions.
Financial support from the CSIR, DST, India, the Ministry of Science and Informatics in Poland (grant No. 202-204-234), and generous computer support from MPIPKS, Dresden, Germany are acknowledged. References (1) M. I. Wallace et al. PNAS 98, 5584 (2001). (2) G. Bonnet et al. PNAS 95, 8602 (1998). (3) N. L. Goddard et al. Phys. Rev. Lett. 85, 2400 (2000). (4) W. B. Zhang, S. J. Chen, PNAS 99, 1931 (2002). (5) J. Kim, S. Doose, H. Neuweiler, M. Sauer, Nucl. Acid Res. 34, 2516 (2006). (6) L. G. Nivon, E. I. Shakhnovich, J. Mol. Biol. 344, 29 (2004). (7) M. M. Lin, L. Meinhold, D. Shorokhov, A. H. Zewail, Phys. Chem. Chem. Phys. 10, 4227 (2008). (8) M. Kenward, K. D. Dorfman, J. Chem. Phys. 130, 095101 (2009). (9) J. Errami, M. Peyrard, N. Theodorakopoulos, European Phys. J. 23, 397 (2007). (10) E. Zazopoulos et al. Nature (London) 390, 311 (1997). (11) S. Froelich-Ammon et al. J. Biol. Chem. 269, 7719 (1994). (12) T. Q. Trinh and R. R. Sinden, Genetics 134, 409 (1993). (13) J. Tang et al. Nucleic Acids Res. 21, 2729 (1993). (14) W. H. Tan, K. M. Wang, T. J. Drake, Curr. Opin. in Chem. Biol. 8, 547 (2004). (15) N. E. Broude, Trends in Biotechnology 20, 249 (2002). (16) Y. S. Li, X. Y. Zhou, D. Y. Ye, Biochem. and Biophys. Res. Comm. 373, 457 (2008). (17) T. J. Drake, W. H. Tan, Appl. Spectroscopy 58, 269A (2004). (18) Y. J. Sheng et al., Macromolecules 35, 9624 (2002); Y. J. Sheng, P. H. Hsu, J. Z. Y. Chen et al., Macromolecules 37, 9257 (2004). (19) A. Sain et al., Phys. Rev. E 69, 061913 (2004); Y. J. Sheng, H. J. Lin, J. Z. Y. Chen, H. K. Tsao, J. Chem. Phys. 118, 8513 (2003). (20) U. Bockelmann et al., Phys. Rev. Lett. 79, 4489 (1997). (21) S. B. Smith et al., Science 271, 795 (1996). (22) U. Bockelmann, Curr. Opin. Struc. Biol. 14, 368 (2004). (23) S. Kumar, M. S. Li, Phys. Rep. 486, 1 (2010). (24) M. T. Woodside et al., PNAS 103, 6190 (2006). (25) J. Hanne et al., Phys. Rev. E 76, 011909 (2007). (26) A. Mossa et al., J. Stat. Mech., P0206 (2009). (27) A.
Montanari and M. Mezard, Phys. Rev. Lett. 86, 2178 (2001). (28) H. Zhou, Y. Zhang, Z.-C. Ou-Yang, Phys. Rev. Lett. 86, 356 (2001). (29) Y. Zhang, H. Zhou, Z.-C. Ou-Yang, Biophys. J. 81, 1133 (2001). (30) T. Hugel, M. Rief, M. Seitz, H. E. Gaub, R. R. Netz, Phys. Rev. Lett. 94, 48301 (2005). (31) C. Danilowicz et al., Phys. Rev. E 75, 030902(R) (2007). (32) C. Danilowicz et al., Phys. Rev. Lett. 93, 078101 (2004). (33) J. Ignacio Tinoco, Annu. Rev. Biophys. Biomol. Struct. 33, 363 (2004). (34) S. Kumar, G. Mishra, Phys. Rev. E 78, 011907 (2008). (35) S. M. Bhattacharjee, J. Phys. A 33, L423 (2000). (36) N. Singh and Y. Singh, Euro. Phys. J. E 17, 7 (2005); ibid., Phys. Rev. E 64, 042901 (2001). (37) D. Marenduzzo et al., Phys. Rev. Lett. 88, 028102 (2001). (38) R. Kapri, S. M. Bhattacharjee and F. Seno, Phys. Rev. Lett. 93, 248102 (2004). (39) M. P. Allen, D. J. Tildesley, Computer Simulations of Liquids (Oxford Science, Oxford, UK, 1987). (40) D. Frenkel, B. Smit, Understanding Molecular Simulation (Academic Press, UK, 2002). (41) J. Schluttig, M. Bachmann, W. Janke, J. Comput. Chem. 29, 2603 (2008). (42) M. Santosh and P. K. Maiti, J. Phys.: Condens. Matter 21, 034113 (2009). (43) M. S. Li, M. Cieplak, Phys. Rev. E 59, 970 (1999). (44) A. Morriss-Andrews, J. Rottler, and S. S. Plotkin, J. Chem. Phys. 132, 035105 (2010); E. J. Sambriski, V. Ortiz and J. J. de Pablo, J. Phys.: Condens. Matter 21, 034105 (2009). (45) I. Rouzina and V. A. Bloomfield, Biophys. J. 80, 894 (2001). (46) N. Go and H. Abe, Biopolymers 20, 991 (1981). (47) E. J. Sambriski, V. Ortiz and J. J. de Pablo, J. Phys.: Cond. Mat. 21, 034105 (2009). (48) C. Ke et al. Phys. Rev. Lett. 99, 018302 (2007). (49) S. Kumar, D. Giri and S. M. Bhattacharjee, Phys. Rev. E 71, 051804 (2005). (50) D. Giri and S. Kumar, Phys. Rev. E 73, 050903(R) (2006). (51) M. S. Li, Biophys. J. 93, 2644 (2007). (52) G. E. P. Box and G. M.
Jenkins, Time Series Analysis: Forecasting and Control, p. 28, Holden-Day, San Francisco (1970); Y. S. Lee and T. Ree, Bull. Korean Chem. Soc. 3, 44 (1982). (53) B. A. Berg, in Markov Chain Monte Carlo: Innovations and Applications, Eds. W. S. Kendall, F. Liang and J.-S. Wang, p. 1, World Scientific, Singapore (2005). (54) P. G. de Gennes, Scaling Concepts in Polymer Physics (Cornell University Press, Ithaca, NY, 1979). (55) C. Bustamante et al., Annu. Rev. Biochem. 73, 705 (2004). (56) R. Wartell, A. Benight, Phys. Rep. 126, 67 (1985). (57) T. R. Strick et al., Rep. Prog. Phys. 66, 1 (2003). (58) M. N. Dessinges et al., Phys. Rev. Lett. 89, 248102 (2002). (59) S. Kumar et al., Phys. Rev. Lett. 98, 128101 (2007). (60) K. Hatch et al., Phys. Rev. E 75, 051908 (2007). (61) S. V. Kuznetsov et al., Nucl. Acid Res. 36, 098 (2007). (62) M. Kouza, C.-K. Hu and M. S. Li, J. Chem. Phys. 128, 045103 (2008). (63) A. Noy et al., J. Mol. Biol. 343, 627 (2004). (64) A. Morriss-Andrews, J. Rottler, and S. S. Plotkin, J. Chem. Phys. 132, 035105 (2010). (65) C. Danilowicz et al., PNAS 100, 1694 (2003). (66) S. Cocco, J. F. Marko, and R. Monasson, Eur. Phys. J. E 10, 153 (2003). (67) T. Dauxois, Phys. Rev. E 47, 684 (1993). (68) N. K. Voulgarakis et al., Phys. Rev. Lett. 96, 248101 (2006). (69) Y. Kafri, D. Mukamel, and L. Peliti, Europhys. J. B 27, 135 (2002). (70) A. Peterlin, Polymer 2, 257 (1961). (71) R. Bundschuh and U. Gerland, Phys. Rev. Lett. 95, 208104 (2005).
KEK-TH-484 KEK Preprint 97-3 OU-HET 261 TU-519 April 1997 $b\rightarrow s\,\ell\overline{\ell}$ process and multi-Higgs doublet model Yasuhiro Okada${}^{1}$, Yasuhiro Shimizu${}^{2}$, and Minoru Tanaka${}^{3}$ ${}^{1,2}$Theory Group, KEK, Tsukuba, Ibaraki, 305 Japan ${}^{2}$Department of Physics, Tohoku University, Sendai 980-77, Japan ${}^{3}$Department of Physics, Osaka University, Toyonaka, Osaka 560, Japan Rare $b$ decay processes are analyzed in the multi-Higgs doublet model. Taking account of the constraint from the $b\rightarrow s\,\gamma$ process, the branching ratio and the forward-backward asymmetry of the final leptons for the $b\rightarrow s\,\ell^{+}\ell^{-}$ process are calculated. It is shown that the branching ratio can be a few times larger than the standard model prediction and the asymmetry can be significantly different from that in the standard model. Combining these observable quantities, it is possible to determine the complex coupling constants associated with the charged Higgs mixing matrix. The CP violating charge asymmetry in the $b\rightarrow s\,\ell^{+}\ell^{-}$ process and the branching ratio of the $b\rightarrow s\,\nu\overline{\nu}$ process are also calculated. In the search for new physics beyond the Standard Model (SM), flavor changing neutral current (FCNC) processes can play an important role. Since the only source of flavor changing processes in the SM lies in the Cabibbo-Kobayashi-Maskawa (CKM) matrix, it is possible to make precise predictions for various observable quantities in FCNC processes within the SM. An important example is the branching ratio of the $b\rightarrow s\,\gamma$ process, which was reported to be ${\rm Br}(b\rightarrow s\,\gamma)=(2.32\pm 0.57\pm 0.35)\times 10^{-4}$ by the CLEO experiment [1]. Using the observed top quark mass, the measured branching ratio is consistent with the SM prediction.
Therefore this process gives a very strong constraint on models beyond the SM such as the two-Higgs-doublet model [2] and supersymmetric extensions of the SM [3]. In addition to the $b\rightarrow s\,\gamma$ process, the $b\rightarrow s\,\ell^{+}\ell^{-}$ and $b\rightarrow s\,\nu\overline{\nu}$ processes can provide important constraints on new physics. Current experimental bounds for these processes are only one order of magnitude above the SM predictions [4, 5]. In this letter we consider possible constraints on the parameters of the multi-Higgs doublet model from these rare $b$ decay processes. When the number of Higgs doublets is three or more, we can introduce complex phases in the charged Higgs mixing matrix. In this sense the multi-Higgs doublet model is a natural extension of the SM which involves a new source of CP violation. Present phenomenological constraints on the coupling constants in this model are considered, for example, in Ref. [6]. Remarkably, it is pointed out in Ref. [7] that the strongest constraint on the imaginary part of the coupling constant comes from the $b\rightarrow s\,\gamma$ process, not from CP violating quantities such as the neutron electric dipole moment. Here we study the $b\rightarrow s\,\ell^{+}\ell^{-}$ and $b\rightarrow s\,\nu\bar{\nu}$ processes in the same model. We find that, within the present experimental constraints including the $b\rightarrow s\,\gamma$ process, the branching ratio can be a few times larger than the SM prediction and the forward-backward asymmetry of the final leptons in the $b\rightarrow s\,\ell^{+}\ell^{-}$ process can differ significantly from the SM. Combining information from the branching ratio and the forward-backward asymmetry, it is possible to determine the complex coupling constants in future experiments.
We also calculate the CP violating charge asymmetry between the $b\rightarrow s\,\ell^{+}\ell^{-}$ and ${\overline{b}}\rightarrow{\overline{s}}\,\ell^{+}\ell^{-}$ decays, which turns out to be a few percent in the allowed region. We consider the multi-Higgs doublet model with the following Yukawa couplings, $${\cal L}={\overline{q}}_{L}\,y_{d}\,d_{R}\,H_{d}+{\overline{q}}_{L}\,y_{u}\,u_{R}\,H_{u}+{\overline{\ell}}_{L}\,y_{\ell}\,e_{R}\,H_{\ell}+h.c.,$$ (1) where $y_{d},y_{u},y_{\ell}$ are $3\times 3$ matrices. We have assumed that up-type quarks, down-type quarks and leptons have Yukawa couplings with only one Higgs doublet each, $H_{u},H_{d},H_{\ell}$ respectively. In this way we avoid FCNC effects at the tree level [8]. The couplings of the physical charged Higgs bosons with fermions are then given by $$\displaystyle{\cal L}=(2\sqrt{2}G_{F})^{1/2}\sum_{i=1}^{n-1}(X_{i}\,{\overline{u}}_{L}\,V\,M_{D}\,d_{R}\,H_{i}^{+}+Y_{i}\,{\overline{u}}_{R}\,M_{U}\,V\,d_{L}\,H_{i}^{+}+Z_{i}\,{\overline{\nu}}_{L}\,M_{E}\,e_{R}\,H_{i}^{+}),$$ (2) where $n$ is the number of Higgs doublets, $H_{i}^{+}(i=1\sim n-1)$ represent the mass eigenstates of the charged Higgs bosons and $V$ is the CKM matrix. New CP violating complex phases arise in the charged Higgs mixing matrix if there are three or more Higgs doublets. In such cases the coupling constants $X_{i}$, $Y_{i}$, $Z_{i}$ are in general complex numbers. There are several relations among $X_{i}$, $Y_{i}$, $Z_{i}$ following from the unitarity of the mixing matrix [6]. The charged Higgs interactions in Eq.(2) can induce extra contributions to FCNC processes in this model. Here, we are interested in FCNC processes involving the $b$ quark.
Inclusive branching ratios of $b\rightarrow s\,\gamma$, $b\rightarrow s\,\ell^{+}\ell^{-}$ and $b\rightarrow s\,\nu\bar{\nu}$ are calculated through the following weak effective Hamiltonian at the bottom scale, as described in [9, 10]: $$\displaystyle{\cal H}=-\frac{4G_{F}}{\sqrt{2}}V_{tb}V_{ts}^{\ast}\sum_{i=1}^{10}C_{i}O_{i}.$$ (3) The relevant operators at the bottom scale are $$\displaystyle O_{7}=\frac{e}{16\pi^{2}}\,m_{b}\,{\overline{s}}_{L}\sigma_{\mu\nu}b_{R}F^{\mu\nu},$$ (4) $$\displaystyle O_{8}=\frac{g_{s}}{16\pi^{2}}\,m_{b}\,{\overline{s}}_{L}T^{a}\sigma_{\mu\nu}b_{R}G^{a\mu\nu},$$ (5) $$\displaystyle O_{9}=\frac{e^{2}}{16\pi^{2}}{\overline{s}}_{L}\gamma^{\mu}b_{L}\,{\overline{\ell}}\gamma_{\mu}\ell,$$ (6) $$\displaystyle O_{10}=\frac{e^{2}}{16\pi^{2}}{\overline{s}}_{L}\gamma^{\mu}b_{L}\,{\overline{\ell}}\gamma_{\mu}\gamma^{5}\ell,$$ (7) $$\displaystyle O_{11}=\frac{e^{2}}{16\pi^{2}\sin^{2}\theta_{W}}{\overline{s}}_{L}\gamma^{\mu}b_{L}\,\sum_{i=e,\mu,\tau}{\overline{\nu}_{i}}\gamma_{\mu}(1-\gamma^{5})\nu_{i},$$ (8) where $e$ and $g_{s}$ are the QED and strong coupling constants respectively, and $\theta_{W}$ is the weak mixing angle. Using the renormalization group equations of QCD, the Wilson coefficients $C_{i}$ at the bottom scale are related to those at the weak scale. New physics effects enter through the Wilson coefficients at the weak scale.
From the charged Higgs interactions these coefficients receive the following new contributions, $$\displaystyle C_{7}^{H}=-\frac{1}{2}\sum_{i=1}^{n-1}\,x_{th_{i}}\left[|Y_{i}|^{2}\left(\frac{2}{3}F_{1}(x_{th_{i}})+F_{2}(x_{th_{i}})\right)+X_{i}Y_{i}^{\ast}\left(\frac{2}{3}F_{3}(x_{th_{i}})+F_{4}(x_{th_{i}})\right)\right],$$ (9) $$\displaystyle C_{8}^{H}=-\frac{1}{2}\sum_{i=1}^{n-1}\,x_{th_{i}}\left[|Y_{i}|^{2}F_{1}(x_{th_{i}})+X_{i}Y_{i}^{\ast}F_{3}(x_{th_{i}})\right],$$ (10) $$\displaystyle C_{9}^{H}=-D^{H}+\frac{1-4\sin^{2}\theta_{W}}{\sin^{2}\theta_{W}}C^{H},$$ (11) $$\displaystyle C_{10}^{H}=-\frac{C^{H}}{\sin^{2}\theta_{W}},$$ (12) $$\displaystyle C_{11}^{H}=-C^{H}.$$ (13) Here $C^{H}$ and $D^{H}$ are given by $$\displaystyle C^{H}=\sum_{i=1}^{n-1}\frac{1}{8}|Y_{i}|^{2}x_{th_{i}}x_{tW}\left(F_{3}(x_{th_{i}})+F_{4}(x_{th_{i}})\right),$$ (14) $$\displaystyle D^{H}=\sum_{i=1}^{n-1}x_{th_{i}}|Y_{i}|^{2}\left(\frac{2}{3}F_{5}(x_{th_{i}})-F_{6}(x_{th_{i}})\right),$$ (15) where $x_{tW}=m_{t}^{2}/m_{W}^{2}$ and $x_{th_{i}}=m_{t}^{2}/m_{H_{i}}^{2}$. The functions $F_{1}$-$F_{6}$ are defined as follows: $F_{1}(x)=\frac{1}{12(x-1)^{4}}(x^{3}-6x^{2}+3x+2+6x\ln x)$, $F_{2}(x)=\frac{1}{12(x-1)^{4}}(2x^{3}+3x^{2}-6x+1-6x^{2}\ln x)$, $F_{3}(x)=\frac{1}{2(x-1)^{3}}(x^{2}-4x+3+2\ln x)$, $F_{4}(x)=\frac{1}{2(x-1)^{3}}(x^{2}-1-2x\ln x)$, $F_{5}(x)=\frac{1}{36(x-1)^{4}}(7x^{3}-36x^{2}+45x-16+(18x-12)\ln x)$, $F_{6}(x)=\frac{1}{36(x-1)^{4}}(-11x^{3}+18x^{2}-9x+2+6x^{3}\ln x)$. The Wilson coefficients at the weak scale are then given by $$\displaystyle C_{i}(m_{W})=C_{i}^{SM}(m_{W})+C_{i}^{H}(m_{W}),$$ (16) where $C_{i}^{SM}$ is the contribution from the SM [9, 10].
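As a numerical cross-check of the loop functions above, the following sketch (our own illustration, not the authors' code; the values of $|Y|^{2}$ and $XY^{\ast}$ are arbitrary placeholders) implements $F_{1}$-$F_{6}$ and the single-light-Higgs contribution to $C_{7}^{H}$:

```python
import math

# Loop functions F1-F6 as defined in the text (x = m_t^2 / m_H^2, x != 1)
def F1(x): return (x**3 - 6*x**2 + 3*x + 2 + 6*x*math.log(x)) / (12*(x - 1)**4)
def F2(x): return (2*x**3 + 3*x**2 - 6*x + 1 - 6*x**2*math.log(x)) / (12*(x - 1)**4)
def F3(x): return (x**2 - 4*x + 3 + 2*math.log(x)) / (2*(x - 1)**3)
def F4(x): return (x**2 - 1 - 2*x*math.log(x)) / (2*(x - 1)**3)
def F5(x): return (7*x**3 - 36*x**2 + 45*x - 16 + (18*x - 12)*math.log(x)) / (36*(x - 1)**4)
def F6(x): return (-11*x**3 + 18*x**2 - 9*x + 2 + 6*x**3*math.log(x)) / (36*(x - 1)**4)

def C7H(x_th, Y2, XYstar):
    """Charged Higgs contribution to C7 keeping a single light Higgs
    (one term of the sum over i)."""
    return -0.5 * x_th * (Y2 * (2.0/3.0 * F1(x_th) + F2(x_th))
                          + XYstar * (2.0/3.0 * F3(x_th) + F4(x_th)))

# illustration: m_t = 175 GeV, m_H = 100 GeV; the couplings are placeholders
x_th = (175.0 / 100.0)**2
c7h = C7H(x_th, Y2=1.0, XYstar=1.0 + 0.0j)
```

A complex $XY^{\ast}$ directly makes $C_{7}^{H}$ complex, which is the origin of the CP violating effects discussed later.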
After taking into account the QCD corrections at the leading logarithmic order, the $C_{i}(m_{b})$ can be expressed as $$\displaystyle C_{7}(m_{b})=C_{7}(m_{W})\eta^{\frac{16}{23}}+C_{8}(m_{W})\frac{8}{3}(\eta^{\frac{14}{23}}-\eta^{\frac{16}{23}})+{\tilde{C}}_{7},$$ (17) $$\displaystyle C_{8}(m_{b})=C_{8}(m_{W})\eta^{\frac{14}{23}}+{\tilde{C}}_{8},$$ (18) $$\displaystyle C_{9}(m_{b})=C_{9}(m_{W})+{\tilde{C}}_{9},$$ (19) $$\displaystyle C_{10}(m_{b})=C_{10}(m_{W}),$$ (20) $$\displaystyle C_{11}(m_{b})=C_{11}(m_{W}),$$ (21) where $\eta=\alpha_{s}(m_{W})/\alpha_{s}(m_{b})$. ${\tilde{C}}_{7}\sim{\tilde{C}}_{9}$ are constants which depend on the QCD coupling constant. Detailed formulas are found in [10, 11]. Numerically these are given by ${\tilde{C}}_{7}=-0.17$, ${\tilde{C}}_{8}=-0.077$, ${\tilde{C}}_{9}=1.9$ for $\alpha_{s}(m_{Z})=0.12$. The $b\rightarrow s\,\gamma$ branching ratio is then given by $$\displaystyle{\rm Br}(b\rightarrow s\,\gamma)={\rm Br}(b\rightarrow c\,e\,\overline{\nu})\frac{6\alpha_{em}}{\pi g(m_{c}/m_{b})}\left|\frac{V_{ts}V_{tb}^{\ast}}{V_{cb}}\right|^{2}|C_{7}(m_{b})|^{2},$$ (22) where the phase space factor $g(z)$ is given by $g(z)=1-8z^{2}+8z^{6}-z^{8}-24z^{4}\ln z$.
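The phase-space factor and the leading-log evolution of $C_{7}$ can be checked numerically. The sketch below is an illustration using the values quoted in the text (e.g. ${\tilde{C}}_{7}=-0.17$); it is not the authors' code, and the example arguments are placeholders:

```python
import math

def g(z):
    """Phase-space factor g(z) in the b -> c e nu normalization."""
    return 1 - 8*z**2 + 8*z**6 - z**8 - 24*z**4*math.log(z)

def C7_at_mb(C7_mW, C8_mW, eta, C7_tilde=-0.17):
    """Leading-log evolution of C7 from m_W to m_b, Eq. (17)."""
    return (C7_mW * eta**(16.0/23.0)
            + C8_mW * (8.0/3.0) * (eta**(14.0/23.0) - eta**(16.0/23.0))
            + C7_tilde)

# with m_c = 1.5 GeV and m_b = 4.7 GeV the phase-space factor is about 0.48
suppression = g(1.5 / 4.7)
```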
For the $b\rightarrow s\,\ell^{+}\ell^{-}$ process, the differential branching ratio and the forward-backward asymmetry of the leptons in the lepton center-of-mass frame are given by $$\displaystyle\frac{d{\rm Br}(b\rightarrow s\,\ell^{+}\ell^{-})}{d\,{\hat{s}}}={\rm Br}(b\rightarrow c\,e{\overline{\nu}})\frac{\alpha_{em}^{2}}{4\pi}\left|\frac{V_{ts}V_{tb}^{\ast}}{V_{cb}}\right|^{2}\frac{1}{g(m_{c}/m_{b})}(1-\hat{s})^{2}\times\left[(|C_{9}+Y(\hat{s})|^{2}+|C_{10}|^{2})(1+2\hat{s})+\frac{4}{\hat{s}}|C_{7}|^{2}(2+\hat{s})^{2}+12\,\mbox{Re}\,C_{7}^{\ast}(C_{9}+Y(\hat{s}))\right],$$ (23) $$\displaystyle A_{FB}(\hat{s})=\frac{-3\,\mbox{Re}\,C_{10}^{\ast}[(C_{9}+Y(\hat{s}))\hat{s}+2C_{7}]}{(|C_{9}+Y(\hat{s})|^{2}+|C_{10}|^{2})(1+2\hat{s})+\frac{4}{\hat{s}}|C_{7}|^{2}(2+\hat{s})^{2}+12\,\mbox{Re}\,C_{7}^{\ast}(C_{9}+Y(\hat{s}))},$$ (24) where $\hat{s}=s/m_{b}^{2}=(p_{+}+p_{-})^{2}/m_{b}^{2}$ and $p_{+}(p_{-})$ is the four-momentum of $\ell^{+}(\ell^{-})$. $Y(\hat{s})$ represents the contribution from the charm quark loop at the $m_{b}$ scale; see Ref. [11] for details. Here we neglect the $J/\psi$ and $\psi^{\prime}$ resonance contributions. The branching ratio for $b\rightarrow s\,\nu\overline{\nu}$ is given by $$\displaystyle\sum_{i=e,\mu,\tau}{\rm Br}(b\rightarrow s\,\nu_{i}\overline{\nu}_{i})=3\,{\rm Br}(b\rightarrow c\,e\,\overline{\nu})\frac{\alpha^{2}}{4\pi^{2}\sin^{4}\theta_{W}}\left|\frac{V_{tb}V_{ts}^{\ast}}{V_{cb}}\right|^{2}\frac{1}{g(m_{c}/m_{b})}|C_{11}|^{2}.$$ (25) Using the above formulas, it is now straightforward to evaluate the branching ratios and the asymmetry numerically in the multi-Higgs doublet model. In the following, we assume that only one of the physical charged Higgs bosons is light and neglect the effects of the other physical charged Higgs bosons. Then we keep only one term in the summations in the expressions for the $C_{i}^{H}$.
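To illustrate the structure of Eq. (24), the following sketch evaluates $A_{FB}(\hat{s})$ with the charm-loop function $Y(\hat{s})$ set to zero and with representative, approximately SM-like Wilson coefficients; these numbers are our placeholders, not values quoted in the text:

```python
def afb(s_hat, C7, C9, C10, Y=0.0):
    """Forward-backward asymmetry of Eq. (24); the charm-loop term
    Y(s_hat) is neglected (set to 0) in this illustration."""
    c9 = C9 + Y
    num = -3.0 * (C10.conjugate() * (c9 * s_hat + 2.0 * C7)).real
    den = ((abs(c9)**2 + abs(C10)**2) * (1.0 + 2.0 * s_hat)
           + 4.0 / s_hat * abs(C7)**2 * (2.0 + s_hat)**2
           + 12.0 * (C7.conjugate() * c9).real)
    return num / den

# representative SM-like coefficients (approximate placeholders)
C7, C9, C10 = -0.31 + 0j, 4.2 + 0j, -4.6 + 0j
# for real coefficients the asymmetry crosses zero at s_hat = -2*C7/C9
a_low, a_high = afb(0.05, C7, C9, C10), afb(0.30, C7, C9, C10)
```

The location of the zero crossing is one of the features that distinguishes the charged Higgs scenario from the SM once $C_{7}$, $C_{9}$, $C_{10}$ are shifted.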
Dropping the index $i$ for the lightest charged Higgs boson, the relevant parameters are $|Y|^{2}$, $XY^{\ast}$ and the mass of the charged Higgs boson. It should be noted that the coefficients $C_{7}$ and $C_{8}$ depend on both $|Y|^{2}$ and $XY^{\ast}$, whereas $C_{9}$ and $C_{10}$ only contain $|Y|^{2}$. Since the $b\rightarrow s\,\gamma$ branching ratio is already observed and is consistent with the SM prediction, we use ${\rm Br}(b\rightarrow s\,\gamma)$ to solve for $\mbox{Im}XY^{\ast}$ in terms of $|Y|^{2}$ and $\mbox{Re}XY^{\ast}$ for each value of the charged Higgs mass. Other observable quantities for the $b\rightarrow s\,\ell^{+}\ell^{-}$ and $b\rightarrow s\,\nu\overline{\nu}$ processes can then be calculated as functions of two parameters. We should also take into account constraints from other processes. Besides the $b\rightarrow s\,\gamma$ process, the $B^{0}$-${\overline{B}}^{0}$ mixing and the $Z\rightarrow b\overline{b}$ process give the strongest constraints on the possible value of $|Y|^{2}$ [6]. The contribution to the $B^{0}$-${\overline{B}}^{0}$ mixing from the charged Higgs boson is expressed as $$\displaystyle M_{12}^{H}=\frac{G_{F}^{4}}{64\pi^{2}}m_{W}^{2}\eta_{B}(V_{td}V_{tb}^{\ast})^{2}\frac{4}{3}B_{B}f_{B}^{2}m_{B}x_{t}\left[y_{t}I_{1}(y_{t})|Y|^{4}+x_{t}(2I_{2}(x_{t},x_{H})-8I_{3}(x_{t},x_{H})|Y|^{2})\right],$$ (26) where $y_{t}=m_{t}^{2}/m_{H}^{2}$, $x_{H}=m_{H}^{2}/m_{W}^{2}$, and the functions $I_{1}$-$I_{3}$ are defined as follows.
$$\displaystyle I_{1}(x)=\frac{1+x}{(1-x)^{2}}+\frac{2x\ln x}{(1-x)^{3}},$$ (27) $$\displaystyle I_{2}(x,y)=\frac{x}{(x-y)(x-1)}+\frac{y^{2}\ln(y)}{(y-1)(x-y)^{2}}+\frac{x(-x-xy+2y)\ln(x)}{(1-x)^{2}(x-y)^{2}},$$ (28) $$\displaystyle I_{3}(x,y)=\frac{1}{(x-y)(x-1)}+\frac{y\ln{y}}{(y-1)(x-y)^{2}}+\frac{(-x^{2}+y)\ln(x)}{(1-x)^{2}(x-y)^{2}}.$$ (29) Here we have retained only the relevant terms, which are proportional to $|Y|^{4}$ and $|Y|^{2}$. Since this quantity depends on the CKM matrix element $V_{td}$, which is not well known, the constraint from $B^{0}$-${\overline{B}}^{0}$ mixing is not very strong. Using the constraint on $V_{td}$ from charmless $b$ decay ($0.005<|V_{td}|<0.012$) and taking into account uncertainties in $f_{B}$ and $B_{B}$ ($f_{B}\sqrt{B_{B}}=200\pm 40$ MeV), we can deduce $|Y|\lesssim 1.5\,(2.1)$ for $m_{H}=100\,(300)$ GeV. (The CP violating parameter in the $K^{0}$-${\overline{K}}^{0}$ mixing, $\epsilon_{K}$, also receives a contribution from the charged Higgs loop similar to that of the $B^{0}$-${\overline{B}}^{0}$ mixing. In this case, however, the relevant CKM matrix element is different and the constraint on $|Y|$ from $\epsilon_{K}$ alone is not strong. If we combine the $B^{0}$-${\overline{B}}^{0}$ mixing and $\epsilon_{K}$ constraints we can exclude a slightly larger parameter space, but the allowed region of $|Y|$ is numerically almost the same as above.) Here and in the following we fix the top quark mass at 175 GeV. For the $Z\rightarrow b\overline{b}$ process we calculated the charged Higgs contribution to $R_{b}=\Gamma(Z\rightarrow b\overline{b})/\Gamma(Z\rightarrow{\rm hadrons})$ following Ref. [12]. We define the deviation from the SM contribution ($R_{b}^{SM}=0.2158$) by $\delta R_{b}=R_{b}-R_{b}^{SM}$.
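For completeness, the box-diagram functions $I_{1}$-$I_{3}$ can be coded directly (our illustration; note that the expressions have removable singularities at $x=1$, $y=1$ and $x=y$, so those points must be avoided numerically):

```python
import math

def I1(x):
    return (1 + x) / (1 - x)**2 + 2*x*math.log(x) / (1 - x)**3

def I2(x, y):
    return (x / ((x - y)*(x - 1))
            + y**2 * math.log(y) / ((y - 1)*(x - y)**2)
            + x*(-x - x*y + 2*y) * math.log(x) / ((1 - x)**2 * (x - y)**2))

def I3(x, y):
    return (1 / ((x - y)*(x - 1))
            + y * math.log(y) / ((y - 1)*(x - y)**2)
            + (-x**2 + y) * math.log(x) / ((1 - x)**2 * (x - y)**2))

# illustration: x_t = (m_t/m_W)^2 and x_H for m_H = 100 GeV (placeholder masses)
x_t = (175.0 / 80.3)**2
x_H = (100.0 / 80.3)**2
values = (I1(x_t), I2(x_t, x_H), I3(x_t, x_H))
```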
The charged Higgs contribution to $\delta R_{b}$ is shown in Fig. 1 as a function of $|Y|^{2}$. In this figure we also show the 3$\sigma$ lower bound on $\delta R_{b}$ from the world average value $R_{b}=0.2178\pm 0.0011$ [13]. From Fig. 1 we can derive $|Y|\lesssim 0.9\,(1.3)$ for $m_{H}=100\,(300)$ GeV. Note that if we use the value $R_{b}=0.2159\pm 0.0009({\rm stat})\pm 0.0011({\rm syst})$ reported by ALEPH [14], the lower bound on $\delta R_{b}$ shifts to $-0.0041$, which corresponds to the upper bound $|Y|\lesssim 1.6\,(2.3)$ for $m_{H}=100\,(300)$ GeV. Since the experimental situation is not conclusive, to be conservative we present the result in a rather wide range of $|Y|$. Fig. 2(a) shows the ${\rm Br}(b\rightarrow s\,\mu^{+}\mu^{-})$ branching ratio in the multi-Higgs doublet model, normalized by the SM branching ratio, in the space of $|Y|^{2}$ and $\mbox{Re}(XY^{\ast})$ for $m_{H}=100$ GeV. For the numerical calculations we use $m_{b}=4.7$ GeV and $m_{c}=1.5$ GeV. To avoid the large effect of the $J/\psi$ resonance we integrate the differential branching ratio over the kinematical range $4m_{\mu}^{2}<s<(m_{J/\psi}-\delta)^{2}$, where $m_{\mu}$ is the muon mass, $m_{J/\psi}$ is the $J/\psi$ mass and $\delta=100$ MeV. The interference effect between the $J/\psi$ resonance and the short distance contribution in this kinematical region is still sizable $(\lesssim 20\%)$ for the SM case [11]. However, as we can see in Fig. 2, the interference effect is expected to be smaller than the charged Higgs contribution to the short distance part. In order to solve for $\mbox{Im}(XY^{\ast})$ in terms of $|Y|^{2}$ and $\mbox{Re}(XY^{\ast})$, we assume ${\rm Br}(b\rightarrow s\,\gamma)=2.8\times 10^{-4}$.
Since there is a sizable experimental error on this quantity, we have simply used the SM prediction for illustration. We expect that the experimental error as well as the theoretical ambiguity will be reduced by the time an actual analysis of $b\rightarrow s\,\ell^{+}\ell^{-}$ is done. Fig. 2(b) shows the asymmetry in the same kinematical range. From these two figures we can see that the branching ratio can be a few times larger than that of the SM, which is $3.8\times 10^{-6}$ in this kinematical region, and the asymmetry can be as large as 20%, compared to about 5% in the SM. It is also interesting to see that while the branching ratio is sensitive to the value of $|Y|^{2}$, the asymmetry gives an independent constraint on the parameter space. Therefore the values of $|Y|^{2}$ and $\mbox{Re}(XY^{\ast})$ can be determined if the branching ratio and the asymmetry are measured with reasonable accuracy. Fig. 2(c) shows the $b\rightarrow s\,\nu\overline{\nu}$ branching ratio normalized by the SM prediction (${\rm Br}(b\rightarrow s\,\nu\overline{\nu})_{SM}=4.3\times 10^{-5}$) in the same parameter space. In this case the branching ratio depends only on the parameter $|Y|^{2}$ and is enhanced by a factor similar to that of the $b\rightarrow s\,\ell^{+}\ell^{-}$ branching ratio. If $\mbox{Im}(XY^{\ast})\neq 0$, there is a possibility of observing the CP violating charge asymmetry, i.e. the difference between the $b\rightarrow s\,\ell^{+}\ell^{-}$ and ${\overline{b}}\rightarrow\overline{s}\,\ell^{+}\ell^{-}$ decays. Since this quantity is induced with the help of the phase in $Y(\hat{s})$ in Eq.(23), the asymmetry appears only above the $c{\overline{c}}$ threshold of the lepton invariant mass.
The charge asymmetry is defined as $$\displaystyle A_{CP}=\frac{\int_{(m_{\psi^{\prime}}+\delta)^{2}}^{m_{b}^{2}}ds\left(\frac{d\,{\rm Br}(b\rightarrow s\,\ell^{+}\ell^{-})}{ds}-\frac{d\,{\rm Br}({\overline{b}}\rightarrow{\overline{s}}\,\ell^{+}\ell^{-})}{ds}\right)}{\int_{(m_{\psi^{\prime}}+\delta)^{2}}^{m_{b}^{2}}ds\left(\frac{d\,{\rm Br}(b\rightarrow s\,\ell^{+}\ell^{-})}{ds}+\frac{d\,{\rm Br}({\overline{b}}\rightarrow{\overline{s}}\,\ell^{+}\ell^{-})}{ds}\right)},$$ (30) where $m_{\psi^{\prime}}$ is the $\psi^{\prime}$ mass and we take $\delta=100$ MeV. In Fig. 2(d) this quantity is shown for $m_{H}=100$ GeV. Since this asymmetry is at most a few percent, large statistics are needed to measure it. Note that there is a twofold ambiguity in the sign of $\mbox{Im}(XY^{\ast})$. Figs. 2(a)-(c) do not depend on the sign of $\mbox{Im}(XY^{\ast})$, while $A_{CP}$ (Fig. 2(d)) changes its sign according to the sign of $\mbox{Im}(XY^{\ast})$. Fig. 2(d) shows the case of $\mbox{Im}(XY^{\ast})>0$. In Figs. 3(a)-(d) we show the same quantities for $m_{H}=300$ GeV. As in the case of $m_{H}=100$ GeV, we see similar enhancements of the branching ratios and the forward-backward asymmetry in the allowed parameter space. In summary, we have investigated possible constraints on the charged Higgs coupling parameters from the $b\rightarrow s\,\gamma$, $b\rightarrow s\,\ell^{+}\ell^{-}$ and $b\rightarrow s\,\nu\overline{\nu}$ processes. Within the present experimental constraints, including the $b\rightarrow s\,\gamma$ process, we have shown that the branching ratios of the $b\rightarrow s\,\ell^{+}\ell^{-}$ and $b\rightarrow s\,\nu\overline{\nu}$ processes can be enhanced a few times compared with the SM. The lepton forward-backward asymmetry in the $b\rightarrow s\,\ell^{+}\ell^{-}$ process can also be enhanced by a factor of several.
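Once the two differential spectra are known, Eq. (30) is simply a ratio of two one-dimensional integrals. A minimal quadrature sketch (with toy placeholder spectra, not the physical $d{\rm Br}/ds$) is:

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoidal rule (kept explicit for portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def a_cp(dbr, dbr_bar, s_lo, s_hi, n=2001):
    """Charge asymmetry of Eq. (30) by numerical quadrature."""
    s = np.linspace(s_lo, s_hi, n)
    num = trapezoid(dbr(s) - dbr_bar(s), s)
    den = trapezoid(dbr(s) + dbr_bar(s), s)
    return num / den

# toy spectra: identical shapes with a 2% CP-odd normalization shift,
# so the asymmetry evaluates to 0.02 independently of the grid
dbr = lambda s: (1.0 - s)**2 * 1.02
dbr_bar = lambda s: (1.0 - s)**2 * 0.98
acp = a_cp(dbr, dbr_bar, 0.6, 1.0)
```

In the actual analysis the integrand would be the differential branching ratio of Eq. (23) with the complex $Y(\hat{s})$ retained, evaluated with $C_{i}$ and $C_{i}^{\ast}$ for the $b$ and $\overline{b}$ decays respectively.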
Since these quantities depend on the model parameters in different ways, we can obtain useful information on the model once these processes are observed experimentally. This work is supported by the Grant-in-Aid for Scientific Research from the Ministry of Education, Science and Culture of Japan.
Kepler-78 and the Ultra-Short-Period Planets Joshua N. Winn [email protected] Roberto Sanchis-Ojeda [email protected] Saul Rappaport [email protected] Princeton University, Princeton, NJ 08540, USA Netflix, Los Gatos, CA 95032, USA Massachusetts Institute of Technology, Cambridge, MA 02139, USA Abstract Kepler-78b is a planet of comparable size to the Earth (1.2 $R_{\oplus}$), but with an orbital period a thousand times shorter (8.5 hours). Currently it is the smallest planet for which the mass, radius, and dayside brightness have all been measured. Kepler-78b is an exemplar of the ultra-short-period (USP) planets, a category defined by the simple criterion $P_{\rm orb}<1$ day. We describe our Fourier-based search of the Kepler data that led to the discovery of Kepler-78b, and review what has since been learned about the population of USP planets. They are about as common as hot Jupiters, almost always smaller than 2 $R_{\oplus}$, and often members of compact multi-planet systems. They might be the exposed rocky cores of “gas dwarfs,” the planets between 2–4 $R_{\oplus}$ in size that are commonly found in somewhat wider orbits. keywords: planets, time-series photometry ††journal: New Astronomy Reviews 1 Introduction One of the earliest surprises of exoplanetary science was the existence of planets with very short orbital periods. The main pre-exoplanet theory of planet formation predicted that gas giant planets like Jupiter could only form in wide orbits, more than 2–3 times the size of Earth’s orbit around the Sun. Then in 1995, the star 51 Pegasi was found to host a Jupiter-mass planet with an orbit only 5% of the size of Earth’s orbit (MayorQueloz1995). The planet rushes around the star every 3.5 days, an orbital period far shorter than Jupiter’s 4300-day period, or even Mercury’s 88-day period. How planets attain such short periods is the oldest unresolved problem in the field. Figure 1 illustrates the chronology of short-period planet discoveries. 
After 51 Peg, a good place to continue the story is in 2003 with the discovery of OGLE-TR-56b Konacki+2003 . The initial report of this planet was met with skepticism because of the unusually small orbital distance of 0.023 AU and short orbital period of 1.2 days. Practitioners of the Doppler technique wondered why the very first planet to emerge from transit surveys would have such a short period while the longstanding Doppler surveys had not yet found any such planets. The resolution of this problem is that transit surveys are even more strongly biased toward short periods than Doppler surveys (Gaudi+2005). Consider a transit survey in which all the nearby stars within some field of view are repeatedly imaged, allowing the flux $F$ of each star to be measured with a fractional uncertainty $\propto$ $F^{-1/2}$. If all the transit signals exceeding a certain signal-to-noise threshold can be detected, then the number of stars for which a planet of period $P$ would produce a detectable transit signal scales as $P^{-5/3}$ Pepper+2003 ; Gaudi+2005 . A survey capable of searching a certain number of stars for Earth-sized planets with a period of 365 days — such as the Kepler survey — will also be capable of searching $(365)^{5/3}\approx 19{,}000$ times as many stars for Earth-sized planets with a period of 1 day. Hence it is possible to find very short-period planets even if they are exceedingly rare. Indeed, as ground-based transit surveys made further progress, it became clear that giant planets are exceedingly rare for periods shorter than a few days. By early 2018 the ground-based transit surveys had discovered 68 giant planets with periods between 2 and 3 days, but only 37 with periods between 1 and 2 days, and a mere six with periods shorter than one day. This is despite the strong selection bias favoring the shortest periods. The next major advance came in 2009 during the European Corot mission, the first spaceborne transit survey. 
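The period bias quoted above is easy to quantify. The sketch below (our own, not from the paper) evaluates the $P^{-5/3}$ scaling for the one-day versus one-year comparison.

```python
# Sketch of the transit-survey period bias discussed above: the number of
# stars searchable for a planet of period P scales as P**(-5/3).

def searchable_star_ratio(p_short_days, p_long_days):
    """How many more stars can be searched at p_short than at p_long."""
    return (p_long_days / p_short_days) ** (5.0 / 3.0)

# A survey sensitive to Earth-sized planets at P = 365 days can search
# roughly 19,000 times as many stars for the same planets at P = 1 day.
print(f"{searchable_star_ratio(1.0, 365.0):.0f}")
```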
That year saw the announcement of Corot-7b Leger+2009 , which was then the smallest known planet (1.7 $R_{\oplus}$), with the shortest known orbital period (0.85 days). A couple of years later, the NASA Kepler mission found a similar planet, Kepler-10b, with a radius of 1.4 $R_{\oplus}$ and an orbital period of 0.84 days Batalha+2011 . In between these discoveries was a curious episode involving the innermost planet of the 55 Cnc system. The planet was initially discovered through the Doppler technique McArthur+2004 ; Fischer+2008 , but the period was misidentified as 2.8 days due to aliasing. Subsequent analysis (DawsonFabrycky2010) and the detection of transits with space-based photometry Winn+2011 ; Demory+2011 showed that the true period is 0.74 days. It is now clear that for Sun-like stars, planets with periods shorter than one day — which we designate “ultra-short-period” (USP) planets — occur just as frequently as hot Jupiters with any period from 1 to 10 days. The reason that USP planets had previously escaped detection is that they are small, and produce signals that are difficult to detect without the precision of space telescopes. These planets have also been called “hot Earths”, or more evocatively, “lava worlds” Leger+2011 , as their surface temperatures are higher than the melting point of any plausible rock-forming minerals. The current record holder is KOI-1843.03, which circles its star every 4.2 hours OfirDreizler2013 ; Rappaport+2013 . Not far behind is K2-137b, with an orbital period of 4.3 hours Smith+2018 . When trying to understand a process as complex as planet formation, the most extreme cases are often the most revealing. This is one reason why the detection and characterization of USP planets is rewarding. 
Studying them may help us to understand the formation and orbital evolution of short-period planets, as well as star-planet interactions, atmospheric erosion, and other phenomena arising from strong irradiation and strong tidal forces. In addition there are many practical advantages to studying USP planets. They are easier to detect than longer-period planets. Their masses and sizes, the most basic inputs to theories of planetary interiors, are easier to measure. They are sometimes hot enough to emit a detectable glow, enabling observations to determine their surface temperature and reflectivity, which is usually impossible for wider-orbiting planets. We must immediately acknowledge that the defining criterion of $P_{\rm orb}<1$ day is arbitrary. It was chosen because 1 is a nice round number, and because planets with such short periods were relatively unexplored and were accessible using our chosen search technique. Nature does not seem to produce any astrophysical distinction between planets just inside or outside of the one-day boundary. Rather, as reviewed below, the period range of about 5–10 days is where we start to see differences in the radius distribution, occurrence rate density, and mean stellar metallicity of the host stars. In the spirit of this special issue, Section 2 tells the story of Kepler-78b, a planet with Earth-like proportions and an orbital period of 8.5 hours. This planet is important because it is one of the very smallest planets for which both the mass and radius have been measured to better than 20%. It is also the smallest planet for which the brightness of the dayside has been determined, through the detection of planetary occultations. Kepler-78b was one of the first USP planets that emerged from our systematic search through the Kepler data. 
We also take this opportunity, in Section 3, to review the searches that have been undertaken by other groups, and the growing collection of related investigations into this enigmatic population of planets. Section 4 describes some other intriguing ultra-short-period phenomena that might, or might not, be related to planets. 2 Kepler-78 The first appearance in the literature of the star now known as Kepler-78 is in the second data release of the Kepler Eclipsing Binary catalog Slawson+2011 , published on October 12, 2011. It is listed with Kepler Input Catalog (KIC) number 8435766, and deemed a detached eclipsing binary with a period of 0.71 days (17 hours). We do not know what led to this classification, or why the period was found to be twice as long as the true value. In fact we were unaware of this entry in the eclipsing binary catalog until the star came to our attention through a different route. We had been searching the Kepler data for transiting planets based on the Fourier Transform (FT) of the light curve, instead of the more widely used Box-fitting Least Squares (BLS) spectrum Kovacs+2002 . The BLS method was invented specifically for the detection of transit signals: brief drops in brightness modeled as a “box” (rectangular-pulse) function with duration $T$ and period $P$. The motivation for the FT method might not be obvious, because the FT of a box function has power spread over many harmonics in addition to the fundamental frequency. Specifically, it consists of a series of peaks with $f_{n}=n/P$, modulated by the amplitude function $$A(f)=\frac{T\sin(\pi fT)}{\pi fT}$$ (1) Most of the power is concentrated in the frequency range from zero to the first null at $1/T$, within which the number of harmonics is approximately $P/T$, which for a transiting planet around a Sun-like star is $$\frac{P}{T}\sim 500~{}\left(\frac{P}{1~{}{\rm year}}\right)^{2/3}.$$ (2) Splitting the signal among hundreds of harmonics is clearly a terrible idea. 
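Equation (2) can be checked numerically. This sketch (our own, with the coefficient 500 taken from the text) contrasts a one-year orbit with an 8-hour one.

```python
# Approximate number of strong FT harmonics, P/T ~ 500 * (P / 1 yr)**(2/3),
# for a transiting planet around a Sun-like star (Eq. 2 in the text).

def n_harmonics(p_orb_years):
    return 500.0 * p_orb_years ** (2.0 / 3.0)

print(n_harmonics(1.0))                   # ~500 harmonics at P = 1 year
print(n_harmonics(8.0 / (24 * 365.25)))   # only ~5 harmonics at P = 8 hours
```

For an 8-hour period the signal is concentrated in a handful of harmonics, which is why the FT method works well in exactly the USP regime.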
The peaks would be easily lost in the noise. But for a planet with an 8-hour period, there are only 5–6 strong harmonics. Thus, the FT method is reasonably effective for USP planets. We also found it to have some practical advantages: it is fast and easy to compute and performs well in the presence of the most common types of stellar variability. By early March of 2013, one of us (S.A.R.) had inspected the FTs of about 10,000 Kepler light curves. Planet candidates were identified based on the presence of 5–6 strong harmonics and no subharmonics, which are indicative of eclipsing binaries with alternating eclipse depths. This resulted in a list of 93 candidates which were vetted in the usual ways Jenkins+2010 ; Morton+2011 . We produced a phase-folded light curve after filtering out the strong stellar variability, and confirmed that there was no detectable alternation in eclipse depths, no detectable ellipsoidal light variations, and no detectable motion of the center of light in the Kepler images that would have suggested a blend between an eclipsing binary and a brighter foreground star. We realized on March 7 that KIC 8435766 was special. It earned one of the only two “A+” grades that were awarded during the visual inspection (the other A+ grade went to KOI-1843.03, which we later learned had already been reported in the literature OfirDreizler2013); it had the brightest host star ($m_{\rm Kep}=11.5$); and it showed evidence for tiny dips in between eclipses that were consistent with planetary occultations. It was an unexpected gift. We thought the brightest Kepler stars had already been thoroughly picked over by other groups. After a few days of further analysis, we contacted David Latham to request spectroscopy, and by the beginning of April we could rule out radial-velocity variations at the 100 m s${}^{-1}$ level, placing the mass of the transiting companion within the planetary regime. 
Our paper was submitted on May 15, and on July 12 the system was christened Kepler-78 by the staff at the NASA Exoplanet Archive (although it appears in the published paper as “Kepler-XX”, because we forgot to inform the ApJ editorial staff of the name change until it was too late). The spectra obtained by Latham’s group confirmed that the host star was a late G dwarf and suggested it might be suitable for precise Doppler monitoring. The possibility of measuring the mass of a nearly Earth-sized planet beckoned. On May 20 and 21, a conference was held at the Harvard-Smithsonian Center for Astrophysics in honor of Latham’s distinguished career. During a lunch break we met with members of the California Planet Search (CPS) to discuss the possibility of using the Keck I 10 m telescope and the HIRES spectrograph to measure the mass of the planet. During the same meeting, unbeknownst to us, the members of the HARPS-North consortium were also planning to conduct precise Doppler observations at the nearest possible opportunity. Given the brightness of the star and the high level of confidence in the planetary nature of Kepler-78b, it was not too surprising that two different teams embarked on campaigns to measure the mass of the planet. Both teams began collecting data. It soon became clear that the biggest hurdle in measuring the mass of Kepler-78b would be the starspot activity of the host star. The Kepler data showed fluctuations in total light of 1%, presumably due to starspots rotating across the star’s visible hemisphere. This level of activity leads to spurious Doppler shifts on the order of 10 m s${}^{-1}$, much larger than the expected planetary signal of 1–2 m s${}^{-1}$. Fortunately, the rotation period (12.5 days) is much longer than the orbital period (0.36 day), allowing a clear separation of timescales. 
It proved possible to detect the planet-induced radial-velocity signal through intensive observations on individual nights, during which the effects of stellar rotation are minimal. By early June, the CPS and HARPS-N teams had learned of each other’s plans. An arrangement was made to submit our journal articles at the same time, but without sharing any results until just beforehand. As it turned out, the two completely independent results for the planet mass agreed to within 1-$\sigma$, despite the different approaches that were taken to cope with the effects of stellar activity. The articles were submitted on September 25 Howard+2013 ; Pepe+2013 . Since then, other groups have confirmed the robustness of these measurements by combining both datasets and using different analysis techniques Hatzes2014 ; Grunblatt+2015 . The most recent such study found $R_{p}=1.20\pm 0.09$ $R_{\oplus}$ and $M_{p}=1.87\pm 0.27$ $M_{\oplus}$, giving a mean density of $6.0^{+1.9}_{-1.4}$ g cm${}^{-3}$ Grunblatt+2015 . This is consistent with the Earth’s mean density of 5.5 g cm${}^{-3}$, although the uncertainty is large enough to allow for a wide range of possible combinations. Using a simple model consisting only of rock and iron, the iron fraction is constrained to be $0.32\pm 0.16$. The brightness of the host star has enabled many other interesting measurements. Any long-term change in orbital period due to tidal effects, planet-planet interactions, or other reasons must satisfy $|P/\dot{P}|>2.8$ Myr. The detection of occultations and the planet’s “phase curve” (the gradual light variations that are seen with the same period as the planetary orbit) help to limit the possibilities for the planet’s visual albedo and surface temperature. If the Bond albedo is 0.5, the dayside temperature must be $\approx$ 2600 K, while if the albedo is very small then the temperature could be as high as 3000 K. The nightside temperature is limited to $<2700\,{\rm K}$ (3-$\sigma$). 
Future observations at multiple wavelengths, perhaps with the James Webb Space Telescope, will lead to better constraints. The relatively young age of the star (750 Myr) and high level of activity have allowed some investigators to characterize the stellar magnetic field. Spectropolarimetry was used to infer a surface magnetic field strength of 16 G for the host star, by exploiting the Zeeman effect (Moutou+2016). It has been suggested that planet-star interactions could lead to a detectable modulation in magnetic activity, through a similar mechanism as the electrodynamic coupling between Jupiter and Io Laine+2012 , although no such modulation has yet been detected. 3 The USP planet population 3.1 Physical characteristics Having indulged ourselves in telling the story of Kepler-78, we turn to the more scientifically relevant task of summarizing what has been learned about USP planets in general. To get oriented, we begin with some basic physical considerations. We also allude to some of the more sophisticated models that have been developed for extremely hot rocky planets, in response to the discoveries of Corot-7b, Kepler-10b, 55 Cnc e, and Kepler-78b. To set the scale of the orbit, we apply Kepler’s third law, $$a=0.0196~{}{\rm AU}~{}\left(\frac{P_{\rm orb}}{1~{}{\rm day}}\right)^{2/3}\left(\frac{M_{\star}}{M_{\odot}}\right)^{1/3},~{}~{}\frac{a}{R_{\star}}=4.2~{}\left(\frac{P_{\rm orb}}{1~{}{\rm day}}\right)^{2/3}\left(\frac{\rho_{\star}}{\rho_{\odot}}\right)^{1/3}.$$ (3) For these fiducial parameters, the angular diameter of the star in the sky of the planet is $27^{\circ}$, i.e., fifty times wider than the Sun in our sky. At this short range, tidal interactions lead to relatively rapid orbital and spin evolution. 
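Equation (3) can be evaluated directly; the sketch below (our own, with the coefficients 0.0196 AU and 4.2 taken from the text, in solar units) applies it to a one-day orbit and to Kepler-78b.

```python
# Sketch of Eq. (3): orbital scale of a short-period planet around a
# Sun-like star. Coefficients are those quoted in the text.

def semimajor_axis_au(p_orb_days, mstar_in_msun=1.0):
    return 0.0196 * p_orb_days ** (2.0 / 3.0) * mstar_in_msun ** (1.0 / 3.0)

def a_over_rstar(p_orb_days, rhostar_in_rhosun=1.0):
    return 4.2 * p_orb_days ** (2.0 / 3.0) * rhostar_in_rhosun ** (1.0 / 3.0)

print(semimajor_axis_au(1.0))    # 0.0196 AU for a one-day orbit
print(a_over_rstar(8.5 / 24.0))  # ~2.1 stellar radii for an 8.5-hour orbit
```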
In the constant-lag-angle model of the equilibrium tide, the timescale for orbital circularization is GoldreichSoter1966 ; Patra+2017 $$\frac{e}{\dot{e}}\sim 1.7~{}{\rm Myr}\left(\frac{Q_{\rm p}^{\prime}}{10^{3}}\right)\left(\frac{M_{\rm p}/M_{\star}}{M_{\oplus}/M_{\odot}}\right)\left(\frac{R_{\rm p}/R_{\star}}{R_{\oplus}/R_{\odot}}\right)^{-5}\left(\frac{\rho_{\star}}{\rho_{\odot}}\right)^{5/3}\left(\frac{P_{\rm orb}}{1~{}{\rm day}}\right)^{13/3},$$ (4) where $Q_{\rm p}^{\prime}$ is the modified tidal quality factor characterizing the dissipation rate of tidal oscillations, scaled to a customary value for the Earth. The timescale for the planet to achieve spin-orbit synchrony is even shorter, by a factor on the order of $(R_{\rm p}/a)^{2}$. Therefore when we see a USP planet around a mature main-sequence star it is reasonable to assume (until proven otherwise) that the planet has a circular orbit and a permanent dayside and nightside. However, the orbit of a terrestrial-mass USP planet does not have nearly enough angular momentum to spin up the star and achieve a stable double-synchronous state; instead the planet spirals into the star on a timescale GoldreichSoter1966 ; Patra+2017 $$\frac{P}{\dot{P}}\sim 30~{}{\rm Gyr}\left(\frac{Q_{\star}^{\prime}}{10^{6}}\right)\left(\frac{M_{\star}/M_{\rm p}}{M_{\odot}/M_{\oplus}}\right)\left(\frac{P_{\rm orb}}{1~{}{\rm day}}\right)^{13/3}\left(\frac{\rho_{\star}}{\rho_{\odot}}\right)^{5/3},$$ (5) where $Q_{\star}^{\prime}$ is the star’s modified tidal quality parameter. Since $Q_{\star}^{\prime}$ is uncertain by at least an order of magnitude, it is less clear whether tidal orbital decay is important on astrophysical timescales. A one-day planet around a Sun-like star intercepts a flux of about 3.5 MW m${}^{-2}$, about 2600 times the flux of the Sun impinging on the Earth. 
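These scalings, and the blackbody temperature implied by this flux, can be evaluated directly. The sketch below (our own) keeps every bracketed ratio in Eqs. (4) and (5) at its fiducial value of unity, so only the period and quality factors vary, and assumes full local reradiation for the temperature.

```python
# Sketch of Eqs. (4)-(5) at fiducial (Earth-like planet, Sun-like star)
# parameter ratios, plus the substellar blackbody temperature for the
# quoted incident flux of 3.5 MW/m^2.

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def circularization_timescale_myr(p_orb_days, q_planet=1e3):
    return 1.7 * (q_planet / 1e3) * p_orb_days ** (13.0 / 3.0)

def decay_timescale_gyr(p_orb_days, q_star=1e6):
    return 30.0 * (q_star / 1e6) * p_orb_days ** (13.0 / 3.0)

def substellar_temperature_k(flux_w_m2):
    return (flux_w_m2 / SIGMA_SB) ** 0.25

print(circularization_timescale_myr(1.0))  # 1.7 Myr at P = 1 day
print(decay_timescale_gyr(1.0))            # 30 Gyr at P = 1 day
print(substellar_temperature_k(3.5e6))     # ~2800 K
```

Note the steep $P^{13/3}$ dependence: halving the period shortens both timescales by a factor of about twenty.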
Under the simplifying assumption that all the incident energy is reradiated locally, the planet’s surface at the substellar point (high noon) is raised to a temperature of 2800 K. This is not far from the temperature of the glowing tungsten filament in an incandescent light Leger+2011 . It is also hot enough to melt silicates and iron, a fact which has led to theoretical work on the properties of the resulting lava oceans Leger+2011 ; Rouan+2011 ; Kite+2016 and mantle convection Gelman+2011 ; Wagner+2012 . If a USP planet were present during the first $10^{7}$ years of its host star’s active youth, it would have been bathed in ultraviolet and X-ray radiation. Models of the thermal evolution of planetary atmospheres strongly suggest that this leads to the complete loss of any hydrogen-helium atmosphere that might once have existed. In this “photo-evaporation” process the gas near the XUV photosphere is heated to such a degree that the pressure gradient drives a hydrodynamic wind, leading to atmospheric escape LopezFortney2016 ; OwensWu2016 ; ChenRogers2016 . What remains might be a thin atmosphere of heavier elements with a maximum pressure of order $10^{-5}$ atm Leger+2011 . Models which track the chemistry of Earth’s crust as it is heated to a temperature of 1500–3000 K suggest that the outgassed atmosphere would be mainly composed of Na, O${}_{2}$, O, and SiO SchaeferFegley2009 . 3.2 Detections Several groups have undertaken systematic searches of the Kepler data for short-period planets, only a few of which were specifically designed for periods shorter than about one day. In 2014, we and our collaborators published a list of 106 USP planets based on the concatenation of our own detections as well as those of other groups OfirDreizler2013 ; Jackson+2013 ; Huang2013a . The characteristics of the stars and planets were later clarified based on high-resolution optical spectroscopy, and a few false positives were uncovered Winn+2017 . 
In 2016, the population of USP planets was noted by other investigators. One group, apparently unfamiliar with the earlier work, noted that the sample of strongly irradiated planets does not include many with sizes exceeding 2 $R_{\oplus}$ Lundkvist+2016 . Their sample consisted of Kepler systems for which we have unusually good knowledge of the stellar and planetary sizes, thanks to the detection of asteroseismic oscillations. Another study concluded that at least 17% of the “hot Earths” detected by Kepler have a different radius/period distribution than the planets in the collection of Kepler multi-planet systems SteffenCoughlin2016 . In addition, over the last few years, a fresh sample of USP planets has been found using data from the ongoing NASA K2 mission. A systematic search was undertaken by the Short Period Planets Group Adams+2016 , resulting in 19 candidates. Other groups have validated and characterized about a dozen additional candidates Dressing+2017 ; Barragan+2017 ; Guenther+2017 ; Dai+2017 ; Malavolta+2018 ; Smith+2018 . 3.3 Occurrence rate About one out of 200 Sun-like stars (G dwarfs) has an ultra-short-period planet. This result is based on a systematic and largely automated search of the Kepler data using our FT pipeline, calibrated with inject-and-recover simulations SanchisOjeda+2014 . The simulations showed that the efficiency of detecting planets larger than $2\,R_{\oplus}$ was higher than $90\%$, and that it dropped below $50\%$ for planets smaller than $1\,R_{\oplus}$. After correcting for this sensitivity function, and for the transit probabilities, the occurrence rate was found to be $(0.51\pm 0.07)\%$. Among the other results of this survey was evidence for a strong dependence of the occurrence rate upon the mass of the host star. The measured occurrence rate falls from $(1.1\pm 0.4)\%$ for M dwarfs to $(0.15\pm 0.05)\%$ for F dwarfs. 
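The completeness correction behind these occurrence rates can be sketched as follows; the function and numbers below are illustrative only (our own, not the survey's actual pipeline), with each detection up-weighted by the inverse of its transit probability and detection efficiency.

```python
# Illustrative sketch of a debiased occurrence-rate estimate, as in the
# inject-and-recover calibration described above. Numbers are hypothetical.

def occurrence_rate(detections, n_stars_searched):
    """detections: list of (transit_probability, detection_efficiency)."""
    effective_planets = sum(1.0 / (p_tr * eff) for p_tr, eff in detections)
    return effective_planets / n_stars_searched

# Example: 10 detections, each with transit probability R*/a ~ 1/4.2
# (a one-day orbit around a Sun-like star) and 90% pipeline efficiency.
rate = occurrence_rate([(1.0 / 4.2, 0.9)] * 10, 10_000)
print(f"{100 * rate:.2f}% of stars")  # ~0.5%, the order of the measured rate
```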
There are still substantial uncertainties in the occurrence rates for stars at either end of this range, due to the relatively small number of detections. It is perhaps telling, though, that most of the USP planets found with K2 data are around K and M dwarfs. 3.4 Period and radius distributions Figure 3 shows the inferred radius and period distribution of USP planets based on our FT survey, after accounting for the survey sensitivity and completeness. As we consider periods shrinking from 24 to 4 hours, the occurrence rate falls by more than an order of magnitude. The trend with period is compatible with an extrapolation of the trend that had been noted previously for periods between 1 and 10 days Howard+2012 ; LeeChiang2017 . This is illustrated in Figure 4. Likewise, the occurrence rate of planets larger than $2\,R_{\oplus}$ is at least a factor of five smaller than the rate of Earth-sized planets. This, too, is compatible with a more general trend: the radius distribution of all planets with periods shorter than 100 days shows a dip in occurrence between 1.5 and 2 $R_{\oplus}$ Fulton+2017 . This dip has been attributed to photo-evaporation OwensWu2017 . In this interpretation, close-in planets begin their existence as rocky bodies of approximately $3\,M_{\oplus}$ which accrete differing amounts of hydrogen and helium gas from the surrounding protoplanetary disk. Those that accrete only a little gas — less than a few per cent of the total planet mass — will lose it all during the 10${}^{7}$ years of high-energy irradiation by the young and magnetically active star. Such planets are observed today as rocky bodies with sizes between 1 and $1.5\,R_{\oplus}$. Most of the USP planets seem to be in this category. If instead a higher mass of gas is accreted, a substantial fraction is still left over by the time the star quiets down and loses its evaporative effect. 
Such an atmosphere, even when its mass is on the order of only a per cent of the total mass, has such a large scale height that it increases the planet’s effective size by a factor of two. We observe these planets today to be swollen to sizes of 2–3 $R_{\oplus}$. Such planets are commonly seen with orbital periods from a few to 100 days, but as we have stated above, they are rarely seen as USP planets. 3.5 Masses An important part of the Kepler-78 story was the feasibility of Doppler mass measurement. The USP planets in general are attractive targets for Doppler monitoring. The radial-velocity amplitude scales as $P^{-1/3}$ and is therefore higher for shorter-period planets; a full orbit can be sampled in just a few nights, or even a single night; and the Doppler signal is insulated from the effects of stellar variability to some degree, because the orbital period is usually much shorter than the stellar rotation period. Still, because the planets tend to be small, the Doppler signals have amplitudes of only a few meters per second, making them challenging to detect. Masses have been measured for nine USP planets (see Figure 5). Although each measurement is subject to large uncertainties, as a group the planets are consistent with an Earth-like composition of about 70% rock and 30% iron. At the very shortest periods, a constraint on the planet’s composition can be obtained even without any Doppler data. The mere requirement that the planet is outside of the Roche limit — that it has not been ripped apart by the star’s tidal gravitational force — gives a lower bound on the planet’s mean density. An approximate expression for the Roche-limiting orbital period is $$P_{\rm min}\approx 12.6~{}{\rm hr}\left(\frac{\langle\rho\rangle}{1~{}{\rm g~{}cm}^{-3}}\right)^{-1/2}\left(\frac{\rho_{c}}{\langle\rho\rangle}\right)^{-0.16},$$ (6) where $\langle\rho\rangle$ and $\rho_{c}$ are the planet’s mean and central density, respectively. 
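Equation (6) is straightforward to evaluate. The sketch below (our own, with the coefficient and exponent taken from the text) applies it to a 1 g cm${}^{-3}$ body and to an Earth-density planet, in the incompressible limit $\rho_{c}=\langle\rho\rangle$.

```python
# Sketch of Eq. (6): minimum (Roche-limited) orbital period as a function
# of the planet's mean density and central-to-mean density ratio.

def roche_limit_period_hr(mean_density_g_cm3, central_to_mean=1.0):
    return 12.6 * mean_density_g_cm3 ** -0.5 * central_to_mean ** -0.16

print(roche_limit_period_hr(1.0))  # 12.6 hr for 1 g/cm^3, incompressible
print(roche_limit_period_hr(5.5))  # ~5.4 hr for an Earth-density planet
```

Inverting the same relation shows why a 4.2-hour orbit requires a mean density well above Earth's, the basis of the iron-core argument quoted below.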
This formula is derived from the classical expression for the Roche limit of an incompressible fluid body, along with a correction for compressibility (the second factor) based on polytrope models. It has been applied to two planets, KOI-1843.03 ($P_{\rm orb}=4.2$ hours) and K2-137b (4.3 hours), to argue that they probably have large iron cores Rappaport+2013 ; Smith+2018 . We note, however, that the leading coefficient in Eqn. (6) could be substantially lower, depending on the roles of material strength and friction, and whether the body actually splits apart once the Roche limit is violated Davidsson1999 ; HolsappleMichel2006 . More theoretical work is warranted before we can have confidence in constraints on planet compositions based on the Roche limit. 3.6 Occultations When a planet’s orbit carries it behind its host star and out of view, the total system flux decreases by an amount proportional to the average brightness of the planet’s dayside. As noted earlier, Kepler-78b is the smallest planet for which it has been possible to detect the loss of light during planetary occultations. The next smallest such planet is K2-141b Malavolta+2018 , which has a size of 1.5 $R_{\oplus}$ and an orbital period of 6.7 hours. In both cases the detection was based on white-light observations with the Kepler telescope. In neither case has it been possible to determine what fraction of the dayside flux arises from reflected light, as opposed to reprocessed light (thermal emission). This would require data obtained over a wider range of wavelengths. Another USP planet for which occultations have been detected is 55 Cnc e (1.9 $R_{\oplus}$, $P=17.7$ hours). In this case, the detections were made with the Spitzer space telescope at a wavelength of 4.5 $\mu$m, far enough into the infrared range of the spectrum that the signal is probably dominated by thermal emission Demory+2012 . 
The measurement was repeated eight times, and the results for the brightness temperature ranged from 1300 to 2800 K Demory+2016a . The minimum and maximum amplitudes of the observed occultation signal were found to differ by 3.3-$\sigma$. If this apparent variability represents genuine fluctuations of the brightness of the planet’s dayside, the observers suggested they could be caused by widespread volcanic activity. The observed variations in flux over the course of the entire orbit, when attributed to the changing planetary phase, imply dayside and nightside temperatures of $2700\pm 270$ and $1380\pm 400$ K, respectively Demory+2016b . 3.7 Metallicity Giant planets with periods shorter than a few years, including hot Jupiters, have long been known to be more common around stars of high metallicity. A recent Kepler study concluded that the occurrence of hot Jupiters rises with the 3rd or 4th power of the metal abundance Petigura+2017b . The USP planets do not show nearly as strong an association with high metallicity as the hot Jupiters (Winn+2017). However, the Kepler planets with periods shorter than 10 days do seem to have stars with slightly elevated metallicity, in general, relative to those with longer-period planets Mulders+2016 ; Petigura+2017b ; Wilson+2018 . The lack of strong association between high metallicity and the occurrence of USP planets was of particular interest because it had been postulated that hot Jupiters are the progenitors of USP planets Jackson+2013 ; Valsecchi+2015 ; Konigl+2017 . In this scenario, the USP planets are the bare rocky cores of giant planets that approached the star too closely and lost their gas, due to photo-evaporation, Roche lobe overflow, or some other process. However, in such a scenario, one would expect the host stars of hot Jupiters and USP planets to have similar characteristics, including metallicity. Since this is not the case Winn+2017 , it seems unlikely that hot Jupiters are the progenitors of USP planets. 
This still leaves open the possibility that the progenitors are gas-ensheathed planets of only a few Earth masses, as discussed in Section 3.4. 3.8 Longer period companions Another sharp difference between hot Jupiters and USP planets is in their tendency to have nearby planetary companions. Hot Jupiters are rarely found with other planets within a factor of 2–3 in orbital period or distance Steffen+2012 . In contrast, USP planets are almost always associated with longer period planetary companions SanchisOjeda+2014 ; Adams+2017 . This conclusion is partly based on the numerous detections of longer-period transiting planets in systems that have a USP planet (see Figure 6). It is also based on a statistical argument involving the decrease in transit probability with orbital period: many of the Kepler stars for which only the transits of a USP planet are detected must be multi-planet systems for which the outer planet does not happen to be transiting SanchisOjeda+2014 . One system that does not fit this pattern is WASP-47, which has a hot Jupiter and a USP planet Becker+2015 . There is also a third planet orbiting just outside of the orbit of the hot Jupiter. This remarkable system seems too dynamically fragile for the hot Jupiter to have undergone high-eccentricity migration, a scenario that is often invoked to explain how a giant planet could find its way into such a tight orbit. A similar system is Kepler-487, which has a USP planet and a “warm Jupiter” with a period of 15.4 days, in addition to two other transiting planets Rowe+2014 . This system has not yet been as well characterized as WASP-47. The multi-planet systems that include a USP planet differ from the other Kepler multi-planet systems in an intriguing way. A typical system without a USP planet has a ratio between 1.5 and 4 between the periods of adjacent pairs of planets Fabrycky+2014 .
But when the inner planet has a period shorter than one day, the period ratio is almost always greater than 3 Steffen+2013 , as illustrated in Figure 6. This suggests that some process has widened the period ratio. Perhaps the period of the USP planet has shrunk due to tidal orbital decay that is either ongoing, or that took place early in the system’s history when the star had still not contracted onto the main sequence and would have been more susceptible to tides. 3.9 Formation theories Before describing some of the theories for the formation of USP planets, it is time for another reminder that there is no sharp astrophysical distinction between the “ultra-short-period” planets with periods of less than one day and the merely “short-period” planets with periods of 1–10 days. The problems related to the formation of short-period planets have been with us since 1995, and we will not attempt a comprehensive review here. Instead we make note of some of the work relating specifically to the planets with the very shortest orbital periods. Even before the discovery of CoRoT-7b, Raymond et al. enumerated six possible formation pathways for “hot Earths” Raymond+2008 . These six pathways, summarized below, differ in their predictions for the final compositions of the USP planets, and for the properties of any additional planets around the same star. In light of the knowledge we have gained over the last 10 years, we can try to fill in the score card: 1. Accretion from solid material in the very innermost part of the protoplanetary disk. This “in situ accretion” process would typically lead to several hot Earths spaced by 20–60 mutual Hill radii. Such systems have indeed been observed. This formation pathway would also result in a dry composition (0.1–1% water), which is consistent with the existing mass and radius measurements. 2. Spiraling-in of an initially wide-orbiting planet due to gravitational interactions with the disk (Type 1 migration).
This would likely lead to resonant chains of multiple planets. Such resonant chains are infrequent in the Kepler sample, and to our knowledge there are no examples involving a USP planet. This scenario would also lead to water-rich planets, with 10% or more of their mass in water. The observations are not yet good enough to try to confirm or rule out a water fraction of 10%. 3. Accretion of material that is locked in mean-motion resonance with a migrating giant planet and thereby “shepherded” inward. This would lead to systems in which the USP planet is parked next to a giant planet. The only known system that fits this description is WASP-47. 4. Accretion of material shepherded by sweeping secular resonances. In this scenario, there are at least two giant planets; the resonance is between the precession rate induced by the other planet, and that induced by the (gradually disappearing) disk. This would lead to systems in which hot Earths and multiple giant planets coexist, which have not been observed. 5. Eccentricity excitation followed by tidal circularization of an initially wide-orbiting planet. This would result in isolated USP planets, whereas in reality, the USP planets usually have nearby planetary companions in wider orbits. 6. Photo-evaporation of a formerly gaseous planet that approached the star too closely. As noted earlier, this scenario was later elaborated to predict a gap in the radius distribution of close-in planets — or a “valley” in the space of radius and period — which has been observed. Of course, this theory by itself does not explain where the progenitor planet came from. Obviously the picture is not yet clear, nor is it clear that these six pathways are the only possibilities. Nevertheless, it does seem that elements of the in situ and photo-evaporation theories have withstood a decade of observations. In recent years, a couple of more specific theories have been offered.
They make different predictions for how the USP planet population should depend on the properties of the host stars. Schlaufman et al. proposed that a planet can be driven to short periods by dynamical interactions with nearby planets in wider orbits, at which point tidal interactions with the stars shrink the orbit still further, forming an ultra-short-period planet Schlaufman2010b . They predicted that USP planets should be more common around massive (F-type) stars than around lower-mass stars, because massive stars should have weaker tidal dissipation. This appears to be the opposite of what is observed; in our survey, the occurrence was lowest for the F stars and highest for M dwarfs. Lee & Chiang agreed that tidal dissipation is responsible for shrinking the orbits, but argue for a different initial condition: the planet forms from material that collects near the innermost edge of the protoplanetary disk LeeChiang2017 . The location of the inner edge is the distance at which the orbital period matched the stellar rotation rate at early times, which is $\approx$ 10 days for Sun-like stars. In this way, they explain why the occurrence rate of planets drops sharply for periods less than 10 days. In their theory, the USP planets are gradually spiraling inward due to tidal orbital decay. They predict that for more massive and rapidly rotating A stars, the break in the period distribution of close-in rocky planets (should they exist) will occur closer to one day. 4 Other ultra-short-period phenomena 4.1 Giant planets There are 6 known examples of giant planets (8–12 $R_{\oplus}$) with a period shorter than one day. These rare butterflies are WASP-18b, WASP-43b, WASP-103b, HATS-18b, KELT-16b, and WASP-19b, the last of which has the shortest known period of 0.788 days Hebb+2010 . None of them were found in the Kepler survey; rather, they were found in ground-based transit surveys that were capable of searching a larger sample of stars. 
Their occurrence must be lower by at least an order of magnitude than that of terrestrial-sized USP planets. Because of the large masses of these planets, they are most valuable for testing theories of gravitational tidal interactions. As described above and in Section 3.1, tides should cause the star to spin faster and the orbit to shrink, ultimately leading to the engulfment of the planet. However, tidal theory makes no firm prediction for the timescale over which the orbit decays. The timescale depends not only on the planet mass and orbital distance, but also on the rate of tidal dissipation within the star, which is uncertain by at least an order of magnitude. If we could observe the steady shrinkage of the orbital period of a hot Jupiter, we would be able to confirm a fundamental theoretical prediction and clarify the rate of tidal dissipation within Sun-like stars, a longstanding uncertainty in stellar astrophysics. So far the best candidate for period decay is WASP-12 Maciejewski+2016 ; Patra+2017 . Rarer still are USP planets with sizes in between 2 and 8 $R_{\oplus}$. The relatively low occurrence of short-period planets smaller than 8 $R_{\oplus}$ has long been noted in the literature, going by various names such as the “hot Neptune desert” Mazeh+2016 and the “sub-Jovian pampas” BeaugeNesvorny2013 . 4.2 Pseudoplanets The T Tauri star PTFO 8-8695, less than a few million years old, exhibits periodic fading events that were interpreted as the transits of a giant planet on a precessing orbit VanEyken+2012 ; Barnes+2013 ; Ciardi+2015 . This discovery was greeted with great interest, because the study of hot Jupiters around very young stars would provide information about the timing of planet formation, the structure of newborn planets still cooling and contracting, and the mechanism for shrinking planetary orbits and creating hot Jupiters. 
However, follow-up observations revealed some problems with the planet hypothesis: the “transits” are not strictly periodic, the shape of the light curve varies substantially with wavelength and orbital cycle, and an occultation signal with the expected amplitude has been ruled out Yu+2015 . The origin of the fading events is still unknown, though, and some investigators are still pursuing the planet hypothesis JohnsKrull+2017 . Another case of questionable USP planets is KIC 05807616, a hot B subdwarf that showed evidence for two planets with orbital periods 5.8 and 8.2 hours. The evidence was based on the observed periodic modulations in the light from the system, which were consistent with the illumination variations of the putative planets Charpinet+2011 . However, additional data showed that these periodic variations were not coherent enough to arise from orbital motion Krzesinski2015 . Again, the true origin of these planet-like signals has not been ascertained. 4.3 Disintegrating planets Another discovery that emerged from the Fourier-based search of the Kepler data was the first of a small subset of objects referred to as “disintegrating” planets. The objects are KIC 12557548b (KIC 1255b for short), KOI-2700b, and K2-22b, with orbital periods of 15.7, 22, and 9.5 hours, respectively Rappaport2012a ; Rappaport2014a ; SanchisOjeda+2015b ; vanLieshout2017 . In all three cases, the transit signal is asymmetric around the time of minimum light, and there are variations in the transit depth. KIC 1255b and KOI-2700b have much longer egress times than ingress times, while KIC 1255b and K2-22b exhibit highly variable depths, often from one transit to the next. These characteristics strongly suggest that the occulter is an elongated tail of dusty material streaming from an underlying hot rocky exoplanet. It is important to note that the transits in these objects do not reveal any evidence of the underlying solid body itself. 
For two of the objects the transit depths are on the order of 0.5%, implying an obscuring area comparable to that of Jupiter, but the obscuration is presumed to be almost entirely due to dust extinction. In all three cases the upper limits on the size of the solid body itself are on the order of the size of the Earth, and theoretical considerations suggest the true sizes may be comparable to that of the Moon PerezBeckerChiang2013 . The shape of the dust tail is largely dictated by radiation pressure forces. The likely result is a trailing dust tail unless the radiation pressure forces are small, which can occur for either very large ($\gtrsim 10$ $\mu$m) or tiny dust particles ($\lesssim 0.1$ $\mu$m). The inferred mass-loss rates from the planets are based on the amount of dust required to yield such significant extinction of the host star, and are in the range of $10^{10}$–$10^{11}$ g s${}^{-1}$. This is roughly equivalent to a few lunar masses per Gyr. These and other properties have been recently reviewed by Van Lieshout & Rappaport vanLieshout2017 . 4.4 Disintegrating asteroids WD 1145+017 is a unique object thought to be a white dwarf with a set of disintegrating asteroids in ultra-short-period orbits Vanderburg+2015 . The system exhibits transits with multiple periods in the range from 4.5 to 4.9 hours. It is believed that the asteroids are responsible for dust clouds which then produce the transits. These transits can be as deep as 60% and last anywhere from about 10 minutes to an hour. All the bodies that have been inferred are orbiting approximately 1 $R_{\odot}$ from the white dwarf, which has a luminosity of $\sim$ 0.1 $L_{\odot}$. This combination leads to equilibrium temperatures of the dust grains of about 1000–2000 K, depending on their size and composition Xu+2018 .
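Two of the order-of-magnitude figures above can be checked in a few lines: the conversion of the inferred dust mass-loss rate into lunar masses per Gyr, and a simple zero-albedo blackbody estimate of the grain temperature at 1 $R_{\odot}$ from a 0.1 $L_{\odot}$ white dwarf (a rough sketch only; real grains depart from the blackbody value depending on size and composition, which is why the quoted range is 1000–2000 K).

```python
import math

# Standard cgs constants and unit conversions
SIGMA_SB = 5.670e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
L_SUN    = 3.828e33   # solar luminosity, erg/s
R_SUN    = 6.957e10   # solar radius, cm
M_MOON   = 7.35e25    # lunar mass, g
GYR      = 3.156e16   # one Gyr in seconds

# Dust mass-loss rates of 1e10-1e11 g/s, in lunar masses per Gyr
for mdot in (1e10, 1e11):
    print(f"{mdot:.0e} g/s  ->  {mdot * GYR / M_MOON:.0f} M_moon/Gyr")

# Blackbody equilibrium temperature of a grain at distance d
# from a source of luminosity L: T = [L / (16 pi sigma d^2)]^(1/4)
L, d = 0.1 * L_SUN, 1.0 * R_SUN
T_eq = (L / (16.0 * math.pi * SIGMA_SB * d**2)) ** 0.25
print(f"T_eq ~ {T_eq:.0f} K")
```

The lower mass-loss rate works out to a few lunar masses per Gyr, as stated, while the blackbody temperature comes out near 2300 K, bracketing the upper end of the quoted 1000–2000 K range once grain size and composition effects are allowed for.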
The ultimate significance of this system is that we are likely witnessing the disintegration of planetary bodies that survived the giant evolutionary phase of the white dwarf progenitor. 4.5 Unknown unknowns The disintegrating planets and asteroids are examples of new and interesting phenomena that were discovered by sifting through Kepler data. The next few years will bring another good opportunity for a comprehensive exploration of ultra-short-period phenomena. The Transiting Exoplanet Survey Satellite (TESS), scheduled for launch in 2018, will perform time-series photometry over about 90% of the sky, using four 10 cm telescopes Ricker+2015 . Stars will be observed for a duration ranging from one month to one year, depending on ecliptic latitude. With TESS data we will be able to find USP planets around stars that are several magnitudes brighter than typical Kepler stars. This will provide more targets that are suitable for precise Doppler mass measurements. With a sample of brighter stars, it will be easier to test for compositional similarities between stars with USP planets and stars with other types of planets that might be the progenitors of USP planets. More generally, though, the TESS data will be another giant haystack within which to search for interesting needles: planets in unusual configurations, disintegrating planets and asteroids, and hitherto unknown phenomena. Searches for USP planets may reveal the hypothesized “iron planets” that could be formed when rocky planets are battered by giant impacts, vaporizing the rocky mantle but leaving the iron core intact. By searching for hot Jupiters around orders of magnitude more stars than in the Kepler survey, we may be able to find vanishingly rare hot Jupiters that are undergoing rapid orbital decay.
Although TESS attracts the most attention for its potential to find potentially habitable planets around red dwarf stars, the mission can also be regarded as a search for rare short-period phenomena over a wider range of stellar masses and ages than has been explored before. Acknowledgements.—We thank Jack Lissauer for the invitation to write this review, and Scott Tremaine for helpful consultations. We thank Eve Lee and Eugene Chiang for granting us permission to reproduce Figure 1 from their 2017 paper (which appears as Figure 4 in this article). J.N.W. thanks the Heising-Simons foundation for support. We are all deeply grateful to the engineers, managers, and scientists who were responsible for the Kepler mission, which has led to so many important discoveries. References (1) B. Jackson, C. C. Stark, E. R. Adams, J. Chambers, D. Deming, A Survey for Very Short-period Planets in the Kepler Data, ApJ779 (2013) 165. arXiv:1308.1379, doi:10.1088/0004-637X/779/2/165. (2) R. Sanchis-Ojeda, S. Rappaport, J. N. Winn, M. C. Kotson, A. Levine, I. El Mellah, A Study of the Shortest-period Planets Found with Kepler, ApJ787 (2014) 47. arXiv:1403.2379, doi:10.1088/0004-637X/787/1/47. (3) M. Mayor, D. Queloz, A Jupiter-mass companion to a solar-type star, Nature378 (1995) 355–359. doi:10.1038/378355a0. (4) M. Konacki, G. Torres, S. Jha, D. D. Sasselov, An extrasolar planet that transits the disk of its parent star, Nature421 (2003) 507–509. doi:10.1038/nature01379. (5) B. S. Gaudi, S. Seager, G. Mallen-Ornelas, On the Period Distribution of Close-in Extrasolar Giant Planets, ApJ623 (2005) 472–481. arXiv:astro-ph/0409443, doi:10.1086/428478. (6) J. Pepper, A. Gould, D. L. Depoy, Using All-Sky Surveys to Find Planetary Transits, AcA53 (2003) 213–228. arXiv:astro-ph/0208042. (7) A. Léger, D. Rouan, J. Schneider, P. Barge, M. Fridlund, B. Samuel, M. Ollivier, E. Guenther, M. Deleuil, H. J. Deeg, M. Auvergne, R. Alonso, S. Aigrain, A. Alapini, J. M. Almenara, A. Baglin, M.
Barbieri, H. Bruntt, P. Bordé, F. Bouchy, J. Cabrera, C. Catala, L. Carone, S. Carpano, S. Csizmadia, R. Dvorak, A. Erikson, S. Ferraz-Mello, B. Foing, F. Fressin, D. Gandolfi, M. Gillon, P. Gondoin, O. Grasset, T. Guillot, A. Hatzes, G. Hébrard, L. Jorda, H. Lammer, A. Llebaria, B. Loeillet, M. Mayor, T. Mazeh, C. Moutou, M. Pätzold, F. Pont, D. Queloz, H. Rauer, S. Renner, R. Samadi, A. Shporer, C. Sotin, B. Tingley, G. Wuchterl, M. Adda, P. Agogu, T. Appourchaux, H. Ballans, P. Baron, T. Beaufort, R. Bellenger, R. Berlin, P. Bernardi, D. Blouin, F. Baudin, P. Bodin, L. Boisnard, L. Boit, F. Bonneau, S. Borzeix, R. Briet, J.-T. Buey, B. Butler, D. Cailleau, R. Cautain, P.-Y. Chabaud, S. Chaintreuil, F. Chiavassa, V. Costes, V. Cuna Parrho, F. de Oliveira Fialho, M. Decaudin, J.-M. Defise, S. Djalal, G. Epstein, G.-E. Exil, C. Fauré, T. Fenouillet, A. Gaboriaud, A. Gallic, P. Gamet, P. Gavalda, E. Grolleau, R. Gruneisen, L. Gueguen, V. Guis, V. Guivarc’h, P. Guterman, D. Hallouard, J. Hasiba, F. Heuripeau, G. Huntzinger, H. Hustaix, C. Imad, C. Imbert, B. Johlander, M. Jouret, P. Journoud, F. Karioty, L. Kerjean, V. Lafaille, L. Lafond, T. Lam-Trong, P. Landiech, V. Lapeyrere, T. Larqué, P. Laudet, N. Lautier, H. Lecann, L. Lefevre, B. Leruyet, P. Levacher, A. Magnan, E. Mazy, F. Mertens, J.-M. Mesnager, J.-C. Meunier, J.-P. Michel, W. Monjoin, D. Naudet, K. Nguyen-Kim, J.-L. Orcesi, H. Ottacher, R. Perez, G. Peter, P. Plasson, J.-Y. Plesseria, B. Pontet, A. Pradines, C. Quentin, J.-L. Reynaud, G. Rolland, F. Rollenhagen, R. Romagnan, N. Russ, R. Schmidt, N. Schwartz, I. Sebbag, G. Sedes, H. Smit, M. B. Steller, W. Sunter, C. Surace, M. Tello, D. Tiphène, P. Toulouse, B. Ulmer, O. Vandermarcq, E. Vergnault, A. Vuillemin, P. Zanatta, Transiting exoplanets from the CoRoT space mission. VIII. CoRoT-7b: the first super-Earth with measured radius, A&A506 (2009) 287–302. arXiv:0908.0241, doi:10.1051/0004-6361/200911933. (8) N. M. Batalha, W. J. Borucki, S. T. Bryson, L. 
A. Buchhave, D. A. Caldwell, J. Christensen-Dalsgaard, D. Ciardi, E. W. Dunham, F. Fressin, T. N. Gautier, III, R. L. Gilliland, M. R. Haas, S. B. Howell, J. M. Jenkins, H. Kjeldsen, D. G. Koch, D. W. Latham, J. J. Lissauer, G. W. Marcy, J. F. Rowe, D. D. Sasselov, S. Seager, J. H. Steffen, G. Torres, G. S. Basri, T. M. Brown, D. Charbonneau, J. Christiansen, B. Clarke, W. D. Cochran, A. Dupree, D. C. Fabrycky, D. Fischer, E. B. Ford, J. Fortney, F. R. Girouard, M. J. Holman, J. Johnson, H. Isaacson, T. C. Klaus, P. Machalek, A. V. Moorehead, R. C. Morehead, D. Ragozzine, P. Tenenbaum, J. Twicken, S. Quinn, J. VanCleve, L. M. Walkowicz, W. F. Welsh, E. Devore, A. Gould, Kepler’s First Rocky Planet: Kepler-10b, ApJ729 (2011) 27. arXiv:1102.0605, doi:10.1088/0004-637X/729/1/27. (9) B. E. McArthur, M. Endl, W. D. Cochran, G. F. Benedict, D. A. Fischer, G. W. Marcy, R. P. Butler, D. Naef, M. Mayor, D. Queloz, S. Udry, T. E. Harrison, Detection of a Neptune-Mass Planet in the $\rho$${}^{1}$ Cancri System Using the Hobby-Eberly Telescope, ApJL614 (2004) L81–L84. arXiv:astro-ph/0408585, doi:10.1086/425561. (10) D. A. Fischer, G. W. Marcy, R. P. Butler, S. S. Vogt, G. Laughlin, G. W. Henry, D. Abouav, K. M. G. Peek, J. T. Wright, J. A. Johnson, C. McCarthy, H. Isaacson, Five Planets Orbiting 55 Cancri, ApJ675 (2008) 790–801. arXiv:0712.3917, doi:10.1086/525512. (11) R. I. Dawson, D. C. Fabrycky, Radial Velocity Planets De-aliased: A New, Short Period for Super-Earth 55 Cnc e, ApJ722 (2010) 937–953. arXiv:1005.4050, doi:10.1088/0004-637X/722/1/937. (12) J. N. Winn, J. M. Matthews, R. I. Dawson, D. Fabrycky, M. J. Holman, T. Kallinger, R. Kuschnig, D. Sasselov, D. Dragomir, D. B. Guenther, A. F. J. Moffat, J. F. Rowe, S. Rucinski, W. W. Weiss, A Super-Earth Transiting a Naked-eye Star, ApJL737 (2011) L18. arXiv:1104.5230, doi:10.1088/2041-8205/737/1/L18. (13) B.-O. Demory, M. Gillon, D. Deming, D. Valencia, S. Seager, B. Benneke, C. Lovis, P. Cubillos, J. Harrington, K. B. 
Stevenson, M. Mayor, F. Pepe, D. Queloz, D. Ségransan, S. Udry, Detection of a transit of the super-Earth 55 Cancri e with warm Spitzer, A&A533 (2011) A114. arXiv:1105.0415, doi:10.1051/0004-6361/201117178. (14) A. Léger, O. Grasset, B. Fegley, F. Codron, A. F. Albarede, P. Barge, R. Barnes, P. Cance, S. Carpy, F. Catalano, C. Cavarroc, O. Demangeon, S. Ferraz-Mello, P. Gabor, J.-M. Grießmeier, J. Leibacher, G. Libourel, A.-S. Maurin, S. N. Raymond, D. Rouan, B. Samuel, L. Schaefer, J. Schneider, P. A. Schuller, F. Selsis, C. Sotin, The extreme physical properties of the CoRoT-7b super-Earth, Icarus213 (2011) 1–11. arXiv:1102.1629, doi:10.1016/j.icarus.2011.02.004. (15) A. Ofir, S. Dreizler, An independent planet search in the Kepler dataset. I. One hundred new candidates and revised Kepler objects of interest, A&A555 (2013) A58. arXiv:1206.5347, doi:10.1051/0004-6361/201219877. (16) S. Rappaport, R. Sanchis-Ojeda, L. A. Rogers, A. Levine, J. N. Winn, The Roche Limit for Close-orbiting Planets: Minimum Density, Composition Constraints, and Application to the 4.2 hr Planet KOI 1843.03, ApJL773 (2013) L15. arXiv:1307.4080, doi:10.1088/2041-8205/773/1/L15. (17) A. M. S. Smith, J. Cabrera, S. Csizmadia, F. Dai, D. Gandolfi, T. Hirano, J. N. Winn, S. Albrecht, R. Alonso, G. Antoniciello, O. Barragán, H. Deeg, P. Eigmüller, M. Endl, A. Erikson, M. Fridlund, A. Fukui, S. Grziwa, E. W. Guenther, A. P. Hatzes, D. Hidalgo, A. W. Howard, H. Isaacson, J. Korth, M. Kuzuhara, J. Livingston, N. Narita, D. Nespral, G. Nowak, E. Palle, M. Pätzold, C. M. Persson, E. Petigura, J. Prieto-Arranz, H. Rauer, I. Ribas, V. Van Eylen, K2-137 b: an Earth-sized planet in a 4.3-h orbit around an M-dwarf, MNRAS474 (2018) 5523–5533. arXiv:1707.04549, doi:10.1093/mnras/stx2891. (18) R. W. Slawson, A. Prša, W. F. Welsh, J. A. Orosz, M. Rucker, N. Batalha, L. R. Doyle, S. G. Engle, K. Conroy, J. Coughlin, T. A. Gregg, T. Fetherolf, D. R. Short, G. Windmiller, D. C. Fabrycky, S. B. Howell, J. M. 
Jenkins, K. Uddin, F. Mullally, S. E. Seader, S. E. Thompson, D. T. Sanderfer, W. Borucki, D. Koch, Kepler Eclipsing Binary Stars. II. 2165 Eclipsing Binaries in the Second Data Release, AJ142 (2011) 160. arXiv:1103.1659, doi:10.1088/0004-6256/142/5/160. (19) G. Kovács, S. Zucker, T. Mazeh, A box-fitting algorithm in the search for periodic transits, A&A391 (2002) 369–377. arXiv:astro-ph/0206099, doi:10.1051/0004-6361:20020802. (20) J. M. Jenkins, D. A. Caldwell, H. Chandrasekaran, J. D. Twicken, S. T. Bryson, E. V. Quintana, B. D. Clarke, J. Li, C. Allen, P. Tenenbaum, H. Wu, T. C. Klaus, J. Van Cleve, J. A. Dotson, M. R. Haas, R. L. Gilliland, D. G. Koch, W. J. Borucki, Initial Characteristics of Kepler Long Cadence Data for Detecting Transiting Planets, ApJL713 (2010) L120–L125. arXiv:1001.0256, doi:10.1088/2041-8205/713/2/L120. (21) T. D. Morton, J. A. Johnson, On the Low False Positive Probabilities of Kepler Planet Candidates, ApJ738 (2011) 170. arXiv:1101.5630, doi:10.1088/0004-637X/738/2/170. (22) A. W. Howard, R. Sanchis-Ojeda, G. W. Marcy, J. A. Johnson, J. N. Winn, H. Isaacson, D. A. Fischer, B. J. Fulton, E. Sinukoff, J. J. Fortney, A rocky composition for an Earth-sized exoplanet, Nature503 (2013) 381–384. arXiv:1310.7988, doi:10.1038/nature12767. (23) F. Pepe, A. C. Cameron, D. W. Latham, E. Molinari, S. Udry, A. S. Bonomo, L. A. Buchhave, D. Charbonneau, R. Cosentino, C. D. Dressing, X. Dumusque, P. Figueira, A. F. M. Fiorenzano, S. Gettel, A. Harutyunyan, R. D. Haywood, K. Horne, M. Lopez-Morales, C. Lovis, L. Malavolta, M. Mayor, G. Micela, F. Motalebi, V. Nascimbeni, D. Phillips, G. Piotto, D. Pollacco, D. Queloz, K. Rice, D. Sasselov, D. Ségransan, A. Sozzetti, A. Szentgyorgyi, C. A. Watson, An Earth-sized planet with an Earth-like density, Nature503 (2013) 377–380. arXiv:1310.7987, doi:10.1038/nature12768. (24) A. P. Hatzes, The detection of Earth-mass planets around active stars. The mass of Kepler-78b, A&A568 (2014) A84. 
arXiv:1407.0853, doi:10.1051/0004-6361/201424025. (25) S. K. Grunblatt, A. W. Howard, R. D. Haywood, Determining the Mass of Kepler-78b with Nonparametric Gaussian Process Estimation, ApJ808 (2015) 127. arXiv:1501.00369, doi:10.1088/0004-637X/808/2/127. (26) R. Sanchis-Ojeda, S. Rappaport, J. N. Winn, A. Levine, M. C. Kotson, D. W. Latham, L. A. Buchhave, Transits and Occultations of an Earth-sized Planet in an 8.5 hr Orbit, ApJ774 (2013) 54. arXiv:1305.4180, doi:10.1088/0004-637X/774/1/54. (27) C. Moutou, J.-F. Donati, D. Lin, R. O. Laine, A. Hatzes, The magnetic properties of the star Kepler-78, MNRAS459 (2016) 1993–2007. arXiv:1605.03255, doi:10.1093/mnras/stw809. (28) R. O. Laine, D. N. C. Lin, Interaction of Close-in Planets with the Magnetosphere of Their Host Stars. II. Super-Earths as Unipolar Inductors and Their Orbital Evolution, ApJ745 (2012) 2. arXiv:1201.1584, doi:10.1088/0004-637X/745/1/2. (29) P. Goldreich, S. Soter, Q in the Solar System, Icarus5 (1966) 375–389. doi:10.1016/0019-1035(66)90051-0. (30) K. C. Patra, J. N. Winn, M. J. Holman, L. Yu, D. Deming, F. Dai, The Apparently Decaying Orbit of WASP-12b, AJ154 (2017) 4. arXiv:1703.06582, doi:10.3847/1538-3881/aa6d75. (31) D. Rouan, H. J. Deeg, O. Demangeon, B. Samuel, C. Cavarroc, B. Fegley, A. Léger, The Orbital Phases and Secondary Transits of Kepler-10b. A Physical Interpretation Based on the Lava-ocean Planet Model, ApJL741 (2011) L30. arXiv:1109.2768, doi:10.1088/2041-8205/741/2/L30. (32) E. S. Kite, B. Fegley, Jr., L. Schaefer, E. Gaidos, Atmosphere-interior Exchange on Hot, Rocky Exoplanets, ApJ828 (2016) 80. arXiv:1606.06740, doi:10.3847/0004-637X/828/2/80. (33) S. E. Gelman, L. T. Elkins-Tanton, S. Seager, Effects of Stellar Flux on Tidally Locked Terrestrial Planets: Degree-1 Mantle Convection and Local Magma Ponds, ApJ735 (2011) 72. doi:10.1088/0004-637X/735/2/72. (34) F. W. Wagner, N. Tosi, F. Sohl, H. Rauer, T. Spohn, Rocky super-Earth interiors. 
Structure and internal dynamics of CoRoT-7b and Kepler-10b, A&A541 (2012) A103. doi:10.1051/0004-6361/201118441. (35) E. D. Lopez, J. J. Fortney, Re-inflated Warm Jupiters around Red Giants, ApJ818 (2016) 4. arXiv:1510.00067, doi:10.3847/0004-637X/818/1/4. (36) J. E. Owen, Y. Wu, Atmospheres of Low-mass Planets: The “Boil-off”, ApJ817 (2016) 107. arXiv:1506.02049, doi:10.3847/0004-637X/817/2/107. (37) H. Chen, L. A. Rogers, Evolutionary Analysis of Gaseous Sub-Neptune-mass Planets with MESA, ApJ831 (2016) 180. arXiv:1603.06596, doi:10.3847/0004-637X/831/2/180. (38) L. Schaefer, B. Fegley, Chemistry of Silicate Atmospheres of Evaporating Super-Earths, ApJL703 (2009) L113–L117. arXiv:0906.1204, doi:10.1088/0004-637X/703/2/L113. (39) X. Huang, G. Á. Bakos, J. D. Hartman, 150 new transiting planet candidates from Kepler Q1-Q6 data, MNRAS429 (2013) 2001–2018. arXiv:1205.6492, doi:10.1093/mnras/sts463. (40) J. N. Winn, R. Sanchis-Ojeda, L. Rogers, E. A. Petigura, A. W. Howard, H. Isaacson, G. W. Marcy, K. C. Schlaufman, P. Cargile, L. Hebb, Absence of a Metallicity Effect for Ultra-short-period Planets, AJ154 (2017) 60. arXiv:1704.00203, doi:10.3847/1538-3881/aa7b7c. (41) M. S. Lundkvist, H. Kjeldsen, S. Albrecht, G. R. Davies, S. Basu, D. Huber, A. B. Justesen, C. Karoff, V. Silva Aguirre, V. van Eylen, C. Vang, T. Arentoft, T. Barclay, T. R. Bedding, T. L. Campante, W. J. Chaplin, J. Christensen-Dalsgaard, Y. P. Elsworth, R. L. Gilliland, R. Handberg, S. Hekker, S. D. Kawaler, M. N. Lund, T. S. Metcalfe, A. Miglio, J. F. Rowe, D. Stello, B. Tingley, T. R. White, Hot super-Earths stripped by their host stars, Nature Communications 7 (2016) 11201. arXiv:1604.05220, doi:10.1038/ncomms11201. (42) J. H. Steffen, J. L. Coughlin, A Population of planetary systems characterized by short-period, Earth-sized planets, Proceedings of the National Academy of Science 113 (2016) 12023–12028. arXiv:1610.03550, doi:10.1073/pnas.1606658113. (43) E. R. Adams, B. Jackson, M. 
Recursion in the classical limit and the neutron-star Compton amplitude

Kays Haddad

Department of Physics and Astronomy, Uppsala University, Box 516, 75120 Uppsala, Sweden; Nordita, Stockholm University and KTH Royal Institute of Technology, Hannes Alfvéns väg 12, 10691 Stockholm, Sweden

e-mail address: [email protected]

Abstract

We study the compatibility of recursive techniques with the classical limit of scattering amplitudes through the construction of the classical Compton amplitude for general spinning compact objects. This is done using BCFW recursion on three-point amplitudes expressed in terms of the classical spin vector and tensor, and expanded to next-to-leading order in $\hbar$ by using the heavy on-shell spinors. Matching to the result of classical computations, we find that lower-point quantum contributions are, in general, required for the recursive construction of classical, spinning, higher-point amplitudes with massive propagators. We are thus led to conclude that BCFW recursion and the classical limit do not commute. In possession of the classical Compton amplitude, we remove non-localities to all orders in spin for opposite graviton helicities, and to fifth order in the same-helicity case. Finally, all possible on-shell contact terms potentially relevant to black-hole scattering at the second post-Minkowskian order are enumerated and written explicitly.

1 Introduction

Since the first observations of gravitational waves from compact binary coalescence LIGOScientific:2016aoc ; LIGOScientific:2017vwq ; LIGOScientific:2018mvr , massive effort from the scattering-amplitudes community has been dedicated to understanding the link between quantum scattering amplitudes and classical gravitational phenomena.
Several of these works have improved the (post-Minkowskian, PM) precision to which we understand compact binaries Cheung:2018wkq ; Bern:2019nnu ; Bern:2019crd ; Cheung:2020gyp ; Kalin:2020fhe ; Bern:2021dqo ; Dlapa:2021npj ; Bern:2021yeh ; Dlapa:2021vgp ; Jakobsen:2022fcj , including in the presence of additional effects such as spin Guevara:2018wpp ; Chung:2018kqs ; Maybee:2019jus ; Guevara:2019fsj ; Damgaard:2019lfh ; Aoude:2020onz ; Bern:2020buy ; Liu:2021zxr ; Kosmopoulos:2021zoq ; Jakobsen:2021lvp ; Jakobsen:2021zvh ; Chen:2021kxt ; Aoude:2022trd ; Bern:2022kto ; Aoude:2022thd ; FebresCordero:2022jts , radiation AccettulliHuber:2020dal ; Herrmann:2021lqe ; DiVecchia:2021bdo ; Herrmann:2021tct ; Bjerrum-Bohr:2021din ; Alessio:2022kwv ; Jakobsen:2022psy ; Jakobsen:2022zsx , tidal effects Cheung:2020sdj ; Haddad:2020que ; Kalin:2020lmz ; Bern:2020uwk ; Cheung:2020gbf ; Aoude:2020ygw ; Heissenberg:2022tsn , and various combinations thereof. Equally as foundational has been the derivation of connections between scattering amplitudes and classical observables Cheung:2018wkq ; Kosower:2018adc ; Cristofoli:2019neg ; Maybee:2019jus ; Kalin:2019rwq ; Bjerrum-Bohr:2019kec ; Kalin:2019inp ; Cristofoli:2021vyo ; Bautista:2021wfy ; Aoude:2021oqj ; Cho:2021arx ; Adamo:2022rmp ; Bautista:2022wjf . Many developments emerging from this mobilization have centered on or been motivated by the need for the efficient extraction of the classically-relevant portion of a scattering amplitude. Along these lines, there have been works involving convenient $\hbar$-counting schemes Kosower:2018adc ; Maybee:2019jus , Lagrangian-level as well as spinor-helicity heavy/classical limits Damgaard:2019lfh ; Aoude:2020onz ; Brandhuber:2021kpo ; Brandhuber:2021eyq ; Bjerrum-Bohr:2023jau , effective field theories (EFTs) with classical spin degrees of freedom (DoFs) Bern:2020buy ; Bern:2022kto , and a worldline EFT obtained by integrating out quantum DoFs Mogull:2020sak ; Jakobsen:2021zvh . 
The efficient computation of classically-relevant amplitudes is a central inspiration for the present work. Specifically, we attempt to qualify the compatibility of BCFW recursion Britto:2004ap ; Britto:2005fq with the classical limit, seeking instances of potential simplifications to the recursive construction of higher-multiplicity amplitudes in this limit. Our conduit for this study is the classical Compton amplitude. While much focus has recently been dedicated to the appropriate Compton amplitude for describing Kerr black holes Arkani-Hamed:2017jhn ; Guevara:2018wpp ; Chung:2018kqs ; Falkowski:2020aso ; Bautista:2021wfy ; Chiodaroli:2021eug ; Aoude:2022thd ; Bern:2022kto ; Cangemi:2022bew ; Bautista:2022wjf , in this paper we focus on deriving this amplitude for a general spinning compact object. Though conceptually simpler – one must "only" enumerate all possible Wilson coefficients and leave their values unspecified – this task is computationally more subtle. Certain serendipitous cancellations occur in the recursive construction of the black-hole Compton amplitude, allowing one to build this amplitude using only leading-in-$\hbar$ information throughout the computation. (In the rest of the paper, we use "black-hole Compton amplitude" and "black-hole limit" to refer to the amplitude with the black-hole values for the linear-in-curvature, spin-induced multipole coefficients, $C_{S^{j}}=1$. We are not concerned with the values of $R^{2}$ Wilson coefficients that describe black holes.) As we will see below, this fortune does not extend to the case of general objects, where subleading-in-$\hbar$ information is needed in intermediate steps to match to known classical results Saketh:2022wap . Apart from phenomenological applications, there are two reasons why an on-shell study of the Compton amplitude for general compact objects is itself timely. The first is the discrepancy between the number of free parameters in the spinning effective field theory of refs.
Bern:2020buy ; Bern:2022kto and worldline theories of spinning objects Levi:2015msa ; Siemonsen:2019dsu ; Jakobsen:2021zvh at linear order in the curvature (but contributing first to the Compton amplitude). An on-shell perspective provides a different take on the total possible number of free parameters, in a setting where relations between different structures may be easier to identify than in the off-shell context. Recently, ref. Kim:2023drc argued that the actions of refs. Bern:2020buy ; Bern:2022kto possess too many degrees of freedom due to incompletely removing unphysical massive modes. Our on-shell analysis corroborates the conclusion of ref. Kim:2023drc , in that we find no freedom to introduce coefficients to the three-point amplitude other than those mapping directly to the parameters of ref. Levi:2015msa , including at next-to-leading order in $\hbar$. The second reason is the burgeoning interest in and necessity for an amplitudes description of Kerr black holes. Extrapolating the properties of Kerr-black-hole scattering at low spin orders, refs. Aoude:2022trd ; Bern:2022kto ; Aoude:2022thd proposed that higher spin orders in Kerr-black-hole scattering would exhibit a favorable high-energy limit and a so-called spin-shift symmetry. However, these conjectures are in tension with recent comparisons to solutions of the Teukolsky equation Bautista:2022wjf , indicating a need for a better understanding of the amplitudes pertinent to Kerr black holes. (Refs. Chiodaroli:2021eug ; Cangemi:2022bew have proposed that Kerr-black-hole amplitudes exhibit a massive higher-spin gauge symmetry. This gauge symmetry uniquely selects the Kerr three-point amplitude, but the dependence of their Compton amplitudes on the spin quantum number of the massive particle necessitates a more careful infinite-spin limit before comparisons to known Kerr Compton amplitudes can be made. We thank Lucile Cangemi, Henrik Johansson, and Paolo Pichini for clarifications about this.)
Computation of the most-general Compton amplitude works towards this end by allowing for comparisons to be made to the computation relevant for the Kerr Compton amplitude. The Compton amplitude for general spin-induced multipoles has been computed recursively in ref. Chen:2021kxt . We do not fully agree with their results, and address this in more detail below. Put succinctly, the cause of the disagreement – as well as a conclusion of our analysis – is that the classical limit and BCFW recursion generally do not commute. Our analysis here postpones the full classical limit until the end of the computation, and produces the opposite-helicity Compton amplitude for general spin-induced multipoles to all spin orders and without any unphysical poles. The result matches the classical computation of ref. Saketh:2022wap where there is overlap. In the same-helicity case we must cure a new type of non-locality appearing already at quadratic order in spin, which we do up to fifth order in spin and again reproduce the results of ref. Saketh:2022wap where there is overlap. Beyond fifth order in spin, a non-local form of the same-helicity amplitude is presented for all spin orders. Consideration of the massless limit in the same-helicity case will hint at a potential extension of the notion of minimal coupling of ref. Arkani-Hamed:2017jhn past three points. Apart from the linear-in-curvature spin-induced multipole coefficients, the identity of the compact object in the Compton amplitude is also dictated by the values for coefficients of contact terms, corresponding to $R^{2}$ operators in an action. Due to the presence of more non-vanishing invariants than at three points, there are an infinite number of such coefficients at each spin order. 
Once we have constructed the factorizable portion of the Compton amplitude, we content ourselves with the enumeration of a certain finite subset of all possible contact deformations of the Compton amplitude – specifically, the set of terms which can potentially contribute to Kerr-black-hole scattering at 2PM. The remainder of this paper is organized as follows. In Section 2 we expand the most-general three-point amplitude up to subleading order in $\hbar$ while expressing it in terms of classically relevant quantities: the spin vector, the spin tensor, and the linear-in-curvature spin-induced multipole coefficients. The three-point amplitude thus expanded is sufficient for the computation of the factorizable part of the Compton amplitude, which we carry out for all helicity configurations in Section 3. Contact deformations of the Compton amplitudes of the variety mentioned above are counted and written explicitly in Section 4. The analysis in this section includes both conservative and dissipative contact terms. A discussion of our results and their implications concludes the paper in Section 5. 2 The most-general three-point amplitude As our analysis hinges on the application of BCFW recursion to construct the Compton amplitude, our starting point is the expression of the three-point amplitude in terms of classically-relevant quantities. The most-general spin-$s$ three-point amplitude in terms of the massive on-shell spinors of ref. 
Arkani-Hamed:2017jhn is Conde:2016izb ; Conde:2016vxs ; Arkani-Hamed:2017jhn $$\displaystyle\left(\mathcal{M}^{s,+}_{3}\right)^{IJ}$$ $$\displaystyle=i\frac{x^{2}}{m^{2s}}\sum_{k=0}^{2s}g_{k}\langle 2^{I}1^{J}\rangle^{\odot(2s-k)}\odot\left(\frac{\langle 2^{I}|q|1^{J}]}{2m}\right)^{\odot k},$$ (1) $$\displaystyle\left(\mathcal{M}^{s,-}_{3}\right)^{IJ}$$ $$\displaystyle=i(-1)^{2s}\frac{x^{-2}}{m^{2s}}\sum_{k=0}^{2s}\tilde{g}_{k}[2^{I}1^{J}]^{\odot(2s-k)}\odot\left(\frac{[2^{I}|q|1^{J}\rangle}{2m}\right)^{\odot k},$$ (2) for an emitted graviton of either helicity. The helicity weights of the gravitons are encoded in the factors $$\displaystyle x\equiv\frac{[q|p_{1}|\xi\rangle}{m\langle q\xi\rangle},\quad x^{-1}\equiv\frac{\langle q|p_{1}|\xi]}{m[q\xi]},$$ (3) for an arbitrary reference vector $\xi^{\mu}$. Amplitudes describing the scattering of massive particles with spins $s_{i}$ are symmetric functions in the $2s_{i}$ little group indices of each massive spinning particle Wigner:1939cj ; Bargmann:1948ck ; weinberg_1995ch2 . The $I=\{I_{1},\dots,I_{2s}\}$ and $J=\{J_{1},\dots,J_{2s}\}$ represent these sets of $2s$ indices for the outgoing and incoming massive legs, respectively. We have used the $\odot$ notation first introduced in ref. Guevara:2018wpp to represent the symmetrization of the tensor product over the little group indices (for example, ${x^{I}}_{J}\odot{y^{I}}_{J}={x^{\{I_{1}}}_{\{J_{1}}{y^{I_{2}\}}}_{J_{2}\}}$, where curly brackets denote normalized symmetrization). Parameters of classical relevance will shortly be introduced with which we will identify the coefficients $g_{k}$ and $\tilde{g}_{k}$. To express these in terms of the classical spin vector up to subleading order in $\hbar$, we will convert the on-shell spinors to heavy on-shell spinors Aoude:2020onz ; Aoude:2022trd .
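As a consistency check (ours, not spelled out in the text), the helicity weight carried by $x$ follows directly from eq. (3). Under the graviton little-group scaling $|q\rangle\to t|q\rangle$, $|q]\to t^{-1}|q]$,

$$x=\frac{[q|p_{1}|\xi\rangle}{m\langle q\xi\rangle}\;\longrightarrow\;\frac{t^{-1}[q|p_{1}|\xi\rangle}{t\,m\langle q\xi\rangle}=t^{-2}x,$$

so $x^{2}\to t^{-4}x^{2}$, which is the scaling $t^{-2h}$ expected of an amplitude emitting a helicity $h=+2$ graviton; $x^{-2}$ likewise carries helicity $-2$. Independence from the reference vector $\xi^{\mu}$ follows from the Schouten identity together with the three-point kinematics: the difference between two choices of $\xi$ is proportional to $\langle q|p_{1}|q]=2\,q\cdot p_{1}$, which vanishes on-shell since $q^{2}=0$ and $p_{1}^{2}=p_{2}^{2}=m^{2}$.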
For a momentum $p^{\mu}=mv^{\mu}+k^{\mu}$, $$\displaystyle|p^{I}\rangle=\frac{m}{\sqrt{m_{k}}}\left(|v^{I}\rangle+\frac{\not{k}}{2m}|v^{I}]\right),$$ $$\displaystyle\quad|p^{I}]=\frac{m}{\sqrt{m_{k}}}\left(|v^{I}]+\frac{\not{k}}{2m}|v^{I}\rangle\right),$$ (4) $$\displaystyle\langle p^{I}|=\frac{m}{\sqrt{m_{k}}}\left(\langle v^{I}|-[v^{I}|\frac{\not{k}}{2m}\right),$$ $$\displaystyle\quad[p^{I}|=\frac{m}{\sqrt{m_{k}}}\left([v^{I}|-\langle v^{I}|\frac{\not{k}}{2m}\right),$$ (5) where $m_{k}\equiv\left(1-\frac{k^{2}}{4m^{2}}\right)m$. Note that the residual momentum $k^{\mu}\sim\mathcal{O}(\hbar)$ Damgaard:2019lfh . Writing $p_{1}^{\mu}=mv^{\mu}+k_{1}^{\mu}$ and $p_{2}^{\mu}=mv^{\mu}+k_{2}^{\mu}=mv^{\mu}+k_{1}^{\mu}-q^{\mu}$, the spinor brackets are $$\displaystyle\langle 2^{I}1^{J}\rangle$$ $$\displaystyle=m\left(\langle v^{I}v^{J}\rangle+\langle v^{I}|q\cdot a_{1/2}|v^{J}\rangle-\frac{i}{2m^{2}}k_{1\mu}q_{\nu}\langle v^{I}|S^{\mu\nu}_{1/2}|v^{J}\rangle\right)+\mathcal{O}(\hbar^{2}),$$ $$\displaystyle[2^{I}1^{J}]$$ $$\displaystyle=m\left([v^{I}v^{J}]-[v^{I}|q\cdot a_{1/2}|v^{J}]-\frac{i}{2m^{2}}k_{1\mu}q_{\nu}[v^{I}|S^{\mu\nu}_{1/2}|v^{J}]\right)+\mathcal{O}(\hbar^{2}),$$ (6) $$\displaystyle\frac{\langle 2^{I}|q|1^{J}]}{2m}$$ $$\displaystyle=m\langle v^{I}|q\cdot a_{1/2}|v^{J}\rangle+\mathcal{O}(\hbar^{2}),\quad\frac{[2^{I}|q|1^{J}\rangle}{2m}=-m[v^{I}|q\cdot a_{1/2}|v^{J}]+\mathcal{O}(\hbar^{2}),$$ where $S^{\mu\nu}_{1/2}\equiv\frac{i}{4}[\gamma^{\mu},\gamma^{\nu}]$ is the Lorentz generator in the spin-1/2 representation, and we have indicated that the ring radius $a^{\mu}$ is accordingly in the spin-1/2 representation. The ring radius is related to the spin vector through $a^{\mu}=S^{\mu}/m$; see Appendix A for our conventions pertaining to the ring radius, as well as some of its germane properties. With eq. 6 in hand, we can express the three-point amplitudes in terms of the classical spin.
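An implicit step in eq. (6) is why the normalization factors of eqs. (4) and (5) leave no trace at this order. Expanding,

$$\frac{m}{\sqrt{m_{k}}}=\sqrt{m}\left(1-\frac{k^{2}}{4m^{2}}\right)^{-1/2}=\sqrt{m}\left(1+\frac{k^{2}}{8m^{2}}+\dots\right),$$

and since $k^{\mu}\sim\mathcal{O}(\hbar)$, the correction is $\mathcal{O}(\hbar^{2})$. Each bracket in eq. (6) therefore carries an overall factor $m^{2}/\sqrt{m_{k_{1}}m_{k_{2}}}=m\left(1+\mathcal{O}(\hbar^{2})\right)$, which is why only the single power of $m$ appears there at next-to-leading order in $\hbar$.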
Focusing on the positive-helicity amplitude and expanding up to next-to-leading order in $\hbar$, $$\displaystyle\left(\mathcal{M}^{s,+}_{3}\right)^{IJ}$$ $$\displaystyle=ix^{2}\sum_{j=0}^{2s}\langle v^{I}v^{J}\rangle^{\odot(2s-j)}\odot\langle v^{I}|q\cdot a|v^{J}\rangle^{\odot(j-1)}\odot\left(\langle v^{I}|q\cdot a|v^{J}\rangle\sum_{k=0}^{j}\frac{g_{k}(2s-k)!}{(2s)!(j-k)!}\right.$$ $$\displaystyle\quad\left.-\frac{i}{2m^{2}}k_{1\mu}q_{\nu}\langle v^{I}|S^{\mu\nu}|v^{J}\rangle\sum_{k=0}^{j}\frac{g_{k}(2s-k)!}{(2s)!(j-k-1)!}\right)+\mathcal{O}(\hbar^{2}),$$ (7) where we interpret $1/(-1)!=0$. Note that we have converted the products of the ring radius and the spin tensor to the spin-$s$ representation, which we denote with no subscript on these quantities. Now, the coefficients $g_{k}$ are not classically relevant,444As such, we do not concern ourselves with the fact that they must depend on the total spin quantum number to preserve spin universality. but they can be related to linear combinations of the spin-induced multipole coefficients $C_{ES^{2k}}$ and $C_{BS^{2k+1}}$ of ref. Levi:2015msa . We adopt the notation of ref. Chung:2018kqs with respect to these coefficients, writing $C_{S^{2k}}\equiv C_{ES^{2k}}$ and $C_{S^{2k+1}}\equiv C_{BS^{2k+1}}$. Matching to the three-point amplitude that can be derived from the worldline action there, it is straightforward to show that555See ref. Chung:2018kqs for details of the extraction of an on-shell three-point amplitude from the worldline action of ref. Levi:2015msa . Ref. Chen:2021kxt finds similar expressions relating the amplitude and spinning-worldline coefficients.
$$\displaystyle\sum_{k=0}^{j}g_{k}\frac{(2s-k)!}{(2s)!(j-k)!}=\frac{C_{S^{j}}}{j!}\quad\Rightarrow\quad g_{k}=\frac{(2s)!}{(2s-k)!}\sum_{n=0}^{k}(-1)^{n+k}\frac{C_{S^{n}}}{n!(k-n)!}.$$ (8) Consequently, $$\displaystyle\left(\mathcal{M}^{s,+}_{3}\right)^{IJ}=ix^{2}\left(\sum_{j=0}^{2s}\frac{C_{S^{j}}}{j!}\langle v^{I}v^{J}\rangle^{\odot 2s-j}\odot\langle v^{I}|q\cdot a|v^{J}\rangle^{\odot j}\right.$$ (9) $$\displaystyle\qquad\left.-\frac{i}{2m^{2}}k_{1\mu}q_{\nu}\langle v^{I}|S^{\mu\nu}|v^{J}\rangle\odot\sum_{j=1}^{2s}\frac{C_{S^{j-1}}}{(j-1)!}\langle v^{I}v^{J}\rangle^{\odot 2s-j}\odot\langle v^{I}|q\cdot a|v^{J}\rangle^{\odot(j-1)}\right)+\mathcal{O}(\hbar^{2}).$$ The two coefficients $C_{S^{0}}$ and $C_{S^{1}}$ are equal to $1$ for any object, while the coefficients $C_{S^{j\geq 2}}$ are all equal to $1$ for black holes only Levi:2015msa . This form of the amplitude with open little group indices is important to correctly account for polarization sums over massive internal states when computing the Compton amplitude recursively. Nevertheless, the amplitude can be compactified by using the bold notation as formulated in ref. Chiodaroli:2021eug and employed in ref. Aoude:2022trd : $$\displaystyle|\boldsymbol{v}\rangle\equiv|v^{I}\rangle z_{p,I},\quad|\boldsymbol{v}]\equiv|v^{I}]z_{p,I},$$ $$\displaystyle\langle\bar{\boldsymbol{v}}|\equiv\bar{z}_{p,I}\langle v^{I}|,\quad[\bar{\boldsymbol{v}}|\equiv\bar{z}_{p,I}[v^{I}|,$$ where $z_{p,I}$ is a complex auxiliary variable and $\bar{z}_{p,I}$ its complex conjugate. 
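The inversion in eq. 8 is a binomial inversion and can be verified symbolically for a fixed value of the spin. The sketch below (using sympy, with $2s=4$ chosen arbitrarily and the $C_{S^{n}}$ kept symbolic) is an illustrative check, not part of the derivation.

```python
import sympy as sp
from sympy import factorial

two_s = 4  # 2s; spin s = 2 chosen arbitrarily for the check
C = sp.symbols(f'C0:{two_s + 1}')  # symbolic C_{S^0}, ..., C_{S^{2s}}

def g(k):
    # g_k from the inversion in eq. (8)
    return factorial(two_s) / factorial(two_s - k) * sum(
        (-1)**(n + k) * C[n] / (factorial(n) * factorial(k - n))
        for n in range(k + 1))

# defining relation: sum_{k<=j} g_k (2s-k)!/((2s)!(j-k)!) = C_{S^j}/j!
for j in range(two_s + 1):
    lhs = sum(g(k) * factorial(two_s - k) / (factorial(two_s) * factorial(j - k))
              for k in range(j + 1))
    assert sp.expand(lhs - C[j] / factorial(j)) == 0
```

The $(2s)!$ factors cancel in the composition of the two sums, so the same check passes for any choice of `two_s`.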
Contracting the amplitude with $2s$ factors of this auxiliary variable for each massive leg, $$\displaystyle\mathcal{M}^{s,+}_{3}$$ $$\displaystyle=ix^{2}\langle\bar{\boldsymbol{v}}\boldsymbol{v}\rangle^{2s}\left(\sum_{j=0}^{2s}\frac{C_{S^{j}}}{j!}\left(q\cdot\mathfrak{a}\right)^{j}-\frac{i}{2m^{2}}k_{1\mu}q_{\nu}\mathfrak{s}^{\mu\nu}\sum_{j=0}^{2s-1}\frac{C_{S^{j}}}{j!}(q\cdot\mathfrak{a})^{j}\right)+\mathcal{O}(\hbar^{2}).$$ (10) We have identified (products of) the classical ring radius and spin tensor – $\mathfrak{a}^{\mu}$ and $\mathfrak{s}^{\mu\nu}$ respectively – through Bern:2020buy ; Cangemi:2022abk 666The symmetrization of the spin in the expectation value was not important in ref. Aoude:2022trd because the product of spin vectors there was contracted with a totally-symmetric tensor when identified in the amplitude. $$\displaystyle\mathfrak{a}^{\mu_{1}}\dots\mathfrak{a}^{\mu_{n}}$$ $$\displaystyle\equiv\frac{\langle\bar{\boldsymbol{v}}|^{2s}\{a^{\mu_{1}},\ldots,a^{\mu_{n}}\}|\boldsymbol{v}\rangle^{2s}}{\langle\bar{\boldsymbol{v}}\boldsymbol{v}\rangle^{2s}}=\frac{[\bar{\boldsymbol{v}}|^{2s}\{a^{\mu_{1}},\ldots,a^{\mu_{n}}\}|\boldsymbol{v}]^{2s}}{[\bar{\boldsymbol{v}}\boldsymbol{v}]^{2s}},$$ (11) $$\displaystyle\mathfrak{s}^{\alpha\beta}\mathfrak{a}^{\mu_{1}}\dots\mathfrak{a}^{\mu_{n}}$$ $$\displaystyle\equiv\frac{\langle\bar{\boldsymbol{v}}|^{2s}\{S^{\alpha\beta},a^{\mu_{1}},\ldots,a^{\mu_{n}}\}|\boldsymbol{v}\rangle^{2s}}{\langle\bar{\boldsymbol{v}}\boldsymbol{v}\rangle^{2s}}=\frac{[\bar{\boldsymbol{v}}|^{2s}\{S^{\alpha\beta},a^{\mu_{1}},\ldots,a^{\mu_{n}}\}|\boldsymbol{v}]^{2s}}{[\bar{\boldsymbol{v}}\boldsymbol{v}]^{2s}}.$$ (12) Finally, once in possession of the positive-helicity amplitude, its negative-helicity counterpart is easily obtained by swapping angle and square brackets and changing the sign on $q\cdot\mathfrak{a}$, as can be seen by inspection of eq. 
6: $$\displaystyle\mathcal{M}^{s,-}_{3}$$ $$\displaystyle=ix^{-2}\langle\bar{\boldsymbol{v}}\boldsymbol{v}\rangle^{2s}\left(\sum_{j=0}^{2s}\frac{C_{S^{j}}}{j!}\left(-q\cdot\mathfrak{a}\right)^{j}-\frac{i}{2m^{2}}k_{1\mu}q_{\nu}\mathfrak{s}^{\mu\nu}\sum_{j=0}^{2s-1}\frac{C_{S^{j}}}{j!}(-q\cdot\mathfrak{a})^{j}\right)+\mathcal{O}(\hbar^{2}).$$ (13) Note that we switched back from square to angle brackets to absorb the overall $(-1)^{2s}$ in eq. 2. In the infinite-spin limit, one should drop the overall factors of $\langle\bar{\boldsymbol{v}}\boldsymbol{v}\rangle^{2s}$ and take $s\rightarrow\infty$ in the upper bounds of the sums in eqs. 10 and 13. However, we must postpone this procedure until after the Compton amplitudes have been constructed for arbitrary, but finite, $s$: the presence of spinors is necessary to correctly perform massive polarization sums in intermediate steps. The second terms in the brackets of both eqs. 10 and 13 are subleading in $\hbar$. While they are needed to correctly construct the classical Compton amplitude, they are irrelevant to classical physics at three points.777One could also set $k_{1}^{\mu}=0$ by reparametrizing the heavy momentum. This not only eliminates the second terms in the brackets of eqs. 10 and 13, but sets all subleading-in-$\hbar$ terms to zero. Indeed, dropping these terms and taking the black-hole limit $C_{S^{j}}=1$, we recover the spin exponential characteristic of Kerr black holes at three points Vines:2017hyw ; Guevara:2018wpp . We thus see the potential to introduce new Wilson coefficients that would not affect the classical three-point amplitude but would enter in the classical Compton amplitude. Specifically, a bottom-up construction of a three-point amplitude in powers of $\hbar$ would require that we give the second terms in the brackets of eqs. 10 and 13 coefficients different from $C_{S^{j}}$. 
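The spin exponential of the Kerr limit mentioned above is simply the resummation $\sum_{j}(\pm q\cdot\mathfrak{a})^{j}/j!\rightarrow e^{\pm q\cdot\mathfrak{a}}$ once all $C_{S^{j}}=1$ and the upper bound is taken to infinity. A minimal symbolic illustration (sympy, with `qa` standing for $q\cdot\mathfrak{a}$):

```python
import sympy as sp

qa = sp.symbols('qa')  # stands for q . a
N = 12

# with all C_{S^j} = 1, the multipole sum is a truncation of the exponential series
partial = sum(qa**j / sp.factorial(j) for j in range(N))
exact = sp.exp(qa).series(qa, 0, N).removeO()
assert sp.expand(partial - exact) == 0
```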
However, knowledge of the underlying – quantum – theory indicates that there are in fact no additional parameters to the $C_{S^{j}}$ if we are to match to the classical three-point amplitude, as we have derived above. One would reach the same conclusion from the bottom up if, in addition to enumerating all possible on-shell structures, one also imposes invariance of the amplitude under reparametrization of the heavy momentum DUGAN1992142 ; Luke:1992cs . We have derived the three-point amplitudes for the emission of an arbitrary-helicity graviton from a spin-$s$ massive particle up to subleading order in $\hbar$ and in terms of classically relevant quantities (the ring radius, spin tensor, and spin-induced multipole coefficients). This is all the input we need to construct the classical Compton amplitude using recursive methods. 3 BCFW construction of Compton scattering We turn now to the sewing of the three-point amplitudes derived in the previous section into Compton amplitudes. Inspection of the factorization channels will validate our assertion that BCFW recursion and the classical limit do not commute, as we will see interference between quantum and superclassical terms that generally does not vanish. We begin with the opposite-helicity case before proceeding to same-helicity scattering,888Note that the opposite- and same-helicity amplitudes are often referred to as helicity-preserving and helicity-reversing amplitudes respectively in the general relativity literature, e.g. refs. Dolan:2008kf ; Saketh:2022wap . This nomenclature reflects momentum conventions where one graviton is incoming and one is outgoing. and take both gravitons to be outgoing and the initial matter momentum to be incoming. 3.1 Opposite-helicity Compton scattering We label the negative-helicity graviton’s momentum with $q_{3}^{\mu}$, while the positive-helicity graviton has momentum $q_{4}^{\mu}$. 
We use the $[3,4\rangle$ shift to construct the amplitude: $$\displaystyle|\hat{3}]=|3]+z|4],\quad|\hat{4}\rangle=|4\rangle-z|3\rangle.$$ (14) Two factorization channels comprise the amplitude under this shift, which are shown in fig. 1. On either cut, the factors of $x$ in eqs. 10 and 13 can be written as $$\displaystyle\hat{x}_{4}=\frac{y}{m\langle 43\rangle},\quad\hat{x}_{3}^{-1}=\frac{y}{m[34]},$$ (15) where $y\equiv 2p_{1}\cdot w$ and $w^{\mu}\equiv[4|\bar{\sigma}^{\mu}|3\rangle/2$. The inverse powers of the mass will cancel with the overall coupling, so we omit them in the following. The shifted momenta on the cuts are $$\displaystyle\hat{q}_{3,1}^{\mu}$$ $$\displaystyle=\hat{q}_{3}^{\mu}-\frac{s_{34}}{2y}w^{\mu},\quad\hat{q}_{3,2}^{\mu}=\hat{q}_{3}^{\mu}+\frac{s_{34}}{2y}w^{\mu},$$ (16) $$\displaystyle\hat{q}_{4,1}^{\mu}$$ $$\displaystyle=\hat{q}_{4}^{\mu}+\frac{s_{34}}{2y}w^{\mu},\quad\hat{q}_{4,2}^{\mu}=\hat{q}^{\mu}_{4}-\frac{s_{34}}{2y}w^{\mu},$$ (17) where $$\displaystyle\hat{q}_{3}^{\mu}\equiv q_{3}^{\mu}-\frac{t_{14}-t_{13}}{2y}w^{\mu},\quad\hat{q}_{4}^{\mu}\equiv q_{4}^{\mu}+\frac{t_{14}-t_{13}}{2y}w^{\mu}.$$ (18) Summing the two factorization channels yields the spin-$s$ amplitude: $$\displaystyle\mathcal{M}^{s,-+}_{4}$$ $$\displaystyle=\mathcal{F}_{1}^{s,-+}+\mathcal{F}_{2}^{s,-+},$$ (19) $$\displaystyle\mathcal{F}_{1}^{s,-+}$$ $$\displaystyle\equiv\frac{\mathcal{M}_{R}(\hat{P}_{13,I}^{s},-\hat{q}_{4,1}^{+},-p_{2}^{s})\mathcal{M}_{L}(p_{1}^{s},-\hat{q}_{3,1}^{-},-\hat{P}_{13}^{s,I})}{t_{13}},$$ $$\displaystyle\mathcal{F}_{2}^{s,-+}$$ $$\displaystyle\equiv\frac{\mathcal{M}_{R}(\hat{P}_{14,I}^{s},-\hat{q}_{3,2}^{-},-p_{2}^{s})\mathcal{M}_{L}(p_{1}^{s},-\hat{q}_{4,2}^{+},-\hat{P}_{14}^{s,I})}{t_{14}}.$$ The labels $R$ and $L$ indicate whether the amplitude is on the right- or left-hand side of the cut, which affects the sign of the polarization sum. 
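The defining properties of the shift in eq. 14 (conservation of the total graviton momentum, and masslessness of the shifted momenta for any $z$) can be checked numerically by building each massless momentum as a rank-one $2\times 2$ bispinor $p_{a\dot{b}}=\lambda_{a}\tilde{\lambda}_{\dot{b}}$. The snippet below is an illustrative sketch with random complex spinors, not tied to the specific kinematics above.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_spinor():
    return rng.normal(size=2) + 1j * rng.normal(size=2)

# holomorphic and antiholomorphic spinors for legs 3 and 4
lam3, lamt3 = rand_spinor(), rand_spinor()
lam4, lamt4 = rand_spinor(), rand_spinor()
z = 0.7 + 0.3j  # arbitrary complex shift parameter

# [3,4> shift, eq. (14): |3] -> |3] + z|4],  |4> -> |4> - z|3>
lamt3_h = lamt3 + z * lamt4
lam4_h = lam4 - z * lam3

# momenta as rank-one bispinors p_{a bdot} = lambda_a lambdatilde_bdot
p3, p4 = np.outer(lam3, lamt3), np.outer(lam4, lamt4)
p3_h, p4_h = np.outer(lam3, lamt3_h), np.outer(lam4_h, lamt4)

# the shift conserves the total momentum ...
assert np.allclose(p3 + p4, p3_h + p4_h)
# ... and keeps both momenta on the massless shell
# (det of the bispinor is proportional to p^2 and vanishes for rank-one matrices)
assert abs(np.linalg.det(p3_h)) < 1e-12
assert abs(np.linalg.det(p4_h)) < 1e-12
```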
We have defined $\hat{P}_{13}^{\mu}=p_{1}^{\mu}-\hat{q}_{3,1}^{\mu}$, $\hat{P}_{14}^{\mu}=p_{1}^{\mu}-\hat{q}_{4,2}^{\mu}$, $t_{1i}=-2p_{1}\cdot q_{i}$, and $p_{2}^{\mu}$ is determined by momentum conservation. Negative momentum labels in the amplitudes represent outgoing momenta. Since the three-point amplitudes are $\mathcal{O}(\hbar^{0})$, we see that both factorization channels are $\mathcal{O}(\hbar^{-1})$. From previous analyses Arkani-Hamed:2017jhn ; Chung:2018kqs ; Guevara:2018wpp ; Johansson:2019dnu ; Aoude:2020onz ; Aoude:2022trd it is known that the Compton amplitude scales as $\mathcal{O}(\hbar^{0})$ in the classical limit, so $\mathcal{O}(\hbar)$ terms from the three-point amplitudes are needed to capture all $\mathcal{O}(\hbar^{0})$ contributions to the Compton amplitude, as advertised. On the left-hand sides of the cuts we can take $k_{1}^{\mu}=0$ in eqs. 10 and 13, but on the right-hand sides we must take $k_{1}^{\mu}=-\hat{q}_{3,1}^{\mu}$ or $-\hat{q}_{4,2}^{\mu}$, depending on the cut. We see in eq. 16 the second source of $\mathcal{O}(\hbar\times 1/\hbar)$ effects. Namely, the parts of the shifted momenta proportional to $w^{\mu}$ are $\mathcal{O}(\hbar^{2})$ whereas the rest of the shifted momenta are $\mathcal{O}(\hbar)$. There is one more source of interference between quantum and superclassical effects that must be accounted for: the reduction of spin structures after polarization sums have been taken over internal massive states. For example, in the spin-1/2 representation only one factor of the spin vector can appear between a pair of spinors.
This leads to the relations $$\displaystyle\hat{q}_{4,1\mu}\hat{q}_{3,1\nu}\langle\bar{\boldsymbol{v}}|a^{\mu}_{1/2}|v_{I}\rangle[v^{I}|a^{\nu}_{1/2}|\boldsymbol{v}]$$ $$\displaystyle=\frac{i}{2m^{2}}\hat{q}_{3,1\mu}\hat{q}_{4,1\nu}\langle\bar{\boldsymbol{v}}|S^{\mu\nu}_{1/2}|\boldsymbol{v}\rangle+\mathcal{O}(\hbar^{2}),$$ (20a) $$\displaystyle\hat{q}_{3,2\mu}\hat{q}_{4,2\nu}[\bar{\boldsymbol{v}}|a^{\mu}_{1/2}|v_{I}]\langle v^{I}|a^{\nu}_{1/2}|\boldsymbol{v}\rangle$$ $$\displaystyle=\frac{i}{2m^{2}}\hat{q}_{4,2\mu}\hat{q}_{3,2\nu}[\bar{\boldsymbol{v}}|S^{\mu\nu}_{1/2}|\boldsymbol{v}]+\mathcal{O}(\hbar^{2}),$$ (20b) for spin-1/2 external states. By expressing the three-point amplitudes of the previous section using spin vectors and tensors in the spin-1/2 representation, such reductions are easy to perform for particles of any spin. See Appendix A for more details. To identify a difference between the black-hole computation and that for a general object, let us examine the individual factorization channels. Accounting for all $\mathcal{O}(\hbar\times 1/\hbar)$ effects, the factorization channels in the infinite-spin limit take the compact forms999We can take the infinite-spin limit and drop the overall spinor contraction $\langle\bar{\boldsymbol{v}}\boldsymbol{v}\rangle^{2s}$ now because we have evaluated the polarization sums. 
$$\displaystyle\mathcal{F}_{1}^{\infty,-+}$$ $$\displaystyle=-\frac{y^{4}}{t_{13}s_{34}^{2}}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\frac{\left(\hat{q}_{4}\cdot\mathfrak{a}\right)^{j}\left(-\hat{q}_{3}\cdot\mathfrak{a}\right)^{k}}{j!k!}$$ $$\displaystyle\qquad\times\left[C_{S^{k}}C_{S^{j}}-\left(C_{S^{j}}-C_{S^{j+1}}\right)\left(C_{S^{k}}-C_{S^{k+1}}\right)\frac{s_{34}}{2y}w\cdot\mathfrak{a}\right],$$ (21a) $$\displaystyle\mathcal{F}_{2}^{\infty,-+}$$ $$\displaystyle=-\frac{y^{4}}{t_{14}s_{34}^{2}}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\frac{\left(\hat{q}_{4}\cdot\mathfrak{a}\right)^{j}\left(-\hat{q}_{3}\cdot\mathfrak{a}\right)^{k}}{j!k!}$$ $$\displaystyle\qquad\times\left[C_{S^{k}}C_{S^{j}}+\left(C_{S^{j}}-C_{S^{j+1}}\right)\left(C_{S^{k}}-C_{S^{k+1}}\right)\frac{s_{34}}{2y}w\cdot\mathfrak{a}\right].$$ (21b) We see in these equations the proof of our statement that BCFW recursion and the classical limit do not commute. The second terms in the square brackets above are subleading in $\hbar$ relative to the first terms, and are the $\mathcal{O}(\hbar\times 1/\hbar)$ terms we have kept track of. Since they have different signs in both factorization channels, they do not drop by one power of $\hbar$ when both channels are added, unlike the first terms. Hence these subleading terms in $\mathcal{F}_{i}$ are not subleading in the amplitude. Furthermore, in the black-hole limit $C_{S^{j}}=1$, these contributions vanish and the $\mathcal{F}_{i}$ are uniform in $\hbar$, demonstrating the cancellations in the black-hole case alluded to in the introduction. While the black-hole limit implies that the factorization channels are uniform in $\hbar$, the converse is also true. Specifically, requiring that the factorization channels are uniform in $\hbar$ imposes either $C_{S^{j}}=0$ or $C_{S^{j}}=C_{S^{0}}$ for all $j$. However, as $C_{S^{0}}=C_{S^{1}}=1$ for any gravitating object Levi:2015msa , the former condition cannot be satisfied. 
Thus, the uniformity in $\hbar$ of the factorization channels is equivalent to scattering an object with the linear-in-curvature induced multipoles of a Kerr black hole. As mentioned above, this non-uniformity in $\hbar$ does not persist once the factorization channels are combined into the amplitude. We will see hints of the exceptionality of the Kerr-black-hole spin-induced multipoles at the level of the amplitude in the same-helicity case, in a way which will be more reminiscent of the notion of minimal coupling of ref. Arkani-Hamed:2017jhn . Summing the two factorization channels gives the amplitude: $$\displaystyle\mathcal{M}^{\infty,-+}_{4}$$ $$\displaystyle=\frac{y^{4}}{t_{13}t_{14}s_{34}}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\frac{\left(\hat{q}_{4}\cdot\mathfrak{a}\right)^{j}\left(-\hat{q}_{3}\cdot\mathfrak{a}\right)^{k}}{j!k!}$$ $$\displaystyle\qquad\times\left[C_{S^{k}}C_{S^{j}}+\left(C_{S^{j}}-C_{S^{j+1}}\right)\left(C_{S^{k}}-C_{S^{k+1}}\right)\frac{t_{14}-t_{13}}{2y}w\cdot\mathfrak{a}\right],$$ (22) valid up to fourth order in spin. Expanding up to third order in spin, we reproduce the helicity-preserving amplitude of ref. Saketh:2022wap . Up to fourth order in spin, we agree with the result derived from the action of ref. Bern:2022kto for certain choices of their additional parameters.101010We thank Andres Luna and Fei Teng for sharing unpublished Compton amplitudes. In the black-hole limit the second term in square brackets is vanishing, thus recovering the spin-exponential of the Compton amplitude in the form presented in ref. Aoude:2020onz . As in the black-hole case, unphysical poles in $y$ develop above fourth order in spin. We can remove them without affecting factorization properties exactly as was done in ref. Aoude:2022trd . We must first isolate the problematic parts of the amplitude, which can be done by plugging in eq. 18 and using the binomial theorem. 
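The passage from eq. 22 to eq. 23 rests on expanding $(\hat{q}_{4}\cdot\mathfrak{a})^{j}(-\hat{q}_{3}\cdot\mathfrak{a})^{k}$ with the binomial theorem, using eq. 18 in the form $\hat{q}_{4}\cdot\mathfrak{a}=q_{4}\cdot\mathfrak{a}+x/2$ and $-\hat{q}_{3}\cdot\mathfrak{a}=-q_{3}\cdot\mathfrak{a}+x/2$, where $x\equiv(t_{14}-t_{13})\,w\cdot\mathfrak{a}/y$. A small symbolic check of this reorganization (illustrative only; the scalars are treated as commuting symbols):

```python
import sympy as sp

q3a, q4a, x = sp.symbols('q3a q4a x')  # q3.a, q4.a, and x = (t14 - t13) w.a / y

for j in range(4):
    for k in range(4):
        # eq. (18) gives qhat4.a = q4.a + x/2 and -qhat3.a = -q3.a + x/2
        lhs = (q4a + x/2)**j * (-q3a + x/2)**k
        # double binomial expansion appearing in eq. (23)
        rhs = sum(sp.binomial(j, n1) * sp.binomial(k, n2)
                  * q4a**(j - n1) * (-q3a)**(k - n2) * (x/2)**(n1 + n2)
                  for n1 in range(j + 1) for n2 in range(k + 1))
        assert sp.expand(lhs - rhs) == 0
```

Collecting the powers of $x$ into the $K_{n}$ of eq. 24 then yields eq. 23.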
Doing so and collecting like terms gives $$\displaystyle\mathcal{M}^{\infty,-+}_{4}$$ $$\displaystyle=\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\sum_{n_{1}=0}^{j}\sum_{n_{2}=0}^{k}{{j}\choose{n_{1}}}{{k}\choose{n_{2}}}\frac{\left(q_{4}\cdot\mathfrak{a}\right)^{j-n_{1}}\left(-q_{3}\cdot\mathfrak{a}\right)^{k-n_{2}}}{2^{n_{1}+n_{2}}j!k!}$$ $$\displaystyle\qquad\times\left[C_{S^{k}}C_{S^{j}}K_{n_{1}+n_{2}}+\frac{1}{2}\left(C_{S^{j}}-C_{S^{j+1}}\right)\left(C_{S^{k}}-C_{S^{k+1}}\right)K_{n_{1}+n_{2}+1}\right],$$ (23) where $$\displaystyle K_{n}\equiv\frac{y^{4}}{t_{13}t_{14}s_{34}}\left(\frac{t_{14}-t_{13}}{y}w\cdot\mathfrak{a}\right)^{n}.$$ (24) Unphysical poles arise in terms containing $K_{n\geq 5}$, and can be removed without affecting factorization properties by replacing $$\displaystyle K_{n}\rightarrow\bar{K}_{n}$$ $$\displaystyle\equiv\begin{cases}K_{n},&n\leq 4,\\ K_{4}L_{n-4}-K_{3}\mathfrak{s}_{2}L_{n-5},&n>4,\end{cases}$$ (25) with $$\displaystyle L_{m}$$ $$\displaystyle\equiv\sum_{j=0}^{\lfloor m/2\rfloor}\binom{m+1}{2j+1}\mathfrak{s}^{m-2j}_{1}(\mathfrak{s}^{2}_{1}-\mathfrak{s}_{2})^{j},$$ (26) $$\displaystyle\mathfrak{s}_{1}$$ $$\displaystyle\equiv(q_{3}-q_{4})\cdot\mathfrak{a},\quad\mathfrak{s}_{2}\equiv-4(q_{3}\cdot\mathfrak{a})(q_{4}\cdot\mathfrak{a})+s_{34}\mathfrak{a}^{2}.$$ We thus arrive at the final, local result – modulo contact terms – for the opposite-helicity Compton amplitude for general objects and all spins: $$\displaystyle\mathcal{M}^{\infty,-+}_{4}$$ $$\displaystyle=\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\sum_{n_{1}=0}^{j}\sum_{n_{2}=0}^{k}{{j}\choose{n_{1}}}{{k}\choose{n_{2}}}\frac{\left(q_{4}\cdot\mathfrak{a}\right)^{j-n_{1}}\left(-q_{3}\cdot\mathfrak{a}\right)^{k-n_{2}}}{2^{n_{1}+n_{2}}j!k!}$$ $$\displaystyle\qquad\times\left[C_{S^{k}}C_{S^{j}}\bar{K}_{n_{1}+n_{2}}+\frac{1}{2}\left(C_{S^{j}}-C_{S^{j+1}}\right)\left(C_{S^{k}}-C_{S^{k+1}}\right)\bar{K}_{n_{1}+n_{2}+1}\right].$$ (27) Contact terms will be considered in the next section. 
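As an aside (our observation, not stated above), the polynomials $L_{m}$ of eq. 26 obey the Chebyshev-like two-term recursion $L_{m}=2\mathfrak{s}_{1}L_{m-1}-\mathfrak{s}_{2}L_{m-2}$, with $L_{0}=1$ and $L_{1}=2\mathfrak{s}_{1}$, which makes them cheap to generate at high spin orders. A symbolic check, with `s1` and `s2` standing in for $\mathfrak{s}_{1}$ and $\mathfrak{s}_{2}$:

```python
import sympy as sp

s1, s2 = sp.symbols('s1 s2')  # stand-ins for the scalars in eq. (26)

def L(m):
    # L_m as defined in eq. (26)
    return sum(sp.binomial(m + 1, 2*j + 1) * s1**(m - 2*j) * (s1**2 - s2)**j
               for j in range(m // 2 + 1))

# two-term recursion: L_m = 2 s1 L_{m-1} - s2 L_{m-2}
for m in range(2, 8):
    assert sp.expand(L(m) - (2*s1*L(m - 1) - s2*L(m - 2))) == 0
```

The recursion follows from writing $L_{m}=(r_{+}^{m+1}-r_{-}^{m+1})/(r_{+}-r_{-})$ with $r_{\pm}=\mathfrak{s}_{1}\pm\sqrt{\mathfrak{s}_{1}^{2}-\mathfrak{s}_{2}}$, the roots of $r^{2}-2\mathfrak{s}_{1}r+\mathfrak{s}_{2}=0$.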
Let us end by briefly commenting on previous attempts to construct the Compton amplitude using BCFW recursion on a classical three-point amplitude, namely refs. Chen:2021kxt ; Chen:2022yxw . It was stated in ref. Chen:2021kxt that the spin dependence of both cuts is the same in the black-hole limit. While this statement is true, it is the result of cancellations among three quantum $\times$ superclassical effects which we have seen above: 1) $\mathcal{O}(\hbar)$ parts of the three-point amplitudes; 2) $\mathcal{O}(\hbar^{2})$ parts of the shifted momenta; 3) the reduction of products of spin vectors after the polarization sum over massive internal states. The authors of refs. Chen:2021kxt ; Chen:2022yxw missed the first of these effects, and their accounting of the latter two did not produce $\mathcal{O}(\hbar\times 1/\hbar)$ terms.111111We thank Jung-Wook Kim for discussions about this. Though inconsequential in the black-hole limit, the lack of such effects renders the results of those previous analyses for general spin-induced multipoles discrepant with refs. Bern:2022kto ; Saketh:2022wap ; Levi:2022dqm , as well as with our results above. A concrete example of this disagreement is eq. (B.27) of ref. Chen:2021kxt , which is missing $C_{S^{2}}^{2}$ contributions at cubic order in spin. 3.2 Same-helicity Compton scattering The same-helicity Compton amplitude is much simpler than its opposite-helicity cousin in the case of black-hole scattering, possessing no unphysical poles at any spin order while also expressible as a spin exponential Johansson:2019dnu ; Aoude:2020onz . Ironically, then, in the case of general objects, the computation of the amplitude for this helicity configuration is more involved than for opposite helicities. An attempt to compute this amplitude using recursive techniques can also be found in ref. Chen:2021kxt .
Apart from missing $\mathcal{O}(\hbar\times 1/\hbar)$ contributions, the result there possesses unphysical poles at quadratic order in spin and above, which are not present in the classical computation of ref. Saketh:2022wap .121212The helicity-reversing amplitude of ref. Saketh:2022wap has spurious singularities above linear order in spin as $\theta\rightarrow\pi$. We have written their amplitude in a manifestly local form in eq. 60. We thank Justin Vines for discussions about this. In this section we account for the missing contributions to their amplitude and remove the unphysical poles arising from the BCFW computation. To evaluate the two-positive configuration we begin with the same shift as in the opposite-helicity case. Accounting for all $\mathcal{O}(\hbar\times 1/\hbar)$ effects, we find the (non-local) infinite-spin, same-helicity amplitude to be $$\displaystyle\mathcal{M}_{4}^{\infty,++}$$ $$\displaystyle=\frac{y_{++}^{4}}{t_{13}t_{14}s_{34}}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\frac{\left(\hat{q}_{4}\cdot\mathfrak{a}\right)^{j}\left(\hat{q}_{3}\cdot\mathfrak{a}\right)^{k}}{j!k!}$$ $$\displaystyle\qquad\times\left[C_{S^{j}}C_{S^{k}}+(C_{S^{j}}-C_{S^{j+1}})(C_{S^{k}}+C_{S^{k+1}})\frac{t_{14}-t_{13}}{2y}w\cdot\mathfrak{a}\right]+\mathcal{B}.$$ (28) We have defined $w^{\mu}_{++}\equiv[4|p_{1}\sigma^{\mu}|3]/2m$, so that $y_{++}\equiv 2p_{1}\cdot w_{++}=m[43]$. All $\mathcal{O}(\hbar\times 1/\hbar)$ contributions appear in the second term in square brackets. Similarly to the opposite-helicity case, we can see that these contributions vanish in the black-hole limit, such that we recover the spin-exponential form of the amplitude in ref. Aoude:2020onz . The recursive approach taken has missed BCFW boundary terms $\mathcal{B}\neq 0$, which is signaled by the development of unphysical singularities in $y$ above linear order in spin and since the amplitude does not have the expected $q_{3}\leftrightarrow q_{4}$ crossing symmetry. 
Interestingly, in the black-hole case the BCFW computation produces a local and crossing-symmetric amplitude, so boundary terms are not needed in this case Johansson:2019dnu . In the general case, however, our task has become to determine the appropriate boundary terms to restore both locality and crossing symmetry. We start with the latter. The missing crossing symmetry can be seen by noting that $w^{\mu}\rightarrow\bar{w}^{\mu}$ under $q_{3}\leftrightarrow q_{4}$. The origin of this asymmetry is that the $[3,4\rangle$ BCFW shift we have used does not treat the two gravitons identically. A remedy to this is to simply average the results of the $[3,4\rangle$ and $[4,3\rangle$ shifts. Since both shifts produce expressions with the correct factorization properties, the average will also have the appropriate residues on physical poles, with the added benefit of possessing the requisite crossing symmetry. The result of the averaging is $$\displaystyle\mathcal{M}_{4}^{\infty,++}=\frac{y_{++}^{4}}{2t_{13}t_{14}s_{34}}\sum_{j=0}^{\infty}\sum_{k=0}^{\infty}\frac{1}{j!k!}\left\{C_{S^{j}}C_{S^{k}}\left[\left(\hat{q}_{4}\cdot\mathfrak{a}\right)^{j}\left(\hat{q}_{3}\cdot\mathfrak{a}\right)^{k}+\left(\bar{\hat{q}}_{4}\cdot\mathfrak{a}\right)^{j}\left(\bar{\hat{q}}_{3}\cdot\mathfrak{a}\right)^{k}\right]\right.$$ (29) $$\displaystyle\left.+(C_{S^{j}}-C_{S^{j+1}})(C_{S^{k}}+C_{S^{k+1}})\frac{t_{14}-t_{13}}{2}\left[\frac{w\cdot\mathfrak{a}}{y}\left(\hat{q}_{4}\cdot\mathfrak{a}\right)^{j}\left(\hat{q}_{3}\cdot\mathfrak{a}\right)^{k}-\frac{\bar{w}\cdot\mathfrak{a}}{\bar{y}}\left(\bar{\hat{q}}_{3}\cdot\mathfrak{a}\right)^{j}\left(\bar{\hat{q}}_{4}\cdot\mathfrak{a}\right)^{k}\right]\right\}+\mathcal{B}^{\prime},$$ where $\mathcal{B}^{\prime}$ are the boundary terms needed to restore locality. The bar over a symbol represents complex conjugation, and we can see from eq.
18 that $\hat{q}_{4}\rightarrow\bar{\hat{q}}_{3}$ and $\hat{q}_{3}\rightarrow\bar{\hat{q}}_{4}$ under the swap $q_{3}\leftrightarrow q_{4}$. Moving on to the restoration of locality, we proceed by introducing non-local contact terms which cancel the poles in $y$ and $\bar{y}$. Both unphysical poles can be removed simultaneously by introducing contact terms with poles in $y\bar{y}=t_{13}t_{14}-m^{2}s_{34}$. The quantities $w\cdot\mathfrak{a}/y$ and $\bar{w}\cdot\mathfrak{a}/\bar{y}$ are expressible in terms of this product and $w_{++}\cdot\mathfrak{a}/y_{++}$ using Appendix B. The removal of unphysical poles in $y\bar{y}$ at quadratic order in spin is shown explicitly in Appendix C. Up to cubic order in spin, we find agreement with the helicity-reversing amplitude of ref. Saketh:2022wap , which we write covariantly and in a manifestly local form in eq. 60. At quartic order in spin, the same-helicity amplitude is $$\displaystyle\mathcal{M}_{4}^{\infty,++}|_{\mathfrak{a}^{4}}=\frac{y_{++}^{4}}{t_{13}t_{14}s_{34}}\left\{\frac{1}{4!}[(q_{3}+q_{4})\cdot\mathfrak{a}]^{4}\right.$$ (30) $$\displaystyle+\frac{C_{S^{2}}-1}{2!}\frac{s_{34}}{8}\left[24m^{2}t_{13}t_{14}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{4}+2m^{2}s_{34}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right.$$ $$\displaystyle\qquad\left.+2\mathfrak{a}^{2}(q_{3}\cdot\mathfrak{a})(q_{4}\cdot\mathfrak{a})+[(q_{3}+q_{4})\cdot\mathfrak{a}](t_{14}-t_{13})\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\left[\mathfrak{a}^{2}+8m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right]$$ $$\displaystyle+\frac{(C_{S^{2}}-1)^{2}}{2!2!}\frac{s_{34}}{8}\left[2\mathfrak{a}^{2}(q_{3}\cdot\mathfrak{a})(q_{4}\cdot\mathfrak{a})+[(q_{3}+q_{4})\cdot\mathfrak{a}](t_{14}-t_{13})\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\left[\mathfrak{a}^{2}+8m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right.$$
$$\displaystyle\qquad\left.-2\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\left[(2t_{13}t_{14}-m^{2}s_{34})\mathfrak{a}^{2}-4m^{2}(3t_{13}t_{14}+m^{2}s_{34})\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right]$$ $$\displaystyle+(C_{S^{2}}-1)(C_{S^{3}}-1)\frac{s_{34}}{16}[(q_{3}+q_{4})\cdot\mathfrak{a}](t_{14}-t_{13})\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]$$ $$\displaystyle+\frac{C_{S^{3}}-1}{3!}\frac{s_{34}}{4}\left[-24m^{2}t_{13}t_{14}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{4}\right.$$ $$\displaystyle\qquad\left.+\left[(q_{3}\cdot\mathfrak{a})^{2}+(q_{4}\cdot\mathfrak{a})^{2}+2[(q_{3}+q_{4})\cdot\mathfrak{a}](t_{14}-t_{13})\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right]\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right]$$ $$\displaystyle+\frac{C_{S^{4}}-1}{4!}\left[[(q_{3}+q_{4})\cdot\mathfrak{a}]^{4}+6m^{2}s_{34}t_{13}t_{14}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{4}-\frac{9s_{34}^{2}}{2}m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right.$$ $$\displaystyle\qquad-s_{34}[(q_{3}\cdot\mathfrak{a})^{2}+(q_{4}\cdot\mathfrak{a})^{2}]\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]-\frac{3s_{34}}{2}(q_{3}\cdot\mathfrak{a})(q_{4}\cdot\mathfrak{a})\left[\mathfrak{a}^{2}-8m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]$$ $$\displaystyle\qquad\left.\left.-\frac{s_{34}}{4}[(q_{3}+q_{4})\cdot\mathfrak{a}](t_{14}-t_{13})\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\left[5\mathfrak{a}^{2}+56m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right]\right\}.$$ Thanks to the overall $y_{++}^{4}$, this expression is also manifestly local. 
The symmetry of the amplitude under $q_{3}\leftrightarrow q_{4}$ appears to be broken by the factors of $t_{14}-t_{13}$, but this is not the case: under this exchange, $y_{++}$ is antisymmetric while $w_{++}\cdot\mathfrak{a}$ is symmetric thanks to the spin-supplementary condition $p\cdot\mathfrak{a}=0$, so the combination $(t_{14}-t_{13})w_{++}\cdot\mathfrak{a}/y_{++}$ is itself crossing-symmetric. Equation 30 agrees with the amplitude derived from the action of ref. Bern:2022kto , up to contact terms.131313Again, we thank Andres Luna and Fei Teng for sharing unpublished results. Since $y_{++}\sim m$, we can see from eq. 30 (as well as from eq. 60) that the black-hole amplitude scales as $m^{4}$ in the limit $m\rightarrow 0$. Above linear order in spin, the scaling $w_{++}^{\mu}\sim m^{-1}$ dulls this behavior to $m^{2,0,-2}$, at $\mathcal{O}(\mathfrak{a}^{2,3,4})$ respectively. The best behavior in the high-energy/massless limit thus emerges in the black-hole case, and at high enough spin orders the black-hole limit is required for the existence of a non-divergent massless limit. This is reminiscent of the notion of minimal coupling of ref. Arkani-Hamed:2017jhn . Above fourth order in spin, further non-localities develop in inverse powers of $y_{++}$, analogously to the opposite-helicity case. The vanishing Gram determinant for the five four-vectors $\mathfrak{a}^{\mu},\,p_{1}^{\mu},\,q_{3}^{\mu},\,q_{4}^{\mu},\,w^{\mu}_{++}$, which reads $$\displaystyle 4m^{4}s_{34}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}=-2m^{2}(t_{14}-t_{13})[(q_{3}+q_{4})\cdot\mathfrak{a}]\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}-m^{2}\mathfrak{s}_{2}+\mathfrak{a}^{2}t_{13}t_{14},$$ (31) allows us to trade terms with more inverse powers of $y_{++}$ for terms with fewer such powers. Adding this Gram determinant to our arsenal, we have removed all poles in $y_{++}$ from the same-helicity Compton amplitude at fifth order in spin.
We find again that the black-hole amplitude exhibits a finite massless limit at this spin order, with the generic case scaling as $m^{-2}$ when $m\rightarrow 0$. For brevity, we have relegated the amplitude at fifth order in spin to the ancillary Mathematica notebook SameHelicitySpinFourFive.nb. Equation 30 is also included in this notebook for convenience. Through the evaluation of the classical Compton amplitude for general spinning objects, we have seen in this section that the construction of classical amplitudes using BCFW recursion generally requires that one keep track of subleading parts of intermediate expressions, including quantum pieces of lower-point amplitudes. Doing so, we have produced for the first time an opposite-helicity Compton amplitude which describes all spin multipoles of a general compact object, which also matches classical computations at low spins. In the same-helicity case, we presented a non-local, but crossing-symmetric, form of the amplitude to all spins, and cured non-localities up to fifth order in spin. These amplitudes are not unique, however, as they can be deformed by contact terms. For the sake of completeness, let us discuss this now. 4 Contact terms We can write the amplitudes most generally as $$\displaystyle\mathcal{M}^{\infty,h_{1}h_{2}}_{4}+m^{2}\left(\mathcal{C}^{h_{1}h_{2}}_{\text{even}}+\mathcal{C}^{h_{1}h_{2}}_{\text{odd}}+\mathcal{D}^{h_{1}h_{2}}_{\text{even}}+\mathcal{D}^{h_{1}h_{2}}_{\text{odd}}\right),$$ (32) where $\mathcal{C}^{h_{1}h_{2}}_{\text{even}}$ and $\mathcal{C}^{h_{1}h_{2}}_{\text{odd}}$ are (sums of) conservative contact terms at even and odd spin orders respectively, and $\mathcal{D}^{h_{1}h_{2}}_{\text{even}}$ and $\mathcal{D}^{h_{1}h_{2}}_{\text{odd}}$ account for dissipative effects. We will explain the distinction between the two below.
At the risk of pedantry, let us clarify that a contact term is the product of a coefficient, potentially dependent on the scales in the scattering, and a spin structure, which is a pole-free function of the momenta and a monomial in the spin vector. Most generally, each spin structure is accompanied by its own coefficient. In the case of Compton scattering, these coefficients are related – non-trivially – to the coefficients of curvature-squared operators in a worldline action, operators which describe tidal and spin-induced multipolar effects at quadratic order in curvature. See ref. Levi:2022rrq for a definition of tidal versus spin-induced multipolar operators in a worldline theory. An infinite number of contact terms can deform the Compton amplitudes at each spin order, all of which were classified at zeroth and linear order in spin in refs. Haddad:2020que ; Aoude:2020ygw . However, only a finite number of them contribute at a fixed spin order up to a given order in Newton’s constant. Here we count and write down all contact-term deformations contributing to classical gravitational scattering at the same orders as the $C_{S^{k}}$ – that is to say, at $\mathcal{O}(G^{2}\mathfrak{a}^{k}/b^{k+1})$, where $b$ is the impact parameter. (When finite-size effects are allowed, 2PM contributions scale more generally as $\mathcal{O}(G^{2}R^{j}\mathfrak{a}^{k}/b^{j+k+1})$. This simplifies to $\mathcal{O}(G^{2+j}\mathfrak{a}^{k}/b^{j+k+1})$ for black holes, so the set of contact terms we consider here can also be thought of as all contact terms potentially relevant to black-hole scattering at $2$PM.) This is equivalent to requiring that the coefficients of the contact terms do not have any $\hbar$ dependence. We will illustrate this in more detail now. 4.1 Relevant scales We are working in a context where we have restored factors of $\hbar$ but left $c=1$. 
For a general object we therefore have four relevant scales: Planck’s constant $\hbar$, Newton’s constant $G$, the mass of the object $m$, and the scale of the object’s spatial extent $R$. For a black hole there is one less scale, since the spatial extent is identified with the Schwarzschild radius, which is related to the other scales through $R_{s}=2Gm$. With $\hbar$ restored, these scales have the dimensions $$\displaystyle[R]=[L],\quad[m]=[M],\quad[\hbar]=[L][M],\quad[G]=\frac{[L]}{[M]},$$ where $[L]$ represents dimensions of length and $[M]$ dimensions of mass/momentum/energy. The coupling-stripped amplitude has dimensions $[M]^{2}$, and is $\mathcal{O}(\hbar^{0})$ in the classical limit. These dictate the possible scalings of coefficients for classically-relevant contact terms. We now argue that these properties of the amplitude, combined with the available scales, imply that the contact terms contributing at $\mathcal{O}(G^{2}\mathfrak{a}^{k}/b^{k+1})$ are those with Wilson coefficients that do not scale with $\hbar$. Terms in the 2PM scattering angle which scale as $\mathcal{O}(G^{2}\mathfrak{a}^{k}/b^{k+1})$ in impact-parameter space come from terms of the schematic form $G^{2}q^{k}\mathfrak{a}^{k}/\sqrt{-q^{2}}$ in momentum space, where $q$ is the transfer momentum. This can be seen to all spin orders in the results of ref. Aoude:2022thd . The square-root comes from triangle integrals, while the $q^{k}\mathfrak{a}^{k}\sim\mathcal{O}(\hbar^{0})$ come from $\mathcal{O}(\hbar^{0})$ parts of the Compton amplitude. If we consider contact terms in the Compton amplitude with spin structures that scale with some positive power of $\hbar$, the corresponding coefficient must carry a compensating number of inverse factors of $\hbar$ in order for the contact term to scale classically, as was seen in refs. Haddad:2020que ; Aoude:2020ygw . Then, to maintain the correct mass dimensions, the coefficients must also scale with additional powers of $R$ or $Gm$. 
This translates to terms of the form $G^{2+n}R^{l}q^{k+j}\mathfrak{a}^{k}/\sqrt{-q^{2}}$ in the one-loop amplitude, where $n+l=j$ is the number of inverse factors of $\hbar$ needed in the contact-term coefficient. Moving to impact-parameter space, these produce terms scaling as $\mathcal{O}(G^{2+n}R^{l}\mathfrak{a}^{k}/b^{j+k+1})$. Thus $\mathcal{O}(G^{2}\mathfrak{a}^{k}/b^{k+1})$ effects are produced by contact terms in the Compton amplitude which have Wilson coefficients that don’t depend on $\hbar$ – or equivalently, spin structures which are $\mathcal{O}(\hbar^{0})$. These are the contact terms we will construct in the following. For spinning objects there is actually an additional scale: the magnitude of the ring radius itself, $|\mathfrak{a}|\equiv\sqrt{\mathfrak{a}^{2}}$. Allowing the coefficients of their contact terms to depend on the magnitude of the ring radius, the authors of ref. Bautista:2022wjf were able to exactly match the opposite-helicity Compton amplitude to solutions of the Teukolsky equation up to sixth order in spin. This scale only appeared in their coefficients in the dimensionless combination $\hbar|\mathfrak{a}|/Gm$ (the scaling of this combination with $\hbar$ is superficial, since in the $\hbar\rightarrow 0$ limit the combination $\hbar|\mathfrak{a}|$ is held constant Maybee:2019jus ), which does not affect our enumeration of contact terms below since there are no inverse factors of $\hbar$ in this combination that allow us to consider more general spin structures. The same is true of the other dimensionless combination $\hbar|\mathfrak{a}|/R$, which can appear in the neutron-star case. More generally, allowing $|\mathfrak{a}|$ to appear independently as a scale in the coefficients is already accounted for by the dissipative contact terms below. 4.2 Opposite-helicity contact terms We would like to explicitly construct all independent contact terms with Wilson coefficients that do not scale with $\hbar$. 
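Before constructing the contact terms, the dimensional bookkeeping of the previous subsection can be spot-checked mechanically. The sketch below (plain Python; the helper `dim` and the dimension assignments are ours, not the paper’s) tracks exponents of $[L]$ and $[M]$ for the basic scales, taking $[\mathfrak{a}]=[M]^{-1}$ so that $q\cdot\mathfrak{a}$ is dimensionless and $\hbar|\mathfrak{a}|$ is the physical ring-radius length. It confirms that $\hbar|\mathfrak{a}|/Gm$ and $\hbar|\mathfrak{a}|/R$ are dimensionless, while the alternative compensating factors $R/\hbar$, $Gm/\hbar$ and $G/R$ all carry $[M]^{-1}$.

```python
def dim(**powers):
    """Return (L, M) exponents for a product of powers of the basic scales.

    Assumed assignments: [hbar] = [L][M], [G] = [L]/[M], [R] = [L],
    [m] = [M], and [a] = 1/[M] (the ring radius as it appears in q.a).
    """
    basis = {"hbar": (1, 1), "G": (1, -1), "R": (1, 0), "m": (0, 1), "a": (0, -1)}
    L = sum(basis[name][0] * p for name, p in powers.items())
    M = sum(basis[name][1] * p for name, p in powers.items())
    return (L, M)

# The two combinations in which |a| appears in the coefficients are dimensionless:
assert dim(hbar=1, a=1, G=-1, m=-1) == (0, 0)   # hbar |a| / (G m)
assert dim(hbar=1, a=1, R=-1) == (0, 0)         # hbar |a| / R

# The alternative mass-dimension-compensating factors each carry [M]^{-1}:
assert dim(R=1, hbar=-1) == (0, -1)             # R / hbar
assert dim(G=1, m=1, hbar=-1) == (0, -1)        # G m / hbar
assert dim(G=1, R=-1) == (0, -1)                # G / R
```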
Redundancies between contact terms may arise due to the fact that the Gram determinant vanishes for the five four-vectors $\mathfrak{a}^{\mu},\,p_{1}^{\mu},\,q_{3}^{\mu},\,q_{4}^{\mu},\,w^{\mu}$ in four spacetime dimensions. Employing the vanishing of the Gram determinant, $$\displaystyle(t_{14}-t_{13})^{2}(w\cdot\mathfrak{a})^{2}=-4m^{2}s_{34}(w\cdot\mathfrak{a})^{2}+2y(t_{14}-t_{13})\mathfrak{s}_{1}(w\cdot\mathfrak{a})-y^{2}\mathfrak{s}_{2},$$ (33) these redundancies can be avoided by excluding any contact term containing the left-hand side of eq. 33 as a subfactor. Now, in order to carry the correct helicity weight, all contact terms must contain exactly four factors of the helicity vector $w^{\mu}$. Since $w^{\mu}$ is orthogonal to both $q_{3}^{\mu}$ and $q_{4}^{\mu}$, it can only be contracted with $p_{1}^{\mu}$ and $\mathfrak{a}^{\mu}$. All contact terms must therefore contain a factor of the form $$\displaystyle y^{n}(w\cdot\mathfrak{a})^{4-n},\quad 0\leq n\leq 4.$$ (34) At each $n$ a core factor can be identified that is $\mathcal{O}(\hbar^{0})$, out of which all contact terms of interest to us can be constructed by multiplying by the following $\mathcal{O}(\hbar^{0})$ factors: $$\displaystyle q_{3}\cdot\mathfrak{a},\quad q_{4}\cdot\mathfrak{a},\quad s_{34}\mathfrak{a}^{2},\quad(t_{14}-t_{13})^{2}\mathfrak{a}^{2}.$$ (35) (One could expand this list by including dressing factors with apparent singularities in $s_{34}$ but whose residues at $s_{34}=0$ actually vanish, as was done in ref. Bautista:2022wjf . However, such terms are redundant in our case as we’ve instead allowed for factors of $\mathfrak{a}^{2}$ to appear in contact terms. This amounts to a different choice of basis on account of eq. 33, so we must agree on the total number of free coefficients.) For $n\leq 2$, the last term in this list need not be considered because of eq. 33. 
At each $n$ one can construct two core factors: one involving a single factor of $|\mathfrak{a}|\equiv\sqrt{\mathfrak{a}^{2}}$, and one without. Contact terms containing such a factor were argued in ref. Bautista:2022wjf to encode dissipative effects, because their coefficients depend on the boundary conditions at the black hole horizon chosen for solving the Teukolsky equation. We begin by focusing on conservative contact terms, and subsequently consider dissipative ones. 4.2.1 Conservative contact terms Counting all possible on-shell contact terms is made easier by considering each value of $n$ individually in eq. 34. Let us illustrate this counting for $n=4$ and $n=3$. $\boldsymbol{n=4}:$ The core factor scaling as $\mathcal{O}(\hbar^{0})$ and carrying the correct helicity weights is $y^{4}\mathfrak{a}^{4}$. There are no redundancies because of eq. 33 in this case, so we can dress this with any of the factors in eq. 35. To reach $\mathcal{O}(\mathfrak{a}^{2k\geq 4})$ we must dress the core structure with $2k-4$ powers of spin, with $i$ dressing factors being quadratic in spin. Of these $i$ we take $l$ factors to be $(t_{14}-t_{13})^{2}\mathfrak{a}^{2}$, and the remaining $i-l$ to be $s_{34}\mathfrak{a}^{2}$. We are then left with $2k-4-2i$ linear-in-spin dressing factors, of which $j$ are, say, $q_{3}\cdot\mathfrak{a}$. The total number of even-in-spin contact terms with $n=4$ is then given by the triple sum $$\displaystyle\sum_{i=0}^{k-2}\sum_{l=0}^{i}\sum_{j=0}^{2k-4-2i}1=\frac{1}{6}k(k-1)(2k-1).$$ (36) The same logic for $\mathcal{O}(\mathfrak{a}^{2k+1\geq 5})$ gives $$\displaystyle\sum_{i=0}^{k-2}\sum_{l=0}^{i}\sum_{j=0}^{2k-3-2i}1=\frac{1}{3}k(k^{2}-1)$$ (37) total contact terms. $\boldsymbol{n=3}:$ The core factor scaling as $\mathcal{O}(\hbar^{0})$ and carrying the correct helicity weights is $(t_{14}-t_{13})y^{3}\mathfrak{a}^{4}(w\cdot\mathfrak{a})$. Again, we cannot have redundancies due to the vanishing Gram determinant in this case. 
Since the core factor already has five spin powers, contact terms for $n=3$ only arise at even-in-spin orders from $\mathcal{O}(\mathfrak{a}^{2k\geq 6})$. This changes the upper bounds on the sums over $i$ and $j$ in eq. 36, since now quadratic-in-spin dressings can only begin to appear for $k\geq 4$ and since five instead of four powers of spin are accounted for in the core factor. Making these modifications, the total number of even-in-spin contact terms for $n=3$ is $$\displaystyle\sum_{i=0}^{k-3}\sum_{l=0}^{i}\sum_{j=0}^{2k-5-2i}1=\frac{1}{3}k(k-1)(k-2).$$ (38) For $\mathcal{O}(\mathfrak{a}^{2k+1\geq 5})$ we have $$\displaystyle\sum_{i=0}^{k-2}\sum_{l=0}^{i}\sum_{j=0}^{2k-4-2i}1=\frac{1}{6}k(k-1)(2k-1)$$ (39) total contact terms. The core factors for the remaining values of $n$ are $$\displaystyle n=2:\quad y^{2}\mathfrak{a}^{2}(w\cdot\mathfrak{a})^{2},$$ $$\displaystyle n=1:\quad y(t_{14}-t_{13})\mathfrak{a}^{2}(w\cdot\mathfrak{a})^{3},$$ $$\displaystyle n=0:\quad(w\cdot\mathfrak{a})^{4}.$$ In each of these cases we must not dress the core factors with $(t_{14}-t_{13})^{2}\mathfrak{a}^{2}$, as such dressings are reducible using eq. 33. Consequently, the counting of contact terms for $n=2,0$ is the same as for the $n=4$ case but with $l$ fixed to zero. Similarly, the counting for $n=1$ is given by the counting for $n=3$ with the sum over $l$ dropped. The result of the counting of independent coefficients is given in tables 1 and 2. 
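The closed forms in eqs. 36–39 are simple enough to verify by brute force. The snippet below (plain Python; `triple_sum` is our own helper, not from the paper) enumerates each triple sum with the index ranges stated above and compares against the quoted polynomials.

```python
def triple_sum(i_max, j_top):
    """Evaluate Sum_{i=0}^{i_max} Sum_{l=0}^{i} Sum_{j=0}^{j_top(i)} 1."""
    return sum(1 for i in range(i_max + 1)
                 for l in range(i + 1)
                 for j in range(j_top(i) + 1))

for k in range(2, 40):
    # eq. 36: n = 4 at even spin orders O(a^{2k >= 4})
    assert triple_sum(k - 2, lambda i: 2*k - 4 - 2*i) == k*(k - 1)*(2*k - 1)//6
    # eq. 37: n = 4 at odd spin orders O(a^{2k+1 >= 5})
    assert triple_sum(k - 2, lambda i: 2*k - 3 - 2*i) == k*(k*k - 1)//3

for k in range(3, 40):
    # eq. 38: n = 3 at even spin orders O(a^{2k >= 6})
    assert triple_sum(k - 3, lambda i: 2*k - 5 - 2*i) == k*(k - 1)*(k - 2)//3
    # eq. 39: n = 3 at odd spin orders (the same count as eq. 36)
    assert triple_sum(k - 2, lambda i: 2*k - 4 - 2*i) == k*(k - 1)*(2*k - 1)//6
```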
The most-general, conservative, contact-term deformation of the opposite-helicity amplitude relevant at $\mathcal{O}(G^{2}\mathfrak{a}^{2k}/b^{2k+1})$ is $$\displaystyle\mathcal{C}_{\text{even}}^{-+}=\sum_{k=0}^{\infty}\left\{\sum_{i=0}^{k-2}\sum_{j=0}^{2k-4-2i}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-4-2i-j}\right.$$ $$\displaystyle\times\left[y^{4}\mathfrak{a}^{4}\sum_{l=0}^{i}a^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+(w\cdot\mathfrak{a})^{2}\left[y^{2}\mathfrak{a}^{2}c^{-+,k}_{i,j}+(w\cdot\mathfrak{a})^{2}e^{-+,k}_{i,j}\right]\right]$$ $$\displaystyle+(t_{14}-t_{13})y\mathfrak{a}^{2}(w\cdot\mathfrak{a})\sum_{i=0}^{k-3}\sum_{j=0}^{2k-5-2i}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-5-2i-j}$$ $$\displaystyle\left.\times\left[y^{2}\mathfrak{a}^{2}\sum_{l=0}^{i}b^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+d^{-+,k}_{i,j}(w\cdot\mathfrak{a})^{2}\right]\right\},$$ (40) where the coefficients $a,\,b,\,c,\,d,\,e$ represent contact terms with $n=4,3,2,1,0$ respectively in eq. 34. 
At $\mathcal{O}(G^{2}\mathfrak{a}^{2k+1}/b^{2k+2})$ the most-general set of conservative contact terms is $$\displaystyle\mathcal{C}_{\text{odd}}^{-+}=\sum_{k=0}^{\infty}\left\{\sum_{i=0}^{k-2}\sum_{j=0}^{2k-3-2i}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-3-2i-j}\right.$$ $$\displaystyle\times\left[y^{4}\mathfrak{a}^{4}\sum_{l=0}^{i}\tilde{a}^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+(w\cdot\mathfrak{a})^{2}\left[y^{2}\mathfrak{a}^{2}\tilde{c}^{-+,k}_{i,j}+(w\cdot\mathfrak{a})^{2}\tilde{e}^{-+,k}_{i,j}\right]\right]$$ $$\displaystyle+(t_{14}-t_{13})y\mathfrak{a}^{2}(w\cdot\mathfrak{a})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-4-2i}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-4-2i-j}$$ $$\displaystyle\left.\times\left[y^{2}\mathfrak{a}^{2}\sum_{l=0}^{i}\tilde{b}^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+\tilde{d}^{-+,k}_{i,j}(w\cdot\mathfrak{a})^{2}\right]\right\}.$$ (41) All spin structures in the above are $\mathcal{O}(\hbar^{0})$, so the coefficients for classical contributions must be $\hbar$-free. However, spin structures containing subfactors of $(t_{14}-t_{13})^{2}\mathfrak{a}^{2}$ have mass dimensions which must be compensated by their coefficients. Specifically, $$\displaystyle[a^{-+,k}_{i,l,j}]=[\tilde{a}^{-+,k}_{i,l,j}]=[b^{-+,k}_{i,l,j}]=[\tilde{b}^{-+,k}_{i,l,j}]=[M]^{-4-2l},$$ $$\displaystyle[c^{-+,k}_{i,j}]=[\tilde{c}^{-+,k}_{i,j}]=[d^{-+,k}_{i,j}]=[\tilde{d}^{-+,k}_{i,j}]=[M]^{-2}.$$ To achieve this without altering the $\hbar$ scaling of the contact terms, we set the following (other possible combinations of the relevant scales that can produce the requisite mass dimensions are powers of $R/\hbar$, $Gm/\hbar$, or $G/R$; the first two are not classically relevant, while the last is only relevant for neutron stars past 2PM): 
$$\displaystyle\{a^{-+,k}_{i,l,j},\tilde{a}^{-+,k}_{i,l,j},b^{-+,k}_{i,l,j},\tilde{b}^{-+,k}_{i,l,j}\}\propto m^{-4-2l},$$ $$\displaystyle\{c^{-+,k}_{i,j},\tilde{c}^{-+,k}_{i,j},d^{-+,k}_{i,j},\tilde{d}^{-+,k}_{i,j}\}\propto m^{-2}.$$ All other coefficients must be either scaleless or depend on (products of) the factors $\hbar|\mathfrak{a}|/Gm$ and $\hbar|\mathfrak{a}|/R$. 4.2.2 Dissipative contact terms To count dissipative contact terms we slightly modify the core factors so that they carry the correct helicity weights, scale as $\mathcal{O}(\hbar^{0})$, and also possess one factor of $|\mathfrak{a}|$ Bautista:2022wjf : $$\displaystyle n=4:\quad y^{4}\mathfrak{a}^{4}(t_{14}-t_{13})|\mathfrak{a}|,$$ $$\displaystyle n=3:\quad y^{3}\mathfrak{a}^{2}(w\cdot\mathfrak{a})|\mathfrak{a}|,$$ $$\displaystyle n=2:\quad y^{2}\mathfrak{a}^{2}(w\cdot\mathfrak{a})^{2}(t_{14}-t_{13})|\mathfrak{a}|,$$ $$\displaystyle n=1:\quad y(w\cdot\mathfrak{a})^{3}|\mathfrak{a}|,$$ $$\displaystyle n=0:\quad(w\cdot\mathfrak{a})^{4}(t_{14}-t_{13})|\mathfrak{a}|.$$ All contact terms can again be generated by dressing these core factors with the dressing factors in eq. 35. At $\mathcal{O}(\mathfrak{a}^{4,5})$ ref. Bautista:2022wjf only included dissipative contact terms with factors of $(t_{14}-t_{13})|\mathfrak{a}|$ in their ansatz, while here we also include terms of the form $y|\mathfrak{a}|$. In ref. Bautista:2022wjf the latter were found not to be needed to match to solutions of the Teukolsky equation. So, to compare our counting of dissipative contact terms to that in ref. Bautista:2022wjf at these spin orders, we must ignore those emerging from the $n=1,3$ core factors. At $\mathcal{O}(\mathfrak{a}^{6})$ ref. Bautista:2022wjf included some terms with these core factors (specifically, the contact terms there with the $c_{10}^{(i)}$ coefficients can be rewritten using eq. 33 to involve terms with the $n=1,3$ core factors; we thank Yilber Fabian Bautista for discussions about this). Counting in an identical fashion to the conservative case, we find the number of dissipative contact terms in tables 3 and 4 for even and odd spin powers respectively. The most-general set of dissipative contact terms relevant at $\mathcal{O}(G^{2}\mathfrak{a}^{2k}/b^{2k+1})$ is $$\displaystyle\mathcal{D}^{-+}_{\text{even}}=|\mathfrak{a}|\sum_{k=0}^{\infty}\left\{(t_{14}-t_{13})\sum_{i=0}^{k-3}\sum_{j=0}^{2k-2i-5}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-5-j}\right.$$ $$\displaystyle\times\left[y^{4}\mathfrak{a}^{4}\sum_{l=0}^{i}f^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+(w\cdot\mathfrak{a})^{2}\left[p^{-+,k}_{i,j}y^{2}\mathfrak{a}^{2}+r^{-+,k}_{i,j}(w\cdot\mathfrak{a})^{2}\right]\right]$$ $$\displaystyle+y(w\cdot\mathfrak{a})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-2i-4}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-4-j}$$ $$\displaystyle\left.\times\left[y^{2}\mathfrak{a}^{2}\sum_{l=0}^{i}g^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+q^{-+,k}_{i,j}(w\cdot\mathfrak{a})^{2}\right]\right\}.$$ (42) The coefficients labelled $f,\,g,\,p,\,q,\,r$ correspond to the $n=4,3,2,1,0$ core factors, respectively. 
The dissipative contact terms needed at $\mathcal{O}(G^{2}\mathfrak{a}^{2k+1}/b^{2k+2})$ are $$\displaystyle\mathcal{D}^{-+}_{\text{odd}}=|\mathfrak{a}|\sum_{k=0}^{\infty}\left\{(t_{14}-t_{13})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-2i-4}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-4-j}\right.$$ $$\displaystyle\times\left[y^{4}\mathfrak{a}^{4}\sum_{l=0}^{i}\tilde{f}^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+(w\cdot\mathfrak{a})^{2}\left[\tilde{p}^{-+,k}_{i,j}y^{2}\mathfrak{a}^{2}+\tilde{r}^{-+,k}_{i,j}(w\cdot\mathfrak{a})^{2}\right]\right]$$ $$\displaystyle+y(w\cdot\mathfrak{a})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-2i-3}(s_{34}\mathfrak{a}^{2})^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-3-j}$$ $$\displaystyle\left.\times\left[y^{2}\mathfrak{a}^{2}\sum_{l=0}^{i}\tilde{g}^{-+,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{-l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{l}+\tilde{q}^{-+,k}_{i,j}(w\cdot\mathfrak{a})^{2}\right]\right\}.$$ (43) As in the conservative case, the condition that the amplitude has mass dimension 2 and scales as $\mathcal{O}(\hbar^{0})$ in the classical limit imposes certain scalings on the parameters. In this case, we must have $$\displaystyle\{f^{-+,k}_{i,l,j},\tilde{f}^{-+,k}_{i,l,j}\}\propto m^{-5-2l},$$ $$\displaystyle\{g^{-+,k}_{i,l,j},\tilde{g}^{-+,k}_{i,l,j}\}\propto m^{-3-2l},$$ $$\displaystyle\{p^{-+,k}_{i,j},\tilde{p}^{-+,k}_{i,j}\}\propto m^{-3},$$ $$\displaystyle\{q^{-+,k}_{i,j},\tilde{q}^{-+,k}_{i,j},r^{-+,k}_{i,j},\tilde{r}^{-+,k}_{i,j}\}\propto m^{-1}.$$ Imposing crossing symmetry on the scattering renders nearly half of the parameters redundant. 
Both opposite-helicity configurations are related under crossing through $$\displaystyle\mathcal{M}_{4}^{+-}=\mathcal{M}_{4}^{-+}|_{q_{3}\leftrightarrow q_{4}}=\bar{\mathcal{M}}_{4}^{-+}|_{\mathfrak{a}\rightarrow-\mathfrak{a}}.$$ (44) Thus the coefficients of contact terms with the subfactor $(q_{3}\cdot\mathfrak{a})^{i}(q_{4}\cdot\mathfrak{a})^{j}$ are related to those of the analogous contact terms with $i$ and $j$ flipped. For example, two such related coefficients at $\mathcal{O}(\mathfrak{a}^{5})$ are $$\displaystyle\tilde{e}^{-+,2}_{0,0}(w\cdot\mathfrak{a})^{4}q_{4}\cdot\mathfrak{a},\quad\tilde{e}^{-+,2}_{0,1}(w\cdot\mathfrak{a})^{4}q_{3}\cdot\mathfrak{a},$$ and eq. 44 imposes $\tilde{e}^{-+,2}_{0,0}=-\tilde{e}^{-+,2}_{0,1}$. The numbers of independent coefficients consistent with crossing symmetry are also shown in tables 1, 2, 3 and 4. The number of crossing-symmetric contact terms agrees with ref. Bautista:2022wjf for the conservative sector, while we have additional terms in the dissipative sector which they found to be unnecessary for matching to solutions of the Teukolsky equation. 4.3 Same-helicity contact terms Moving on to same-helicity scattering, the analysis is nearly identical to the opposite-helicity scenario, with the primary difference being that we now work with the helicity vector $w_{++}^{\mu}$ defined above instead of $w^{\mu}$. Redundancies must again be accounted for because of the vanishing of the Gram determinant eq. 31. It is thus sufficient to construct contact terms that do not include the left-hand side of eq. 31 as a subfactor. Analogously to the opposite-helicity case, each contact term must contain four powers of the helicity vector $w_{++}^{\mu}$ in order to transform appropriately under the little groups of the external massless particles. In this case, the helicity vector is orthogonal to $q_{3}^{\mu}$ but not to $q_{4}^{\mu}$. 
Nevertheless, it is easy to show that $q_{4}\cdot w_{++}=-\frac{t_{14}}{2m^{2}}y_{++}$, so this contraction may be ignored so long as we account for $y_{++}$. Therefore, each contact term must contain a factor of $$\displaystyle y_{++}^{n}(w_{++}\cdot\mathfrak{a})^{4-n},\quad 0\leq n\leq 4.$$ (45) Another option exists for the helicity vector: $\tilde{w}_{++}^{\mu}=[4|\gamma^{\mu}p_{1}|3]/2m$. We can work with $w_{++}^{\mu}$ exclusively since the two are related by $\tilde{w}_{++}^{\mu}=\frac{p_{1}^{\mu}}{m^{2}}y_{++}-w_{++}^{\mu}$, and hence any contraction with $\tilde{w}_{++}^{\mu}$ is already accounted for in terms of contractions with $w_{++}^{\mu}$. At a fixed $n$ the core factors for both conservative and dissipative contact terms are identical to the opposite-helicity core factors, but with $\{y,w^{\mu}\}\rightarrow\{y_{++},w_{++}^{\mu}\}$. Contact terms are then made by dressing the core factors with the factors in eq. 35. Now, however, eq. 31 tells us that we must not use the third of these dressing factors for $n\leq 2$. All in all, carrying out the counting as above shows that there are the same numbers of conservative and dissipative contact terms at each fixed $n$ in both the general and crossing-symmetric sectors as for opposite-helicity scattering; see tables 1, 2, 3 and 4. The forms of the contact terms are slightly different, however, because of the differing Gram determinants between the two helicity configurations. 
4.3.1 Conservative contact terms The most-general, conservative, contact-term deformation of the same-helicity amplitude relevant at $\mathcal{O}(G^{2}\mathfrak{a}^{2k}/b^{2k+1})$ is $$\displaystyle\mathcal{C}_{\text{even}}^{++}=\sum_{k=0}^{\infty}\left\{\sum_{i=0}^{k-2}\sum_{j=0}^{2k-4-2i}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-4-2i-j}\right.$$ $$\displaystyle\times\left[y_{++}^{4}\mathfrak{a}^{4}\sum_{l=0}^{i}a^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+(w_{++}\cdot\mathfrak{a})^{2}\left[y_{++}^{2}\mathfrak{a}^{2}c^{++,k}_{i,j}+(w_{++}\cdot\mathfrak{a})^{2}e^{++,k}_{i,j}\right]\right]$$ $$\displaystyle+(t_{14}-t_{13})y_{++}\mathfrak{a}^{2}(w_{++}\cdot\mathfrak{a})\sum_{i=0}^{k-3}\sum_{j=0}^{2k-5-2i}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-5-2i-j}$$ $$\displaystyle\left.\times\left[y^{2}_{++}\mathfrak{a}^{2}\sum_{l=0}^{i}b^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+d^{++,k}_{i,j}(w_{++}\cdot\mathfrak{a})^{2}\right]\right\}.$$ (46) At $\mathcal{O}(G^{2}\mathfrak{a}^{2k+1}/b^{2k+2})$ the set is $$\displaystyle\mathcal{C}^{++}_{\text{odd}}=\sum_{k=0}^{\infty}\left\{\sum_{i=0}^{k-2}\sum_{j=0}^{2k-3-2i}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-3-2i-j}\right.$$ $$\displaystyle\times\left[y_{++}^{4}\mathfrak{a}^{4}\sum_{l=0}^{i}\tilde{a}^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+(w_{++}\cdot\mathfrak{a})^{2}\left[y_{++}^{2}\mathfrak{a}^{2}\tilde{c}^{++,k}_{i,j}+(w_{++}\cdot\mathfrak{a})^{2}\tilde{e}^{++,k}_{i,j}\right]\right]$$ $$\displaystyle+(t_{14}-t_{13})y_{++}\mathfrak{a}^{2}(w_{++}\cdot\mathfrak{a})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-4-2i}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-4-2i-j}$$ 
$$\displaystyle\left.\times\left[y^{2}_{++}\mathfrak{a}^{2}\sum_{l=0}^{i}\tilde{b}^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+\tilde{d}^{++,k}_{i,j}(w_{++}\cdot\mathfrak{a})^{2}\right]\right\}.$$ (47) As above, contributions with a classical $\hbar$ scaling require $$\displaystyle\{a^{++,k}_{i,l,j},\tilde{a}^{++,k}_{i,l,j},b^{++,k}_{i,l,j},\tilde{b}^{++,k}_{i,l,j}\}\propto m^{-4-2(i-l)},$$ $$\displaystyle\{c^{++,k}_{i,j},\tilde{c}^{++,k}_{i,j},d^{++,k}_{i,j},\tilde{d}^{++,k}_{i,j}\}\propto m^{-2-2i},$$ $$\displaystyle\{e^{++,k}_{i,j},\tilde{e}^{++,k}_{i,j}\}\propto m^{-2i}.$$ Unlike in the conservative sector of the opposite-helicity amplitude, here all parameters must scale with some power of the mass unless $i=0$. 4.3.2 Dissipative contact terms The most-general set of dissipative contact terms relevant at $\mathcal{O}(G^{2}\mathfrak{a}^{2k}/b^{2k+1})$ is $$\displaystyle\mathcal{D}^{++}_{\text{even}}=|\mathfrak{a}|\sum_{k=0}^{\infty}\left\{(t_{14}-t_{13})\sum_{i=0}^{k-3}\sum_{j=0}^{2k-2i-5}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-5-j}\right.$$ $$\displaystyle\times\left[y^{4}_{++}\mathfrak{a}^{4}\sum_{l=0}^{i}f^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+(w_{++}\cdot\mathfrak{a})^{2}\left[p^{++,k}_{i,j}y^{2}_{++}\mathfrak{a}^{2}+r^{++,k}_{i,j}(w_{++}\cdot\mathfrak{a})^{2}\right]\right]$$ $$\displaystyle+y_{++}(w_{++}\cdot\mathfrak{a})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-2i-4}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-4-j}$$ $$\displaystyle\left.\times\left[y^{2}_{++}\mathfrak{a}^{2}\sum_{l=0}^{i}g^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+q^{++,k}_{i,j}(w_{++}\cdot\mathfrak{a})^{2}\right]\right\},$$ (48) while at $\mathcal{O}(G^{2}\mathfrak{a}^{2k+1}/b^{2k+2})$ we find 
$$\displaystyle\mathcal{D}^{++}_{\text{odd}}=|\mathfrak{a}|\sum_{k=0}^{\infty}\left\{(t_{14}-t_{13})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-2i-4}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-4-j}\right.$$ $$\displaystyle\times\left[y^{4}_{++}\mathfrak{a}^{4}\sum_{l=0}^{i}\tilde{f}^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+(w_{++}\cdot\mathfrak{a})^{2}\left[\tilde{p}^{++,k}_{i,j}y^{2}_{++}\mathfrak{a}^{2}+\tilde{r}^{++,k}_{i,j}(w_{++}\cdot\mathfrak{a})^{2}\right]\right]$$ $$\displaystyle+y_{++}(w_{++}\cdot\mathfrak{a})\sum_{i=0}^{k-2}\sum_{j=0}^{2k-2i-3}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{i}(q_{3}\cdot\mathfrak{a})^{j}(q_{4}\cdot\mathfrak{a})^{2k-2i-3-j}$$ $$\displaystyle\left.\times\left[y^{2}_{++}\mathfrak{a}^{2}\sum_{l=0}^{i}\tilde{g}^{++,k}_{i,l,j}(s_{34}\mathfrak{a}^{2})^{l}[(t_{14}-t_{13})^{2}\mathfrak{a}^{2}]^{-l}+\tilde{q}^{++,k}_{i,j}(w_{++}\cdot\mathfrak{a})^{2}\right]\right\}.$$ (49) In this final set of contact terms, the coefficients contributing at $\mathcal{O}(\hbar^{0})$ have the scalings $$\displaystyle\{f^{++,k}_{i,l,j},\tilde{f}^{++,k}_{i,l,j}\}\propto m^{-5-2(i-l)},$$ $$\displaystyle\{g^{++,k}_{i,l,j},\tilde{g}^{++,k}_{i,l,j}\}\propto m^{-3-2(i-l)},$$ $$\displaystyle\{p^{++,k}_{i,j},\tilde{p}^{++,k}_{i,j}\}\propto m^{-3-2i},$$ $$\displaystyle\{q^{++,k}_{i,j},\tilde{q}^{++,k}_{i,j},r^{++,k}_{i,j},\tilde{r}^{++,k}_{i,j}\}\propto m^{-1-2i}.$$ Crossing symmetry is satisfied if $$\displaystyle\mathcal{M}^{--}_{4}=\bar{\mathcal{M}}_{4}^{++}|_{\mathfrak{a}\rightarrow-\mathfrak{a}},\quad\mathcal{M}_{4}^{++}=\mathcal{M}_{4}^{++}|_{q_{3}\leftrightarrow q_{4}}.$$ (50) The first of these determines the amplitude with two negative-helicity gravitons, while the second constrains many of the free coefficients. 
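The mass scalings quoted for the same-helicity conservative coefficients (below eq. 47) follow from dimension counting alone. The sketch below uses our own assumed assignment of mass dimensions, suggested by the kinematics but not stated explicitly in the text: $[y_{++}]=[M]^{2}$, $[w_{++}\cdot\mathfrak{a}]=[M]^{0}$, $[\mathfrak{a}]=[M]^{-1}$ and $[s_{34}]=[t_{13}]=[t_{14}]=[M]^{2}$. Stripping the explicit $m^{2}$ of eq. 32, each contact term (structure times coefficient) must carry total mass dimension zero, so the coefficient scales as $m$ to minus the structure's dimension.

```python
# Assumed mass dimensions (powers of [M]); see the lead-in text.
Y, WA, A, MAND = 2, 0, -1, 2   # [y_++], [w_++ . a], [a], Mandelstam invariants

def dim_a(i, l):
    # a-type structure of eq. 46: y^4 a^4 (s34 a^2)^l [(t14-t13)^2 a^2]^(i-l)
    # (the b-type structure comes out with the same total dimension)
    return 4*Y + 4*A + l*(MAND + 2*A) + (i - l)*(2*MAND + 2*A)

def dim_c(i):
    # c-type structure: [(t14-t13)^2 a^2]^i (w.a)^2 y^2 a^2
    # (the d-type structure comes out with the same total dimension)
    return i*(2*MAND + 2*A) + 2*WA + 2*Y + 2*A

def dim_e(i):
    # e-type structure: [(t14-t13)^2 a^2]^i (w.a)^4
    return i*(2*MAND + 2*A) + 4*WA

# Coefficients must cancel these dimensions, reproducing the quoted
# scalings m^{-4-2(i-l)}, m^{-2-2i} and m^{-2i}:
for i in range(10):
    assert dim_c(i) == 2 + 2*i
    assert dim_e(i) == 2*i
    for l in range(i + 1):
        assert dim_a(i, l) == 4 + 2*(i - l)
```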
Tables 1, 2, 3 and 4 show the number of remaining free coefficients after requiring crossing symmetry. 5 Summary & outlook We have shed light on subtleties that must be accounted for when recursively computing classical amplitudes with virtual massive spinning particles. Specifically, BCFW recursion can produce intermediate expressions which are superclassical at leading order in $\hbar$, thus demanding that one keep track of subleading-in-$\hbar$ effects in order to completely construct classical amplitudes. Recursively constructing the classical gravitational Compton amplitude for all helicity configurations has illustrated this necessity. In the opposite-helicity case, we combined BCFW recursion with the technique presented in ref. Aoude:2022trd for removing unphysical poles to write a local classical amplitude for a general spinning compact object to all orders in its classical spin vector. When gravitons of the same helicity were scattered, we were able to write an amplitude with no unphysical poles up to fifth order in spin. Up to third order in spin, both helicity configurations agree with the results of the classical computation of ref. Saketh:2022wap . At fourth order, our results are in agreement with the amplitude derived from the action of ref. Bern:2022kto . To complete the Compton amplitude, we also counted and explicitly wrote down all independent contact terms that could potentially contribute to black-hole scattering at 2PM, including both conservative and dissipative effects. We counted the contact terms both with and without the crossing symmetry imposed in ref. Bautista:2022wjf , and found agreement with their counting in the conservative sector. In the dissipative sector, our space of contact terms contains, and is larger than, the space of contact terms needed in ref. Bautista:2022wjf to match to solutions of the Teukolsky equation. 
Our analysis has uncovered two differences between the Compton amplitude pertaining to Kerr black holes compared with general compact objects. First, the former can be constructed recursively using only leading-in-$\hbar$ information at all steps in the computation, since all $\mathcal{O}(\hbar\times 1/\hbar)$ effects cancel within each factorization channel. Second, and closer in spirit to the minimal-coupling condition of ref. Arkani-Hamed:2017jhn , the same-helicity amplitude exhibited the best massless-limit behavior above linear order in spin in the black-hole case. Above cubic order in spin, the general-object amplitude was divergent as $m\rightarrow 0$ at the spins considered, and the black-hole limit was required to quell this divergence. This makes clear the influence of minimal coupling at three points on higher-point amplitudes, and indicates a potential extension of the notion of minimal coupling to higher multiplicities. In particular, the form of the three-point amplitude in eqs. 10 and 13 hides the significance of the coefficient values $C_{S^{j}}=1$, which is elucidated again by considering the higher-point amplitude. It is conceivable, then, that considering the three-graviton-emission amplitude will suggest values for the contact-term coefficients in Section 4 that improve the massless limit of the higher-point amplitude. Such a method for assigning values to contact-term coefficients is not without its difficulties, however. The first is the likely occurrence of non-localities in higher-point amplitudes at high spin, which must be removed. Second, the non-commutativity of the classical limit with BCFW recursion means the construction of higher-point amplitudes in terms of classical quantities becomes cumbersome, necessitating tracking ever-more subleading parts of lower-point amplitudes. Finally, any effective (i.e. 
amplitudes) determination of contact-term coefficients can only be interpreted as describing Kerr black holes insofar as it matches general-relativistic computations, such as those in refs. Dolan:2008kf ; Levi:2015msa ; Saketh:2022wap ; Bautista:2022wjf . Nevertheless, it is crucial to identify as many differences as possible between black-hole and general-object amplitudes in the pursuit of an amplitudes-based understanding of Kerr black holes.

Acknowledgements. Four-vector manipulations were performed using FeynCalc MERTIG1991345 ; Shtabovenko:2016sxi ; Shtabovenko:2020gxv . I am grateful in particular to Lucile Cangemi, Henrik Johansson, and Andres Luna for very helpful discussions about this work. I would also like to thank Francesco Alessio, Rafael Aoude, Fabian Bautista, Alessandro Georgoudis, Andreas Helset, Paolo Pichini, and Justin Vines for stimulating discussions about this and related topics. Furthermore, I thank Andres Luna and Fei Teng, and Muddu Saketh and Justin Vines for sharing unpublished versions of their Compton amplitudes for comparison. For comments on the manuscript, I thank Rafael Aoude, Andreas Helset, Henrik Johansson, Jung-Wook Kim, and Andres Luna. I am grateful to Nordita for their ongoing hospitality. This work is supported by the Knut and Alice Wallenberg Foundation under grants KAW 2018.0116 (From Scattering Amplitudes to Gravitational Waves) and KAW 2018.0162.
Appendix A Spin vector conventions and properties

The ring radius $a^{\mu}$ acting on a general spin representation is related to the spin tensor $S^{\mu\nu}$ in that representation by identifying it with the Pauli-Lubanski pseudovector: $$\displaystyle a^{\mu}$$ $$\displaystyle=-\frac{1}{2m}\epsilon^{\mu\nu\alpha\beta}v_{\nu}S_{\alpha\beta}.$$ (51) This relation can be inverted to express the spin tensor in terms of the ring radius, since $v_{\mu}S^{\mu\nu}=0$: $$\displaystyle S^{\mu\nu}=-m\epsilon^{\mu\nu\alpha\beta}v_{\alpha}a_{\beta}.$$ (52) Products of spin vectors in the spin-$s$ representation are related to products in the spin-$1/2$ representation through $$\displaystyle{\left(a^{\mu_{1}}_{s}\dots a^{\mu_{n}}_{s}\right)_{\alpha_{1}\dots\alpha_{2s}}}^{\beta_{1}\dots\beta_{2s}}$$ $$\displaystyle=\frac{(2s)!}{(2s-n)!}{(a^{\mu_{1}}_{1/2})_{\alpha_{1}}}^{\beta_{1}}\dots{(a^{\mu_{n}}_{1/2})_{\alpha_{n}}}^{\beta_{n}}{\delta_{\alpha_{n+1}\dots\alpha_{2s}}}^{\beta_{n+1}\dots\beta_{2s}}+\dots,$$ (53) where the $+\dots$ represents terms which are subleading in $\hbar$. Every term at next-to-leading order in this conversion is antisymmetric in exactly two Lorentz indices. So, if the tensor contracted into this relation is totally symmetric, the subleading terms are suppressed by one more power of $\hbar$. It can be much simpler to work in the spin-$1/2$ representation since only one ring radius vector in this representation can appear between a set of spinors.
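As a sanity check of eqs. (51) and (52), the inversion can be verified numerically. The following sketch assumes a mostly-minus metric and $\epsilon^{0123}=+1$ (conventions not stated explicitly above) and confirms that substituting eq. (52) into eq. (51) returns the original ring radius whenever $v\cdot a=0$ and $v^{2}=1$:

```python
import itertools
import numpy as np

# Levi-Civita symbol with eps[0,1,2,3] = +1 (an assumed convention),
# metric mostly-minus: eta = diag(+1, -1, -1, -1).
eta = np.diag([1.0, -1.0, -1.0, -1.0])
eps = np.zeros((4, 4, 4, 4))
for perm in itertools.permutations(range(4)):
    # parity via counting inversions
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
    eps[perm] = (-1) ** inv

m = 2.0
v_up = np.array([1.0, 0.0, 0.0, 0.0])   # rest-frame velocity, v^2 = 1
a_up = np.array([0.0, 0.3, -0.7, 1.1])  # spatial ring radius, so v.a = 0

v_dn = eta @ v_up
a_dn = eta @ a_up

# eq. (52): S^{mu nu} = -m eps^{mu nu alpha beta} v_alpha a_beta
S_up = -m * np.einsum("mnab,a,b->mn", eps, v_dn, a_dn)

# eq. (51): a^mu = -(1/2m) eps^{mu nu alpha beta} v_nu S_{alpha beta}
S_dn = eta @ S_up @ eta
a_rec = -np.einsum("mnab,n,ab->m", eps, v_dn, S_dn) / (2 * m)

assert np.allclose(a_rec, a_up)           # inversion holds when v.a = 0
assert np.allclose(v_up @ eta @ S_up, 0)  # SSC: v_mu S^{mu nu} = 0
```

Because each of eqs. (51) and (52) carries a single factor of the Levi-Civita symbol, the round trip is insensitive to the overall sign convention of $\epsilon$; only the relative sign between the two equations matters.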
In the spin-1/2 representation, the ring radius acts on irreps of $SL(2,\mathbb{C})\times SL(2,\mathbb{C})$ as $$\displaystyle{(a^{\mu}_{1/2})_{\alpha}}^{\beta}$$ $$\displaystyle=\frac{1}{4m}\left[\left(\sigma^{\mu}\right)_{\alpha\dot{\alpha}}v^{\dot{\alpha}\beta}-v_{\alpha\dot{\alpha}}\left(\bar{\sigma}^{\mu}\right)^{\dot{\alpha}\beta}\right],$$ (54) $$\displaystyle{(a^{\mu}_{1/2})^{\dot{\alpha}}}_{\dot{\beta}}$$ $$\displaystyle=-\frac{1}{4m}\left[\left(\bar{\sigma}^{\mu}\right)^{\dot{\alpha}\alpha}v_{\alpha\dot{\beta}}-v^{\dot{\alpha}\alpha}\left(\sigma^{\mu}\right)_{\alpha\dot{\beta}}\right],$$ (55) which we have used to derive section 3.1 and its analog for same-helicity scattering. These relations can be extended to higher spin representations and powers by first converting the spin vectors in each little group space (i.e. on each side of the cut) to the spin-$1/2$ representation, then projecting the product of spin-$1/2$ spin vectors onto a symmetric product of spin vectors in the appropriate spin representation. For a general polarization sum we find, for example, $$\displaystyle\langle\bar{\boldsymbol{v}}v_{I}\rangle^{\odot(2s-n)}\langle\bar{\boldsymbol{v}}|a^{\mu_{1}}_{s}\dots a^{\mu_{n}}_{s}|v_{I}\rangle^{\odot n}[v^{I}\boldsymbol{v}]^{\odot(2s-k)}[v^{I}|a^{\nu_{1}}_{s}\dots a^{\nu_{k}}_{s}|\boldsymbol{v}]^{\odot k}$$ (56) $$\displaystyle=\langle\bar{\boldsymbol{v}}|^{2s}\{a^{\mu_{1}}_{s},\dots,a^{\mu_{n}}_{s},a^{\nu_{1}}_{s},\dots,a^{\nu_{k}}_{s}\}|\boldsymbol{v}\rangle^{2s}-\frac{ink}{2m^{2}}\langle\bar{\boldsymbol{v}}|^{2s}\{S^{\mu_{1}\nu_{1}}_{s},a^{\mu_{2}}_{s},\dots a^{\mu_{n}}_{s},a^{\nu_{2}}_{s},\dots,a^{\nu_{k}}_{s}\}|\boldsymbol{v}\rangle^{2s},$$ up to sub-subleading corrections in $\hbar$. We have assumed that the $\mu_{i}$ are all contracted with one four-vector and the $\nu_{i}$ with another. For fixed $n,\,k$, the first term always appears for high enough total spin $s$. 
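Eqs. (54) and (55) admit a quick rest-frame check with explicit Pauli matrices. The sketch below assumes $\sigma^{\mu}=(\mathbb{1},\vec{\sigma})$ and $\bar{\sigma}^{\mu}=(\mathbb{1},-\vec{\sigma})$ (a common two-component-spinor convention, not fixed by the text above) and verifies that for $v^{\mu}=(1,0,0,0)$ the chiral ring radius reduces to $a^{0}=0$, i.e. the SSC $v\cdot a=0$, and $a^{i}=\sigma_{i}/2m$, the spin-$1/2$ spin operator divided by the mass:

```python
import numpy as np

# Pauli matrices; sigma^mu = (1, sigma_i), sigmabar^mu = (1, -sigma_i)
# is an assumed two-component-spinor convention (mostly-minus metric).
I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sigma = [I2, sx, sy, sz]
sigmabar = [I2, -sx, -sy, -sz]

m = 2.0
# Rest frame: v_{alpha alphadot} = v_mu sigma^mu = 1 and v^{alphadot beta} = 1.
v_slash, v_slashbar = I2, I2

# eq. (54): (a^mu)_alpha^beta = (1/4m)[sigma^mu v_slashbar - v_slash sigmabar^mu]
a = [(sigma[mu] @ v_slashbar - v_slash @ sigmabar[mu]) / (4 * m) for mu in range(4)]

# a^0 = 0 (the SSC v.a = 0) and a^i = sigma_i/(2m) = S_i/m for spin 1/2.
assert np.allclose(a[0], 0)
for i, s_i in enumerate([sx, sy, sz], start=1):
    assert np.allclose(a[i], s_i / (2 * m))
```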
The numerator of the second term is determined combinatorially, simply by writing out the little group symmetrizations explicitly. For $s=1/2$ and $n=k=1$, we recover section 3.1.

Appendix B Covariantization of classical results

The authors of ref. Saketh:2022wap computed the amplitude for the scattering of a gravitational plane wave off a general compact object up to cubic order in the object’s spin vector. They expressed their amplitudes for polar scattering using the four-vectors $$\displaystyle k^{\mu},$$ $$\displaystyle\quad l^{\mu},$$ $$\displaystyle w_{S}^{\mu}$$ $$\displaystyle=\frac{1}{2\omega\cos^{2}(\theta/2)}\left[\omega(k^{\mu}+l^{\mu})-i\epsilon^{\mu\nu\alpha\beta}k_{\nu}l_{\alpha}v_{\beta}\right],$$ (57) $$\displaystyle w_{O}^{\mu}$$ $$\displaystyle=-\frac{1}{2\omega\sin^{2}(\theta/2)}\left[\omega(k^{\mu}-l^{\mu})+i\epsilon^{\mu\nu\alpha\beta}k_{\nu}l_{\alpha}v_{\beta}\right],$$ where the scattering angle of the plane wave is denoted by $\theta$. To match to $\mathcal{M}_{++}$ and $\mathcal{M}_{+-}$ in ref. Saketh:2022wap we must take $k^{\mu}=-q_{3}^{\mu}$ and $l^{\mu}=q_{4}^{\mu}$, which leads to $$\displaystyle\frac{t_{14}-t_{13}}{2}\frac{w^{\mu}}{y}$$ $$\displaystyle=-\bar{w}_{S}^{\mu}-\frac{(t_{14}-t_{13})s_{34}}{4(t_{13}t_{14}-m^{2}s_{34})}mv^{\mu}+\mathcal{O}(\hbar^{2}),$$ (58a) $$\displaystyle\frac{t_{14}-t_{13}}{2}\frac{w_{++}^{\mu}}{y_{++}}$$ $$\displaystyle=w^{\mu}_{O}+\frac{t_{14}-t_{13}}{4}\frac{v^{\mu}}{m}+\mathcal{O}(\hbar^{2}).$$ (58b) The spin-supplementary condition $v\cdot\mathfrak{a}=0$ ensures that $\bar{w}_{S}^{\mu}$ and $w_{O}^{\mu}$ encode the non-vanishing parts of the contractions of $w^{\mu}$ and $w_{++}^{\mu}$ with $\mathfrak{a}^{\mu}$.
The contractions of the spin with our helicity vectors $w^{\mu}$, $\bar{w}^{\mu}$, and $w_{++}^{\mu}$ are related by $$\displaystyle\frac{t_{14}-t_{13}}{2}\frac{w\cdot\mathfrak{a}}{y}$$ $$\displaystyle=\frac{1}{m^{2}s_{34}-t_{13}t_{14}}\left[t_{13}t_{14}(q_{4}\cdot\mathfrak{a})-m^{2}s_{34}\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right],$$ (59a) $$\displaystyle\frac{t_{14}-t_{13}}{2}\frac{\bar{w}\cdot\mathfrak{a}}{\bar{y}}$$ $$\displaystyle=\frac{1}{m^{2}s_{34}-t_{13}t_{14}}\left[-t_{13}t_{14}(q_{3}\cdot\mathfrak{a})+m^{2}s_{34}\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right],$$ (59b) which were useful in the restoration of locality to the same-helicity amplitude. For the purposes of comparison with the results in Section 3, it is useful to covariantize the amplitudes of ref. Saketh:2022wap. Accounting for the helicity weights of the amplitude, it is possible to do so uniquely in this case. Their helicity-preserving amplitude is covariantized by the expansion of section 3.1 up to third order in spin. In the helicity-reversing case, their amplitude is covariantized as follows (note that we have had to switch the sign on the exponential in eq. (5.9) of ref. Saketh:2022wap in order to obtain full agreement with our results above):
$$\displaystyle\mathcal{M}_{+-}$$ $$\displaystyle=\frac{y_{++}^{4}}{t_{13}t_{14}s_{34}}\left\{\exp\left[(q_{3}+q_{4})\cdot\mathfrak{a}\right]\right.$$ (60) $$\displaystyle+\frac{C_{S^{2}}-1}{2!}\left[[(q_{3}+q_{4})\cdot\mathfrak{a}]^{2}-\frac{s_{34}}{2}\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right]$$ $$\displaystyle+\frac{C_{S^{2}}-1}{2!}\frac{s_{34}}{4}\left[(q_{3}+q_{4})\cdot\mathfrak{a}+3(t_{14}-t_{13})\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right]\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]$$ $$\displaystyle+(C_{S^{2}}-1)^{2}\frac{s_{34}}{4}\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]$$ $$\displaystyle\left.+\frac{C_{S^{3}}-1}{3!}\left[[(q_{3}+q_{4})\cdot\mathfrak{a}]^{3}-\frac{3s_{34}}{4}\left[(q_{3}+q_{4})\cdot\mathfrak{a}+(t_{14}-t_{13})\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right]\left[\mathfrak{a}^{2}+4m^{2}\left(\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right]\right]\right\},$$ where we have used eq. 31 to write the amplitude in a manifestly local form.

Appendix C Example of removal of unphysical poles in same-helicity amplitude

We illustrate the removal of poles in $y\bar{y}=t_{13}t_{14}-m^{2}s_{34}$ from the crossing-symmetric, same-helicity Compton amplitude in eq. 29 at quadratic order in spin. The procedure is very similar at higher spins, only with more steps in the iteration.
First, the problematic part of the same-helicity amplitude at quadratic order in spin is $$\displaystyle\frac{y_{++}^{4}(C_{S^{2}}-1)}{(m^{2}s_{34}-t_{13}t_{14})^{2}}\left\{\frac{m^{4}s_{34}}{t_{13}t_{14}}\left[-\left(\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}+[(q_{3}+q_{4})\cdot\mathfrak{a}]\left(\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)\right]\right.$$ $$\displaystyle\left.+\frac{m^{4}s_{34}^{2}[(q_{3}\cdot\mathfrak{a})^{2}+(q_{4}\cdot\mathfrak{a})^{2}]+t_{13}^{2}t_{14}^{2}[(q_{3}+q_{4})\cdot\mathfrak{a}]^{2}-m^{2}s_{34}t_{13}t_{14}[3(q_{3}\cdot\mathfrak{a})^{2}+3(q_{4}\cdot\mathfrak{a})^{2}+2(q_{3}\cdot\mathfrak{a})(q_{4}\cdot\mathfrak{a})]}{2s_{34}t_{13}t_{14}}\right\},$$ (61) where we have already applied appendix B. The first step is to add non-local contact terms which convert the double pole in $m^{2}s_{34}-t_{13}t_{14}$ into a simple pole. A semi-systematic way of identifying an appropriate contact term is to swap the factors $m^{2}s_{34}\leftrightarrow t_{13}t_{14}$ in the numerator of each term individually such that the result has no poles on physical factorization channels. Doing so above leads to the boundary term $$\displaystyle\mathcal{B}^{\prime}|_{\mathfrak{a}^{2}}=-$$ $$\displaystyle\frac{m^{2}y_{++}^{4}(C_{S^{2}}-1)}{(m^{2}s_{34}-t_{13}t_{14})^{2}}\left[-\left(\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}\right.$$ $$\displaystyle\qquad\left.+[(q_{3}+q_{4})\cdot\mathfrak{a}]\left(\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)-\frac{(q_{3}\cdot\mathfrak{a})^{2}+(q_{4}\cdot\mathfrak{a})^{2}}{2}\right],$$ (62) which can be seen to have no poles on physical factorization channels, and can thus be freely added to the amplitude without affecting its residues on physical poles. 
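The factor swap $m^{2}s_{34}\leftrightarrow t_{13}t_{14}$ can be seen to soften the double pole in a stripped-down toy example. The sympy sketch below, with $u$ and $w$ standing in for $m^{2}s_{34}$ and $t_{13}t_{14}$ and $A$ for a generic spin structure (all names are illustrative), shows that a numerator term proportional to $u$, combined with its swapped partner taken with a relative minus sign, leaves only a simple pole in $u-w$:

```python
import sympy as sp

# u ~ m^2 s34, w ~ t13 t14, A ~ generic spin structure (illustrative symbols)
u, w, A = sp.symbols("u w A")

amp = A * u / (u - w) ** 2        # double-pole piece of the amplitude
boundary = -A * w / (u - w) ** 2  # numerator with u <-> w swapped, relative minus sign

# Adding the boundary term cancels one power of the unphysical denominator.
combined = sp.cancel(amp + boundary)
assert sp.simplify(combined - A / (u - w)) == 0  # only a simple pole remains
```

The boundary term has no poles on physical factorization channels in the full amplitude (those live in $t_{13}$, $t_{14}$, $s_{34}$ individually), which is why it can be added freely, exactly as in eq. (62).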
The result of adding the two is $$\displaystyle\frac{y_{++}^{4}(C_{S^{2}}-1)}{m^{2}s_{34}-t_{13}t_{14}}\left\{\frac{m^{2}}{t_{13}t_{14}}\left[-\left(\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)^{2}+[(q_{3}+q_{4})\cdot\mathfrak{a}]\left(\frac{t_{14}-t_{13}}{2}\frac{w_{++}\cdot\mathfrak{a}}{y_{++}}\right)\right]\right.$$ $$\displaystyle\left.+\frac{m^{2}s_{34}[(q_{3}\cdot\mathfrak{a})^{2}+(q_{4}\cdot\mathfrak{a})^{2}]-t_{13}t_{14}[(q_{3}+q_{4})\cdot\mathfrak{a}]^{2}}{2s_{34}t_{13}t_{14}}\right\}.$$ (63) The double pole has thus been reduced to a simple pole. A common feature of the analysis at the spins considered is that, once the non-locality has been reduced to a simple pole, there are no longer enough Mandelstam variables in the numerator to identify a suitable boundary contribution in the way described above. We got around this by employing the Gram determinant in eq. 31 to eliminate all powers of $(q_{3}\cdot\mathfrak{a})^{i}(q_{4}\cdot\mathfrak{a})^{j}$ in the amplitude, which has generally allowed us to construct a final boundary contribution. In this simple case, however, we do not need this step. Employing the Gram determinant to remove instead the term linear in $w_{++}\cdot\mathfrak{a}$ reveals that the remaining unphysical pole is spurious, and lands us on the result in eq. 60.

References

(1) LIGO Scientific, Virgo collaboration, Observation of Gravitational Waves from a Binary Black Hole Merger, Phys. Rev. Lett. 116 (2016) 061102 [1602.03837]. (2) LIGO Scientific, Virgo collaboration, GW170817: Observation of Gravitational Waves from a Binary Neutron Star Inspiral, Phys. Rev. Lett. 119 (2017) 161101 [1710.05832]. (3) LIGO Scientific, Virgo collaboration, GWTC-1: A Gravitational-Wave Transient Catalog of Compact Binary Mergers Observed by LIGO and Virgo during the First and Second Observing Runs, Phys. Rev. X 9 (2019) 031040 [1811.12907]. (4) C. Cheung, I. Z. Rothstein and M. P.
Solon, From Scattering Amplitudes to Classical Potentials in the Post-Minkowskian Expansion, Phys. Rev. Lett. 121 (2018) 251101 [1808.02489]. (5) Z. Bern, C. Cheung, R. Roiban, C.-H. Shen, M. P. Solon and M. Zeng, Scattering Amplitudes and the Conservative Hamiltonian for Binary Systems at Third Post-Minkowskian Order, Phys. Rev. Lett. 122 (2019) 201603 [1901.04424]. (6) Z. Bern, C. Cheung, R. Roiban, C.-H. Shen, M. P. Solon and M. Zeng, Black Hole Binary Dynamics from the Double Copy and Effective Theory, JHEP 10 (2019) 206 [1908.01493]. (7) C. Cheung and M. P. Solon, Classical gravitational scattering at $\mathcal{O}$(G${}^{3}$) from Feynman diagrams, JHEP 06 (2020) 144 [2003.08351]. (8) G. Kälin, Z. Liu and R. A. Porto, Conservative Dynamics of Binary Systems to Third Post-Minkowskian Order from the Effective Field Theory Approach, Phys. Rev. Lett. 125 (2020) 261103 [2007.04977]. (9) Z. Bern, J. Parra-Martinez, R. Roiban, M. S. Ruf, C.-H. Shen, M. P. Solon et al., Scattering Amplitudes and Conservative Binary Dynamics at ${\cal O}(G^{4})$, Phys. Rev. Lett. 126 (2021) 171601 [2101.07254]. (10) C. Dlapa, G. Kälin, Z. Liu and R. A. Porto, Dynamics of binary systems to fourth Post-Minkowskian order from the effective field theory approach, Phys. Lett. B 831 (2022) 137203 [2106.08276]. (11) Z. Bern, J. Parra-Martinez, R. Roiban, M. S. Ruf, C.-H. Shen, M. P. Solon et al., Scattering Amplitudes, the Tail Effect, and Conservative Binary Dynamics at O(G4), Phys. Rev. Lett. 128 (2022) 161103 [2112.10750]. (12) C. Dlapa, G. Kälin, Z. Liu and R. A. Porto, Conservative Dynamics of Binary Systems at Fourth Post-Minkowskian Order in the Large-Eccentricity Expansion, Phys. Rev. Lett. 128 (2022) 161104 [2112.11296]. (13) G. U. Jakobsen and G. Mogull, Conservative and Radiative Dynamics of Spinning Bodies at Third Post-Minkowskian Order Using Worldline Quantum Field Theory, Phys. Rev. Lett. 128 (2022) 141102 [2201.07778]. (14) A. Guevara, A. Ochirov and J. 
Vines, Scattering of Spinning Black Holes from Exponentiated Soft Factors, JHEP 09 (2019) 056 [1812.06895]. (15) M.-Z. Chung, Y.-T. Huang, J.-W. Kim and S. Lee, The simplest massive S-matrix: from minimal coupling to Black Holes, JHEP 04 (2019) 156 [1812.08752]. (16) B. Maybee, D. O’Connell and J. Vines, Observables and amplitudes for spinning particles and black holes, JHEP 12 (2019) 156 [1906.09260]. (17) A. Guevara, A. Ochirov and J. Vines, Black-hole scattering with general spin directions from minimal-coupling amplitudes, Phys. Rev. D 100 (2019) 104024 [1906.10071]. (18) P. H. Damgaard, K. Haddad and A. Helset, Heavy Black Hole Effective Theory, JHEP 11 (2019) 070 [1908.10308]. (19) R. Aoude, K. Haddad and A. Helset, On-shell heavy particle effective theories, JHEP 05 (2020) 051 [2001.09164]. (20) Z. Bern, A. Luna, R. Roiban, C.-H. Shen and M. Zeng, Spinning black hole binary dynamics, scattering amplitudes, and effective field theory, Phys. Rev. D 104 (2021) 065014 [2005.03071]. (21) Z. Liu, R. A. Porto and Z. Yang, Spin Effects in the Effective Field Theory Approach to Post-Minkowskian Conservative Dynamics, JHEP 06 (2021) 012 [2102.10059]. (22) D. Kosmopoulos and A. Luna, Quadratic-in-spin Hamiltonian at $\mathcal{O}$(G${}^{2}$) from scattering amplitudes, JHEP 07 (2021) 037 [2102.10137]. (23) G. U. Jakobsen, G. Mogull, J. Plefka and J. Steinhoff, Gravitational Bremsstrahlung and Hidden Supersymmetry of Spinning Bodies, Phys. Rev. Lett. 128 (2022) 011101 [2106.10256]. (24) G. U. Jakobsen, G. Mogull, J. Plefka and J. Steinhoff, SUSY in the sky with gravitons, JHEP 01 (2022) 027 [2109.04465]. (25) W.-M. Chen, M.-Z. Chung, Y.-t. Huang and J.-W. Kim, The 2PM Hamiltonian for binary Kerr to quartic in spin, JHEP 08 (2022) 148 [2111.13639]. (26) R. Aoude, K. Haddad and A. Helset, Searching for Kerr in the 2PM amplitude, JHEP 07 (2022) 072 [2203.06197]. (27) Z. Bern, D. Kosmopoulos, A. Luna, R. Roiban and F. 
Teng, Binary Dynamics Through the Fifth Power of Spin at $\mathcal{O}(G^{2})$, 2203.06202. (28) R. Aoude, K. Haddad and A. Helset, Classical Gravitational Spinning-Spinless Scattering at $\mathcal{O}(G^{2}S^{\infty})$, Phys. Rev. Lett. 129 (2022) 141102 [2205.02809]. (29) F. Febres Cordero, M. Kraus, G. Lin, M. S. Ruf and M. Zeng, Conservative Binary Dynamics with a Spinning Black Hole at O(G3) from Scattering Amplitudes, Phys. Rev. Lett. 130 (2023) 021601 [2205.07357]. (30) M. Accettulli Huber, A. Brandhuber, S. De Angelis and G. Travaglini, From amplitudes to gravitational radiation with cubic interactions and tidal effects, Phys. Rev. D 103 (2021) 045015 [2012.06548]. (31) E. Herrmann, J. Parra-Martinez, M. S. Ruf and M. Zeng, Gravitational Bremsstrahlung from Reverse Unitarity, Phys. Rev. Lett. 126 (2021) 201602 [2101.07255]. (32) P. Di Vecchia, C. Heissenberg, R. Russo and G. Veneziano, The eikonal approach to gravitational scattering and radiation at $\mathcal{O}$(G${}^{3}$), JHEP 07 (2021) 169 [2104.03256]. (33) E. Herrmann, J. Parra-Martinez, M. S. Ruf and M. Zeng, Radiative classical gravitational observables at $\mathcal{O}$(G${}^{3}$) from scattering amplitudes, JHEP 10 (2021) 148 [2104.03957]. (34) N. E. J. Bjerrum-Bohr, P. H. Damgaard, L. Planté and P. Vanhove, The amplitude for classical gravitational scattering at third Post-Minkowskian order, JHEP 08 (2021) 172 [2105.05218]. (35) F. Alessio and P. Di Vecchia, Radiation reaction for spinning black-hole scattering, Phys. Lett. B 832 (2022) 137258 [2203.13272]. (36) G. U. Jakobsen, G. Mogull, J. Plefka and B. Sauer, All things retarded: radiation-reaction in worldline quantum field theory, JHEP 10 (2022) 128 [2207.00569]. (37) G. U. Jakobsen and G. Mogull, Linear response, Hamiltonian, and radiative spinning two-body dynamics, Phys. Rev. D 107 (2023) 044033 [2210.06451]. (38) C. Cheung and M. P. Solon, Tidal Effects in the Post-Minkowskian Expansion, Phys. Rev. Lett. 125 (2020) 191601 [2006.06665]. 
(39) K. Haddad and A. Helset, Tidal effects in quantum field theory, JHEP 12 (2020) 024 [2008.04920]. (40) G. Kälin, Z. Liu and R. A. Porto, Conservative Tidal Effects in Compact Binary Systems to Next-to-Leading Post-Minkowskian Order, Phys. Rev. D 102 (2020) 124025 [2008.06047]. (41) Z. Bern, J. Parra-Martinez, R. Roiban, E. Sawyer and C.-H. Shen, Leading Nonlinear Tidal Effects and Scattering Amplitudes, JHEP 05 (2021) 188 [2010.08559]. (42) C. Cheung, N. Shah and M. P. Solon, Mining the Geodesic Equation for Scattering Data, Phys. Rev. D 103 (2021) 024030 [2010.08568]. (43) R. Aoude, K. Haddad and A. Helset, Tidal effects for spinning particles, JHEP 03 (2021) 097 [2012.05256]. (44) C. Heissenberg, Angular Momentum Loss Due to Tidal Effects in the Post-Minkowskian Expansion, 2210.15689. (45) D. A. Kosower, B. Maybee and D. O’Connell, Amplitudes, Observables, and Classical Scattering, JHEP 02 (2019) 137 [1811.10950]. (46) A. Cristofoli, N. E. J. Bjerrum-Bohr, P. H. Damgaard and P. Vanhove, Post-Minkowskian Hamiltonians in general relativity, Phys. Rev. D 100 (2019) 084040 [1906.01579]. (47) G. Kälin and R. A. Porto, From Boundary Data to Bound States, JHEP 01 (2020) 072 [1910.03008]. (48) N. E. J. Bjerrum-Bohr, A. Cristofoli and P. H. Damgaard, Post-Minkowskian Scattering Angle in Einstein Gravity, JHEP 08 (2020) 038 [1910.09366]. (49) G. Kälin and R. A. Porto, From boundary data to bound states. Part II. Scattering angle to dynamical invariants (with twist), JHEP 02 (2020) 120 [1911.09130]. (50) A. Cristofoli, R. Gonzo, D. A. Kosower and D. O’Connell, Waveforms from amplitudes, Phys. Rev. D 106 (2022) 056007 [2107.10193]. (51) Y. F. Bautista, A. Guevara, C. Kavanagh and J. Vines, From Scattering in Black Hole Backgrounds to Higher-Spin Amplitudes: Part I, 2107.10179. (52) R. Aoude and A. Ochirov, Classical observables from coherent-spin amplitudes, JHEP 10 (2021) 008 [2108.01649]. (53) G. Cho, G. Kälin and R. A. Porto, From boundary data to bound states. 
Part III. Radiative effects, JHEP 04 (2022) 154 [2112.03976]. (54) T. Adamo, A. Cristofoli and A. Ilderton, Classical physics from amplitudes on curved backgrounds, JHEP 08 (2022) 281 [2203.13785]. (55) Y. F. Bautista, A. Guevara, C. Kavanagh and J. Vines, Scattering in Black Hole Backgrounds and Higher-Spin Amplitudes: Part II, 2212.07965. (56) A. Brandhuber, G. Chen, G. Travaglini and C. Wen, A new gauge-invariant double copy for heavy-mass effective theory, JHEP 07 (2021) 047 [2104.11206]. (57) A. Brandhuber, G. Chen, G. Travaglini and C. Wen, Classical gravitational scattering from a gauge-invariant double copy, JHEP 10 (2021) 118 [2108.04216]. (58) N. E. J. Bjerrum-Bohr, G. Chen and M. Skowronek, Classical Spin Gravitational Compton Scattering, 2302.00498. (59) G. Mogull, J. Plefka and J. Steinhoff, Classical black hole scattering from a worldline quantum field theory, JHEP 02 (2021) 048 [2010.02865]. (60) R. Britto, F. Cachazo and B. Feng, New recursion relations for tree amplitudes of gluons, Nucl. Phys. B 715 (2005) 499 [hep-th/0412308]. (61) R. Britto, F. Cachazo, B. Feng and E. Witten, Direct proof of tree-level recursion relation in Yang-Mills theory, Phys. Rev. Lett. 94 (2005) 181602 [hep-th/0501052]. (62) N. Arkani-Hamed, T.-C. Huang and Y.-t. Huang, Scattering amplitudes for all masses and spins, JHEP 11 (2021) 070 [1709.04891]. (63) A. Falkowski and C. S. Machado, Soft Matters, or the Recursions with Massive Spinors, JHEP 05 (2021) 238 [2005.08981]. (64) M. Chiodaroli, H. Johansson and P. Pichini, Compton black-hole scattering for s $\leq$ 5/2, JHEP 02 (2022) 156 [2107.14779]. (65) L. Cangemi, M. Chiodaroli, H. Johansson, A. Ochirov, P. Pichini and E. Skvortsov, Kerr Black Holes Enjoy Massive Higher-Spin Gauge Symmetry, 2212.06120. (66) M. V. S. Saketh and J. Vines, Scattering of gravitational waves off spinning compact objects with an effective worldline theory, 2208.03170. (67) M. Levi and J. 
Steinhoff, Spinning gravitating objects in the effective field theory in the post-Newtonian scheme, JHEP 09 (2015) 219 [1501.04956]. (68) N. Siemonsen and J. Vines, Test black holes, scattering amplitudes and perturbations of Kerr spacetime, Phys. Rev. D 101 (2020) 064066 [1909.07361]. (69) J.-W. Kim and J. Steinhoff, Spin supplementary condition in quantum field theory, Part I : covariant SSC and physical state projection, 2302.01944. (70) E. Conde, E. Joung and K. Mkrtchyan, Spinor-Helicity Three-Point Amplitudes from Local Cubic Interactions, JHEP 08 (2016) 040 [1605.07402]. (71) E. Conde and A. Marzolla, Lorentz Constraints on Massive Three-Point Amplitudes, JHEP 09 (2016) 041 [1601.08113]. (72) E. P. Wigner, On Unitary Representations of the Inhomogeneous Lorentz Group, Annals Math. 40 (1939) 149. (73) V. Bargmann and E. P. Wigner, Group Theoretical Discussion of Relativistic Wave Equations, Proc. Nat. Acad. Sci. 34 (1948) 211. (74) S. Weinberg, RELATIVISTIC QUANTUM MECHANICS, vol. 1, p. 49–106. Cambridge University Press, 1995. 10.1017/CBO9781139644167.004. (75) L. Cangemi and P. Pichini, Classical Limit of Higher-Spin String Amplitudes, 2207.03947. (76) J. Vines, Scattering of two spinning black holes in post-Minkowskian gravity, to all orders in spin, and effective-one-body mappings, Class. Quant. Grav. 35 (2018) 084002 [1709.06016]. (77) M. J. Dugan, M. Golden and B. Grinstein, On the hilbert space of the heavy quark effective theory, Physics Letters B 282 (1992) 142. (78) M. E. Luke and A. V. Manohar, Reparametrization invariance constraints on heavy particle effective field theories, Phys. Lett. B 286 (1992) 348 [hep-ph/9205228]. (79) S. R. Dolan, Scattering and Absorption of Gravitational Plane Waves by Rotating Black Holes, Class. Quant. Grav. 25 (2008) 235002 [0801.3805]. (80) H. Johansson and A. Ochirov, Double copy for massive quantum particles with spin, JHEP 09 (2019) 040 [1906.12292]. (81) W.-M. Chen, M.-Z. Chung, Y.-t. Huang and J.-W. 
Kim, Lense-Thirring effects from on-shell amplitudes, 2205.07305. (82) M. Levi, R. Morales and Z. Yin, From the EFT of Spinning Gravitating Objects to Poincaré and Gauge Invariance, 2210.17538. (83) M. Levi and Z. Yin, Completing the Fifth PN Precision Frontier via the EFT of Spinning Gravitating Objects, 2211.14018. (84) R. Mertig, M. Böhm and A. Denner, Feyn calc - computer-algebraic calculation of feynman amplitudes, Computer Physics Communications 64 (1991) 345. (85) V. Shtabovenko, R. Mertig and F. Orellana, New Developments in FeynCalc 9.0, Comput. Phys. Commun. 207 (2016) 432 [1601.01167]. (86) V. Shtabovenko, R. Mertig and F. Orellana, FeynCalc 9.3: New features and improvements, Comput. Phys. Commun. 256 (2020) 107478 [2001.04407].